Is the IP Bill holed below the water line?

The big privacy and policy story of the day in the UK is the publication of the Intelligence and Security Committee’s report on the draft Investigatory Powers Bill – which is currently being pushed through an abbreviated parliamentary process.

The Bill’s authors get a rough ride from the committee. This is from the ISC Chairman’s covering press release:

“Taken as a whole, the draft Bill fails to deliver the clarity that is so badly needed in this area. The issues under consideration are undoubtedly complex, however it has been evident that even those working on the legislation have not always been clear as to what the provisions are intended to achieve. The draft Bill appears to have suffered from a lack of sufficient time and preparation.”

Here’s the ISC’s site with a link to the report itself:

http://isc.independent.gov.uk/news-archive/9february2016

And here are a couple of good, short pieces of analysis from reliable tech/policy commentators:

Ian Dunt (politics.co.uk)

http://www.politics.co.uk/blogs/2016/02/09/bad-day-for-britain-s-spies-as-intelligence-watchdog-grows-t

Glyn Moody (arstechnica.co.uk)

http://arstechnica.co.uk/tech-policy/2016/02/previously-tame-uk-parliament-watchdog-rips-into-new-snoopers-charter/

The Bill is criticised in almost every respect:

  • it doesn’t achieve its stated goal of bringing all the interception powers into a single statutory instrument;
  • it fails to bring clarity to the purpose and goals of the policy it embodies;
  • it does not include operational justifications, without which parliament cannot decide on its adequacy – and will not include them until after it is expected to be passed into law;
  • its provisions for privacy protection are piecemeal and unclear;
  • the safeguards applied to the use of communications data are “inconsistent and largely incomprehensible”.

In other words, it has been drafted in haste, by people some of whom don’t know – or can’t articulate – what it is supposed to do. As a result, it is confusing and grants over-broad powers with insufficient safeguards.

If the Bill were to be passed as is, the ISC’s report would offer a ready supply of ammunition to anyone seeking to challenge it on grounds of necessity, proportionality and legal certainty.

For the ISC’s report to be so frankly critical is somewhat unexpected. Under its previous chairman, the committee said little, and what little it did say consisted of bland reassurances that the security and intelligence services were doing a fine job. (See “pelted with marshmallows”, from just over two years ago…).

This Bill has been rushed through an abbreviated consultation period: the Home Secretary used the November Paris attacks to justify shortening the normal parliamentary process. The Bill’s consultation committee was given about 3 weeks of parliamentary time to conduct its expert witness hearings and consider any written evidence submitted, either side of the Christmas/New Year parliamentary recess. It is due to publish its own report on Thursday.

This puts the consultation committee in an interesting position. If its report is less critical than that of the Intelligence and Security Committee (which is, after all, the specialist in this area), its credibility will be called into question.  If its report is equally critical, the Bill itself will be even more deeply discredited.

The IP Bill: a repeat offence.

The current UK government in general, and the Home Secretary in particular, is beginning to exhibit a ‘repeat offending’ pattern. Here’s how it goes:

  1. Propose more intrusive powers of surveillance, data collection and retention;
  2. Fail to convince relevant stakeholders (coalition partners, electorate, courts);
  3. Spot a convenient bandwagon onto which to hitch the policy;
  4. Using (3), short-cut the parliamentary process to push legislation through without proper scrutiny.

In some instances there is a Step 5, at which the legislation is ruled disproportionate, struck down, etc., but paradoxically that is then used as another reason to rush further knee-jerk legislation through with inadequate parliamentary scrutiny.

So, for instance, the DRIP Bill – rammed through parliament in a shamelessly farcical eight days – was announced as an “emergency” measure only because the government had ignored, over a period of months, the fact that international bodies from the UN to the European Parliament and, finally and critically, the European Court of Justice had looked at what it was doing and found that it clearly violated the principles of necessity and proportionality.

However, the next piece of ‘repeat offender’ behaviour is that this government will on no account stop doing what it wants to do simply because that has been ruled illegal. Instead, it will change the law to make what it wants to do legal.

It will frame its legislative ambitions in terms of supposedly incontrovertible case studies: anyone who objects to the “itemised phone bill” Theresa May is asking for would clearly prefer babies to die, terrorists to win, and civilisation to crumble into ruins. This isn’t about anything as petty as your personal privacy; it’s about the very safety of society as we know it.

There are two small problems with the Home Secretary’s approach.

First, if this issue is as existentially important as she makes out, how dare she deny the IP Bill proper parliamentary scrutiny? There can only be one reason to push a 300-page, technical Bill through in three weeks, and that is to stop anyone understanding its contents.

Second, the approach she is asking for demonstrably has not worked up to now, and cannot work in future. US intelligence services say they are “drowning in data”; what they already collect is beyond their capacity to process usefully. But the Home Secretary wants her agencies to have more. The very Paris attacks which she cites are a stark reminder that intrusive online surveillance (of the very kinds France has enacted) does not stop committed attackers, even if those attackers communicate in the clear via SMS and social media sites.

“Legislate in haste, repent at leisure”, they say. But the IP Bill isn’t even bad because it’s done in haste. It’s just the government’s latest attempt to get its pet surveillance project onto the books; it’s premeditatedly bad, and this government shows every sign of being incapable of repentance.

The tragedy is, this time they might just get away with it.

MPs successfully challenge UK government in High Court

Two MPs, David Davis (Conservative) and Tom Watson (Labour), with human rights advocacy group Liberty, have won a challenge in the High Court against the UK’s data retention law (DRIPA). The ruling finds against the UK law on several grounds, including incompatibility with the right to privacy, and a lack of clarity in the rules applying to access requests. Since the current rules allow a very wide range of public authorities to request communications data, and to do so without independent approval, there is a clear implication that the UK’s approach to data retention fails the proportionality test.

It’s worth stressing just how exceptional it is for members of parliament to challenge primary legislation in court, let alone to win. After all, the first place MPs should challenge primary legislation is in the parliament to which they have been elected. But DRIPA’s passage through parliament was not normal. It was rushed through with a single day’s debate – essentially ensuring that it could not receive proper parliamentary scrutiny.

The government tried to explain away this haste by claiming that the bill was “emergency legislation”, brought in because the prior legislation had, in turn, been ruled non-compliant with EU law. That ruling was itself delivered well in advance of the presentation of DRIPA to the House: there was no excuse for the UK government to have failed to foresee that requirement.

According to the Security Minister, John Hayes, opposition to the government’s data retention plans comes from a “paranoid liberal bourgeoisie”, intent on hamstringing law enforcement even if it means criminals and terrorists run amok. Of course, the accusation of paranoia might carry more weight if every “paranoid” prediction about the government’s use and abuse of surveillance powers hadn’t been accurately made by the late Caspar Bowden and amply proved by the Snowden disclosures.

Mr Hayes says the government ‘disagrees absolutely’ with this judgment and fully intends to appeal against it. In other words, despite having had DRIPA and its predecessor struck down, the government’s view is that no change of course is needed, and that it can keep the same data retention powers as long as it goes on saying that’s what it wants.

There’s an old quip, isn’t there, which defines insanity as repeating the same actions while expecting a different outcome. Far be it from me to accuse the current government of insanity, but I have to wonder how rational it is to expect an appeal to succeed.

Caspar Bowden

Caspar died today. Three simple words that encompass so much.

Twitter has, understandably and gratifyingly, lit up with messages of condolence, admiration and regret – and so it should. The privacy advocacy community has lost one of its most capable members, and feels that loss keenly. But we are all affected by the issues on which Caspar fought: the regulation and availability of cryptography, the interception of communications, the governance of surveillance, the equal application of human rights. There isn’t a person on the planet whose interests are unaffected by these issues, and we are worse off for losing Caspar’s efforts on our behalf.

Let me give three little snapshots from my memories of Caspar.

The first is from a meeting of the Enterprise Privacy Group, in about 2007. Two or three of us had just arrived and were clustered near the coffee table, talking about ethics and data protection. Caspar arrived, and was pouring himself a cup of coffee a few feet away from our group. He carried it over to us and, as he arrived, said

“Well, I favour a Rawlsian model – because unless you can point to a basis in justice, none of the other approaches can work anyway.”

It wasn’t said as a put-down or out of intellectual snobbery; he had simply grasped the essentials of our conversation within the time it took to pour a cup of coffee, and was contributing his conclusion. Of course, the rest of us had to sneak off later and Google “Rawls on Justice” to refresh our memories <cough>, but the truth of it is, in the work I’ve done on ethical data-handling in the last couple of years, I have come back, time and again, to the principles Caspar set out in that conversation some 8 years ago.

There was mischief in Caspar too, and my second snapshot is from Berlin, where Caspar and I were among the invited participants in an Article 29 Working Party meeting. As ever, a group beer was proposed for the evening, and we all piled into taxis to get to an otherwise unremarkable Bierhof somewhere outside the city. I found it hard to believe that we couldn’t have found somewhere just as good in town, and grumbled something to Caspar along those lines. He, of course, knew exactly where we were, and said “Yes – but there’s a certain irony, isn’t there, in drinking beer with a group of European Data Protection Supervisors, right next to the Wannsee”.

The third is from QCon 2014, where Caspar gave a talk on “Mistrusting Trust”. The talk was excellent, and you can replay it here… but my abiding memory wasn’t that; it was that about 4 minutes in (as you’ll see if you watch the video), Caspar’s laptop reset, and threatened to melt down. It was overheating partly because it was a bit old, but partly because Caspar was running Qubes on it, so as to be able to boot a virtual OS in order to show his slides. Now, lots of privacy advocates love the idea of only using virtual machines, and killing them in between sessions of browsing, email and whatever… but an awful lot of us can’t be bothered to go to the inconvenience, when it comes down to it. Caspar could – even if it almost reduced his laptop to a pool of smoking black plastic in front of an audience.

To say that people sacrifice privacy for convenience is such a cliché most of us don’t even reflect on it any more. But Caspar’s example should remind us that too many of us sacrifice more than just privacy for the sake of a little convenience. Caspar had always done his homework. How many of us hold forth on the laws governing data protection, interception, surveillance and so on without actually having read them all the way through? I know I’m guilty on that score – but Caspar had read them – and not just for one country, but for the UK, and the US, and the EU, and France, and so on. How many of us read them and then forget the detail, or don’t bother to really think through the implications, and publish our analysis, and fight for how we think it should be?

And of course it wasn’t just the law. As his comment on Rawls indicates, Caspar was scarily well read in all kinds of areas, and he had a level of recall which many policymakers had occasion to find embarrassing.

Yes, he could be abrupt, and yes, he often ‘bent’ convention by asking direct and probing questions in ways that risked alienating the policymakers he sought to influence. But I never saw him do so rudely, inappropriately, or in a way that demonstrated anything less than total integrity. That took strong moral principles, intellectual rigour, and courage.

Caspar – thank you for your dedication; we’re worse off without you, and we’ll miss you. And I wish I’d said that to you before you died.

Speaking of trust…

I received a letter recently from (apparently) NHS England. In it, they require me to confirm my contact details to them, or face being removed from the patient list of my local GP (whose patient I have been for the last 30 years… in fact, his surgery has changed address more frequently than I have in that period).

In the letter, NHS England note that GP practices need accurate contact details in order to be able to contact me to arrange appointments and vaccinations, report test results, arrange the details of long-term care for chronic illness, or enable a hospital to write to me. Those are all true – but all those needs can be satisfied by my confirming my address details to the practice itself. There is nothing in their letter which convinces me to give my personal data to a third party – except their threat to have me de-listed if I fail to do so. “If you do not respond within four weeks from the date of this letter then we will assume that you have moved away”, they say. Which is odd, since I had a GP appointment a couple of months ago.

When I mentioned this on Twitter, the responses were interesting. Some people simply assumed that the letter was a phishing attempt, and advised me to ignore it… which is an interesting trust problem for the NHS to consider. Others suggested that there is a second purpose for NHS England to collect the data, which is to do with the way in which they fund GP practices. If that were the case, you might expect it to appear among their stated purposes for collecting the personal data, in the letter on the basis of which you are expected to confirm your details. It does not – though if you visit the NHS England Shared Business Services website and go to their FAQ, you will find it mentioned. Again, if you go to their website, you will find some explanation of the role of Shared Business Services in relation to GP surgeries… but “Shared Business Services” appears nowhere on the letter, which mentions only “NHS England”. To all appearances, this is a third party organisation asking for my personal data.

Taking a hard-hearted financial view, you might say that it’s in their financial interest (and therefore, indirectly, in mine) to ensure that they aren’t paying GPs for patients who don’t exist. But there are some flaws in that argument:

  • I’ve seen my GP within the last 3 months. It’s therefore unlikely that I have moved out of the area. There must be other patients whose records indicate that they are more likely to have gone away; if NHS England are going to incur the cost of writing to anyone, shouldn’t they prioritise apparently “dormant” patients over recent, “active” ones?
  • In the short term, NHS England might indeed save some money by finding a pretext (i.e. my non-response) to remove me from the paid-for headcount at my GP’s surgery. However, since I am a genuine, current patient, if I were de-listed I would have to apply to be re-listed, which would incur extra, unnecessary administrative cost.

But, economics aside, isn’t there a law against collecting data for one purpose, and using it for something else? I rather thought that was what the Data Protection Act was for… but then again, the NHS’ recent history with regard to data protection is tarnished, to say the least. For instance, the HSCIC recently admitted that it simply ignored patients’ requests to opt out of the care.data scheme, which they had designed as “opted-in by default” in the first place. What’s more, it emerges that care.data cannot distinguish between patients wishing to opt out of having their data shared with third parties, and patients not wishing to opt out of services such as referrals or e-prescribing.
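
To make that distinction concrete, here is a minimal sketch in Python of what keeping the two opt-outs separate might look like. It is purely illustrative: the record structure, field names and helper functions are my own invention, and bear no relation to how care.data actually stores consent.

```python
from dataclasses import dataclass

@dataclass
class ConsentRecord:
    """Hypothetical per-patient consent flags (illustrative only).

    The point is simply that the two opt-outs are distinct, and must be
    stored and processed separately.
    """
    patient_id: str
    opt_out_third_party_sharing: bool = False   # e.g. sharing with researchers or commercial bodies
    opt_out_direct_care_services: bool = False  # e.g. referrals, e-prescribing

def may_share_with_third_party(record: ConsentRecord) -> bool:
    # Sharing beyond direct care is governed by the third-party flag only.
    return not record.opt_out_third_party_sharing

def may_use_for_direct_care(record: ConsentRecord) -> bool:
    # Direct-care services are governed by the other flag, not conflated with sharing.
    return not record.opt_out_direct_care_services

# A patient who refuses third-party sharing but still wants referrals to work:
patient = ConsentRecord("nhs-0000000000", opt_out_third_party_sharing=True)
assert may_share_with_third_party(patient) is False
assert may_use_for_direct_care(patient) is True
```

Nothing in that sketch is difficult; the failure described above is not really a technical one, so much as a failure to decide, up front, that the distinction matters.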

So let’s recap the various failure modes illustrated by these two instances:

  • Designing a system as “opted-in by default”, despite the fact that it involves data-sharing with commercial and public-sector third parties, and that it processes sensitive personal information;
  • Failing to act on opt-out requests;
  • Offering different types of opt-out (good), but then failing to process them accurately (bad);
  • Dubious prioritisation of administrative effort;
  • Lack of transparency about purpose of collection;
  • Threat of de-listing for non-compliance.

Just take a look at those, and ask if they would be acceptable in clinical/surgical practice, as opposed to health service administration.

It seems to me that we have failures, here, of design, technical implementation, governance, transparency and consent – and all of those erode patients’ trust. Public sector bodies worldwide are desperate to reap the benefits of digital infrastructure, and rightly so: it’s in our interest as citizens that our public sector services should be efficient, cost-effective and technically up to date. But data controllers must show that they are safe and worthy custodians of citizens’ personal data, and that requires a far more mature approach to transparency, informed consent, and genuine opt-out. Stampeding headlong in pursuit of the potential financial benefits of innovation, while ignoring the concomitant responsibilities, is irresponsible and immature.

The NHS is in its 60s; if we are to trust it to exploit technology effectively, in our interest and on our behalf, it needs to grow up.

What is trust?

This question either comes up explicitly in discussions of privacy and identity, or lurks unspoken beneath the surface. We talk a lot about online trust, and only occasionally does someone stop and ask if we’ve defined just what it is that we’re talking about.

Some people reply, at that point, that trying to define trust is a rat-hole/red herring/exercise in yak-shaving, but I am less pessimistic. Here’s a simple definition of trust which I think is flexible enough to serve as a basis for productive discussion, while still covering the essentials. Of course, if you disagree, I’d welcome comments!

“Trust is the belief that someone else will act in your interests, even if they have the opportunity and the motivation to do otherwise.”

It’s a belief, and like any belief, it may be well- or ill-founded. You may be mistaken or misinformed, or the other party may be deceiving you.

As I say, it’s not necessarily a perfect definition, but it has worked for me so far.

Any thoughts?

Can a machine care about privacy?

[I originally wrote this post for the Internet Society blog, here. However, I thought it worth re-publishing here, since the two blogs probably have different (if overlapping) readerships. I have made a couple of minor tweaks  – indicated by [*] – and expanded the concluding couple of paragraphs, because the loyal readers of this blog deserve a bit of added value ;^) ]

I recently attended the Digital Enlightenment Forum 2015 in Kilkenny; not your average tech conference, and not the average discussion topics, either – but topics of continued and growing relevance.

For me, the value was in having a conference that offers the time – even if only for a day – to step back and look at the bigger picture within which all our urgent-seeming daily task-work takes place.

One theme in particular stood out, and it’s also a major strand of the [*]Internet Society’s Trust and Identity work plan over the coming couple of years. Several sessions, including two breakout groups, addressed the theme of digital ethics. The discussion was wide-ranging, sometimes detailed and often abstract, but fascinating and, ultimately, entirely practical. There will be a full paper on the theme in due course, but here’s a hint of some of the ground we covered.

[Warning: may contain philosophy…]

I hope that warning hasn’t immediately discouraged you. “Don’t panic!”, as Douglas Adams would have said. I have found a really simple model for discussing complex topics like this when you have a very diverse group of people round the table; almost all the discussion tends to fall into one of four categories:

  • Philosophy/principles
  • Strategy/society
  • Implementation/practicalities
  • Technology

Once you know that, it’s much easier to avoid getting mired in the intricacies of any one of the four categories, and that keeps the discussion relevant to everyone.

So, philosophy:

Taking our cue from one of the morning’s presentations, we went right back to fundamentals: what have thinkers said about ethics in the pre-digital past? There’s the “continental” philosophical approach of people like Foucault and Barthes, who cast ethics as a series of narratives and structural relationships; then there’s the more “traditional” analytic approach, looking at systems of ethics based on consequences, rules and justice. What they have in common is a recognition that ethics is contextual, and a function of the society in which it evolves.

In our case, we’re in the midst of a post-industrial, technically-oriented society. It’s sometimes hard to imagine that things could be any other way… but what happens if you subtract technology from the ethical equation? You’re left with information (rather than data), decisions, relationships, and semantics. Technology may change a lot of things, but it doesn’t remove those fundamentals, and it doesn’t alter the contextual nature of ethics, so we can be reassured that we have some solid principles to build on.

What’s happening at the social level?

Here, the main point I picked up was about “agency”. In our technologically-oriented society, almost every action we are able to take (our “agency”) is mediated – either through technology, such as computers, phones etc., or through third parties, such as banks, the retail supply chain, telcos, internet service providers, identity providers and so on. Ethically, the fact that what we do is mediated often moves us further from the consequences of our decisions and actions. This can leave us feeling that we’re not really responsible for what might happen. As one participant put it:

“Technically mediated phenomena are outstripping ‘human-centric’ ideas of privacy and ethical outcomes.”

In the context of our discussion at the time, that was a perfectly normal and rational conclusion to draw. When you stop and think about it, it could be quite a scary one, too.

But so what… why should I care?

Practicalities:

Well, we should care because all those third parties through whom we act are making decisions, every day, which directly affect us. Sometimes they do so with our knowledge and consent, but on the Internet, that is far from the norm, as I suspect we all acknowledge. Here are some examples of the kinds of decision which are made on your behalf all the time:

  • “This privacy policy and these cookies are fine for you; there’s no need to ask you explicitly if you’re OK with them.”
  • “We’ll opt you in to our data-sharing policy by default. If you don’t like it, you can always tell us later.”
  • “Your personal data is safe with us, because we anonymise it. You don’t need to worry.”
  • “Collecting this data does compromise your privacy here and now, yes… but we expect there to be a collective benefit in the long run, so that’s OK.”
  • “We’re “personalising” our prices for you, based on the really expensive laptop you’re using. But don’t worry – we have your best interests at heart.”

These are all real, practical consequences of our technically-mediated society, and they affect your privacy every day.
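
The last item on that list, device-based price “personalisation”, needs only a few lines of code. Here is a deliberately crude, entirely hypothetical sketch (the heuristic and the numbers are invented for illustration, not taken from any real retailer), just to show how easily such a decision can be made for you:

```python
def personalised_price(base_price: float, user_agent: str) -> float:
    """Hypothetical price 'personalisation' based on a browser signal."""
    # Invented heuristic: treat an expensive device as a proxy for a less
    # price-sensitive customer, and quietly add 10%.
    premium_devices = ("Macintosh", "iPhone", "iPad")
    if any(token in user_agent for token in premium_devices):
        return round(base_price * 1.10, 2)
    return base_price

print(personalised_price(200.00, "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_10)"))  # 220.0
print(personalised_price(200.00, "Mozilla/5.0 (X11; Linux x86_64)"))                # 200.0
```

No consent is sought and no notice is given; the shopper simply sees a different number, depending on a signal they never knew they were sending.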

Technology:

So what’s the technical dimension? Again, what struck me was “agency”. The number and diversity of ethical agents we encounter is growing fast, and… not all of them are human. A lot of decisions these days are made by algorithms (remember those stock market volatilities caused by too many automated trading systems all reacting to each other?), and any algorithm that makes decisions is not ethically neutral. “Ah,” I hear you say, “but they only do what they’re programmed to do. Someone, somewhere, is responsible… not the algorithm”.

OK – let’s look at that for a moment. First, some algorithms are adaptive; there are already network security products, for instance, that learn, over time, what constitutes “normal” behaviour, and adjust their own behaviour accordingly. Then there’s machine learning in its broader sense. Researchers into artificial intelligence already report that the algorithms they create frequently go on to evolve in unexpected ways, to exceed human capabilities [*] and, crucially, to adopt strategies for learning and decision-making entirely different from those adopted by humans.
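
As a toy illustration of what “adaptive” means here (a sketch of the general idea, not of any real security product’s algorithm), consider a monitor that learns a running baseline of “normal” traffic, flags departures from it, and then folds each observation back into its notion of normal:

```python
import math

class RunningBaseline:
    """Toy adaptive monitor: learns what 'normal' looks like and keeps adjusting."""
    def __init__(self, threshold_sigmas: float = 3.0):
        self.n = 0
        self.mean = 0.0
        self.m2 = 0.0                      # running sum of squared deviations (Welford's method)
        self.threshold = threshold_sigmas

    def observe(self, value: float) -> bool:
        """Return True if the observation looks anomalous, then learn from it."""
        anomalous = False
        if self.n > 1:
            stddev = math.sqrt(self.m2 / (self.n - 1))
            if stddev > 0 and abs(value - self.mean) > self.threshold * stddev:
                anomalous = True
        # Update the baseline regardless: 'normal' drifts as behaviour changes.
        self.n += 1
        delta = value - self.mean
        self.mean += delta / self.n
        self.m2 += delta * (value - self.mean)
        return anomalous

monitor = RunningBaseline()
for bytes_per_minute in [100, 110, 95, 105, 98, 102, 5000]:
    if monitor.observe(bytes_per_minute):
        print(f"Anomaly: {bytes_per_minute} bytes/min")   # fires on the 5000 spike
```

The point is that the definition of “normal”, and therefore the decision about what gets flagged, is no longer written down anywhere by a human; it emerges from the data.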

And last: machines are increasingly capable of autonomy – self-driving cars are a good example. They will react to changing conditions, and deal with circumstances they have never encountered before, without human intervention. The first time a driverless vehicle runs someone over, we’ll see where the ethical buck stops.

Conclusions:

This has been a lightning gallop through several hours of discussion. What did we conclude?

  • First, that modern life raises just as many ethical issues as it ever did.
  • Second, that if we’re not careful, all the ethical calculations get made on our behalf – and not always in our best interest.
  • Third, that if we’re to retain our agency, we need to understand that that’s what we’re trying to do, and why.
  • Fourth, that there are indeed some constants here, despite the pace of change around us. Ethics is a social, contextual thing, and it has to do with meaning, relationships and decisions. Those are very human things.

And last, that we have entered the age where a growing number of ethical agents are non-human, and we have to understand how that affects us and our societies. Is there a fundamental ethical principle based on ‘global’ human values? Might that principle be to do with consciousness, or autonomy, for example? And if so, what’s the ethical status of machines that are increasingly autonomous and might even, at some point, be described as conscious?

[*] Machines are already ethical agents. Our lives are constantly affected by automated decision-making, in increasingly intimate ways. Algorithmic decision-making underpins many things that we take for granted, and modern life would certainly be very different if it were suddenly removed overnight. But how much of this automation is directed towards the protection of privacy? In my experience, not much. I can think of examples where traffic management systems obfuscate part of the number-plate data they collect, but conversely, ANPR systems intentionally do exactly the opposite. I’ve seen a prototype of a CCTV system that obscures individuals’ faces – but again, that’s definitely the exception to the rule. And think how hard we continue to have to fight to have airport body-scanners that display a diagram instead of an actual body image, and that discard the images as soon as they have been evaluated.
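
To show how modest such a privacy-protecting step can be, here is a small, purely illustrative sketch of the sort of number-plate obfuscation I mean (a hypothetical design, not the actual workings of any system I have seen):

```python
import hashlib

def pseudonymise_plate(plate: str, salt: bytes) -> str:
    """Hypothetical sketch of number-plate obfuscation at the point of collection.

    A traffic-flow system that only needs to recognise repeat sightings of the
    same vehicle can store a salted, truncated hash rather than the plate itself.
    Note this is pseudonymisation, not anonymisation: anyone holding the salt
    could brute-force the small space of valid plates.
    """
    digest = hashlib.sha256(salt + plate.upper().encode("utf-8")).hexdigest()
    return digest[:12]

SALT = b"rotate-me-frequently"   # rotating and then discarding the salt limits long-term tracking
print(pseudonymise_plate("AB12 CDE", SALT))   # same plate -> same token while this salt is in use
print(pseudonymise_plate("XY98 ZZZ", SALT))
```

Even that is pseudonymisation rather than anonymisation, but it illustrates that protecting privacy by default is often a design decision measured in a handful of lines of code, not a major engineering burden.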

To my mind, privacy protection is still treated as a bolt-on, after the fact, and only if people complain long and loud. The more we delegate decision-making to machines, the less acceptable that is as a design philosophy. Privacy by design is, I would argue, never more important than when the ethical agent is non-human.

We aren’t necessarily at the point where a machine can care about privacy… but we should certainly keep asking the question.