Can a machine care about privacy?

[I originally wrote this post for the Internet Society blog, here. However, I thought it worth re-publishing here, since the two blogs probably have different (if overlapping) readerships. I have made a couple of minor tweaks – indicated by [*] – and expanded the concluding couple of paragraphs, because the loyal readers of this blog deserve a bit of added value ;^) ]

I recently attended the Digital Enlightenment Forum 2015 in Kilkenny; not your average tech conference, and not the average discussion topics, either – but topics of continued and growing relevance.

For me, the value was in having a conference that offers the time – even if only for a day – to step back and look at the bigger picture within which all our urgent-seeming daily task-work takes place.

One theme in particular stood out, and it’s also a major strand of the [*]Internet Society’s Trust and Identity work plan over the coming couple of years. Several sessions, including two breakout groups, addressed the theme of digital ethics. The discussion was wide-ranging, sometimes detailed and often abstract, but fascinating and – ultimately – entirely practical. There will be a full paper on the theme in due course, but here’s a hint of some of the ground we covered.

[Warning: may contain philosophy…]

I hope that warning hasn’t immediately discouraged you. “Don’t panic!”, as Douglas Adams would have said. I have found a really simple model for discussing complex topics like this when you have a very diverse group of people round the table; almost all the discussion tends to fall into one of four categories:

  • Philosophy/principles
  • Strategy/society
  • Implementation/practicalities
  • Technology

Once you know that, it’s much easier to avoid getting mired in the intricacies of any one of the four categories, and that keeps the discussion relevant to everyone.

So, philosophy:

Taking our cue from one of the morning’s presentations, we went right back to fundamentals: what have thinkers said about ethics in the pre-digital past? There’s the “continental” philosophical approach of people like Foucault and Barthes, who cast ethics as a series of narratives and structural relationships; then there’s the more “traditional” analytic approach, looking at systems of ethics based on consequences, rules and justice. What they have in common is a recognition that ethics is contextual, and a function of the society in which it evolves.

In our case, we’re in the midst of a post-industrial, technically-oriented society. It’s sometimes hard to imagine that things could be any other way… but what happens if you subtract technology from the ethical equation? You’re left with information (rather than data), decisions, relationships, and semantics. Technology may change a lot of things, but it doesn’t remove those fundamentals, and it doesn’t alter the contextual nature of ethics, so we can be reassured that we have some solid principles to build on.

What’s happening at the social level?

Here, the main point I picked up was about “agency”. In our technologically-oriented society, almost every action we are able to take (our “agency”) is mediated – either through technology, such as computers, phones, etc., or through third parties, such as banks, the retail supply chain, telcos, internet service providers, identity providers and so on. Ethically, the fact that what we do is mediated often moves us further from the consequences of our decisions and actions. This can leave us feeling that we’re not really responsible for what might happen. As one participant put it:

“Technically mediated phenomena are outstripping ‘human-centric’ ideas of privacy and ethical outcomes.”

In the context of our discussion at the time, that was a perfectly normal and rational conclusion to draw. When you stop and think about it, it could be quite a scary one, too.

But so what… why should I care?

Practicalities:

Well, we should care because all those third parties through whom we act are making decisions, every day, which directly affect us. Sometimes they do so with our knowledge and consent, but on the Internet, that is far from the norm, as I suspect we all acknowledge. Here are some examples of the kinds of decision which are made on your behalf all the time:

  • “This privacy policy and these cookies are fine for you; there’s no need to ask you explicitly if you’re OK with them.”
  • “We’ll opt you in to our data-sharing policy by default. If you don’t like it, you can always tell us later.”
  • “Your personal data is safe with us, because we anonymise it. You don’t need to worry.”
  • “Collecting this data does compromise your privacy here and now, yes… but we expect there to be a collective benefit in the long run, so that’s OK.”
  • “We’re ‘personalising’ our prices for you, based on the really expensive laptop you’re using. But don’t worry – we have your best interests at heart.”

These are all real, practical consequences of our technically-mediated society, and they affect your privacy every day.
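
To make that concrete: the second decision above can amount to nothing more than a default value buried in someone else’s code. Here is a deliberately simplified, entirely hypothetical sketch (the service, names and fields are all invented for illustration):

```python
# Hypothetical sketch only: a default value chosen by a developer *is* an
# ethical decision made on the user's behalf. No real service is shown here.

from dataclasses import dataclass


@dataclass
class AccountPreferences:
    # This single default is the "we'll opt you in" policy decision:
    # the user shares data with partners unless they later object.
    share_data_with_partners: bool = True
    personalised_ads: bool = True


def create_account(email: str) -> AccountPreferences:
    """Create an account using whatever defaults the developer chose."""
    prefs = AccountPreferences()  # the user is never asked at this point
    print(f"Account for {email}: sharing={prefs.share_data_with_partners}")
    return prefs


if __name__ == "__main__":
    create_account("alice@example.com")
```

Nobody “decided” anything at sign-up time; the decision was made once, silently, in the default.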

Technology:

So what’s the technical dimension? Again, what struck me was “agency”. The number and diversity of ethical agents we encounter is growing fast, and… not all of them are human. A lot of decisions these days are made by algorithms (remember the stock market volatility caused by too many automated trading systems all reacting to each other?), and any algorithm that makes decisions is not ethically neutral. “Ah,” I hear you say, “but they only do what they’re programmed to do. Someone, somewhere is responsible… not the algorithm”.

OK – let’s look at that for a moment. First, some algorithms are adaptive; there are already network security products, for instance, that learn, over time, what constitutes “normal” behaviour, and adjust their own behaviour accordingly. Then there’s machine learning in its broader sense. Researchers into artificial intelligence already report that the algorithms they create frequently go on to evolve in unexpected ways, to exceed human capabilities [*] and, crucially, to adopt strategies for learning and decision-making that are entirely different from those adopted by humans.
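
If that sounds abstract, here is a minimal, hypothetical sketch of the kind of adaptive behaviour I mean – a monitor that works out for itself what “normal” traffic looks like and adjusts its own alerting threshold as it goes. No real security product works exactly like this; it only illustrates an algorithm whose behaviour changes without anyone reprogramming it:

```python
# Minimal illustration of an adaptive algorithm (not any real product).
import statistics


class AdaptiveTrafficMonitor:
    def __init__(self, sensitivity: float = 3.0):
        self.history = []          # observations the monitor has accepted as "normal"
        self.sensitivity = sensitivity  # how many std-devs away counts as "abnormal"

    def observe(self, requests_per_minute: float) -> bool:
        """Record an observation and report whether it looks anomalous."""
        is_anomaly = False
        if len(self.history) >= 10:
            mean = statistics.fmean(self.history)
            stdev = statistics.pstdev(self.history) or 1.0
            is_anomaly = abs(requests_per_minute - mean) > self.sensitivity * stdev
        if not is_anomaly:
            # "Normal" is quietly redefined by every observation the monitor accepts.
            self.history.append(requests_per_minute)
        return is_anomaly


if __name__ == "__main__":
    monitor = AdaptiveTrafficMonitor()
    for rate in [100, 105, 98, 110, 102, 99, 104, 101, 97, 103, 500]:
        print(rate, "anomalous" if monitor.observe(rate) else "normal")
```

The point is not the statistics; it’s that what the system treats as acceptable tomorrow depends on what it happened to see today, and no human signed that decision off.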

And last: machines are increasingly capable of autonomy – self-driving cars are a good example. They will react to changing conditions, and deal with circumstances they have never encountered before, without human intervention. The first time a driverless vehicle runs someone over, we’ll see where the ethical buck stops.

Conclusions:

This has been a lightning gallop through several hours of discussion. What did we conclude?

  • First, that modern life raises just as many ethical issues as it ever did.
  • Second, that if we’re not careful, all the ethical calculations get made on our behalf – and not always in our best interest.
  • Third, that if we’re to retain our agency, we need to understand that that’s what we’re trying to do, and why.
  • Fourth, that there are indeed some constants here, despite the pace of change around us. Ethics is a social, contextual thing, and it has to do with meaning, relationships and decisions. Those are very human things.

And last, that we have entered the age where a growing number of ethical agents are non-human, and we have to understand how that affects us and our societies. Is there a fundamental ethical principle based on ‘global’ human values? Might that principle be to do with consciousness, or autonomy, for example? And if so, what’s the ethical status of machines that are increasingly autonomous and might even, at some point, be described as conscious?

[*] Machines are already ethical agents. Our lives are constantly affected by automated decision-making, in increasingly intimate ways. Algorithmic decision-making underpins many things that we take for granted, and modern life would certainly be very different if it were suddenly removed overnight. But how much of this automation is directed towards the protection of privacy? In my experience, not much. I can think of examples where traffic management systems obfuscate part of the number-plate data they collect; conversely, ANPR systems intentionally do exactly the opposite. I’ve seen a prototype of a CCTV system that obscures individuals’ faces – but again, that’s definitely the exception to the rule. And think how hard we continue to have to fight for airport body-scanners that display a diagram instead of an actual body image, and that discard the images as soon as they have been evaluated.
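
For illustration only, here is roughly what that kind of “privacy by design” step might look like in code – obfuscating part of a plate before it is ever stored. The plate format, masking rule and function names are all invented, and a deterministic hash like this is really pseudonymisation rather than anonymisation, but it makes the design point: the protection happens inside the automated decision, not as an afterthought.

```python
# Hypothetical sketch of obfuscating number-plate data before storage.
# Note: a deterministic hash is pseudonymisation, not true anonymisation.
import hashlib


def obfuscate_plate(plate: str, keep_chars: int = 3) -> str:
    """Keep a short prefix for traffic analysis; replace the rest with a hash."""
    prefix = plate[:keep_chars]
    digest = hashlib.sha256(plate.encode()).hexdigest()[:8]
    return f"{prefix}****{digest}"


def record_sighting(plate: str, location: str) -> dict:
    # Only the obfuscated form is ever written out; the raw plate is discarded.
    return {"plate": obfuscate_plate(plate), "location": location}


if __name__ == "__main__":
    print(record_sighting("AB12 XYZ", "Junction 4"))
```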

To my mind, privacy protection is still treated as a bolt-on, after the fact, and only if people complain long and loud. The more we delegate decision-making to machines, the less acceptable that is as a design philosophy. Privacy by design is, I would argue, never more important than when the ethical agent is non-human.

We aren’t necessarily at the point where a machine can care about privacy… but we should certainly keep asking the question.

 
