IGF2012 session: Governing Identity on the Internet

I got the chance to take part today in a workshop session at the Internet Governance Forum in Baku, and as, for once, I had made some written notes, I thought I’d get a little more mileage out of them by posting a summary here… I hope this is useful. Comments welcome, as ever.

A 1, 2, 3 of digital identity

Having listened to the very diverse views and interpretations of identity here at the IGF this week, my worry is that we’re talking about governing something that we haven’t clearly defined. So here’s a perspective on digital identity, under three headings:

  1. One evolutionary sequence: how did we get here?
  2. Two models of what digital identity is…
  3. Three issues

1. Evolution

In the 80s, your ‘identity’ meant either your passport, or – if you were one of the few who used a computer – your account on a mainframe or (higher education) server. Siloed and incomprehensible to other systems or organisations.

In the early 2000s, it started to make sense to talk about your ‘network identity’: the collection of things that a panoptical third party could know about you by looking at all the places where information about you was stored online (IDs, accounts, user profiles, etc.).

By the middle of that decade, federated identity was a reality, at least among large enterprises. A non-siloed digital credential that could be used to identify you to an organisation that had not issued it to you.

The current goal could be described as “Internet-scale” federation: a framework which can cater for many kinds of credential, understandable by many organisations, in different sectors, for different purposes, with different models for trust and liability. This is the aim of programmes like the US National Strategy for Trusted Identities in Cyberspace and a similar initiative in the UK, for example.

In short: the goal is a digital ‘identity’ as multi-faceted and versatile as our real-life, individual identity as a person. That’s a long way from where we were 30 years ago – and we’re by no means there yet.

2. Two models

So let me describe two ways of looking at digital identity. I’ll describe the first one and then contrast its characteristics with the second. The first I’ll call the Classic model. It is based on:

– Single authoritative source

– Credential

– Authentication

– Binary (Y or N)

– Level of assurance and a chain of trust, both of which can be formalised into procedures and assigned liability models (retroactive).

The second is what I’ll call the Emerging model. It looks like this:

– Multiple, low-assurance sources

– Attributes

– Authorisation

– Contextual and adaptive

– A web of trust, notions of mutable reputation, and quantifiable mainly in terms of risk management (predictive).
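To make the contrast concrete, here is a minimal sketch of the two decision styles. Everything in it is invented for illustration (the issuer name, the weights, the risk formula); no real identity framework works exactly like this:

```python
# Classic model: one authoritative issuer, one credential, a binary answer.
TRUSTED_ISSUER = "gov.example"  # hypothetical single authoritative source

def authenticate(credential: dict) -> bool:
    """Yes/no: was this credential issued by the trusted source, and not revoked?"""
    return (credential.get("issuer") == TRUSTED_ISSUER
            and not credential.get("revoked", False))

# Emerging model: many low-assurance attribute sources, combined into a
# contextual risk estimate that drives an authorisation decision.
def authorise(attributes: list, context: dict, risk_budget: float = 1.0) -> bool:
    """Aggregate attribute assertions, weighted by each source's reputation."""
    assurance = sum(a["claim_weight"] * a["source_reputation"] for a in attributes)
    risk = context.get("base_risk", 0.5) / max(assurance, 0.01)
    return risk <= risk_budget

# Classic: the answer is binary, regardless of context.
print(authenticate({"issuer": "gov.example"}))  # True

# Emerging: the same attributes may pass in a low-risk context...
attrs = [{"claim_weight": 0.6, "source_reputation": 0.8},
         {"claim_weight": 0.5, "source_reputation": 0.4}]
print(authorise(attrs, {"base_risk": 0.3}))   # True
# ...and fail in a higher-risk one, with no change to the attributes themselves.
print(authorise(attrs, {"base_risk": 2.0}))   # False
```

The point of the sketch is the shape of the decision, not the arithmetic: the Classic path consults one authority and returns yes or no; the Emerging path weighs several weak signals against the context of the request.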

3. Three issues

So, what issues does that present us with?

The Classic model is fundamentally retrospective. It’s the historical way of thinking about identity: it establishes an identity relationship between what’s happening now and a trusted event in the past, and liability is, basically, the arrangement for what you do after something has gone wrong.

As a result, one problem is that it copes badly with cases where an identity was issued for one purpose and is later used for other purposes – but you can’t stop that from happening.

The Emerging model is future-facing. It is much more dynamic, and it is also completely compatible with anonymous authorisation. But it alters our conception of identity and trust, and relies on immature disciplines such as reputation management and contextual authorisation.

It is a model whose working parts are almost entirely hidden from the end user, whereas the Classic model at least (usually) requires the user’s involvement at the point of authentication. The Emerging model poses real questions of user control and consent.

And lastly, there’s a catch. This isn’t an either/or decision. We need both the classic and the emerging models – because neither, on its own, can get your digital identity close to being a reflection of your personal identity.


3 thoughts on “IGF2012 session: Governing Identity on the Internet”

  1. Doc Searls says:

    The reason “your digital identity” is not “close to being a reflection of your personal identity” is that you are a “user” on the Web and not a sovereign and independent human being.

    The reason you are a user and not a human being on the Web is that in 1995 we settled on a model called “client-server” in which every server carried responsibility for authentication and pretty much everything else. You, as an individual, were just a user. It is not a coincidence that only two industries call individual human beings “users.” The other is drugs.

    Nothing substantive has yet been built toward independence for individuals on the client side. We remain dependent variables rather than independent ones — a situation that has not changed in the seventeen years since. Client-server has become calf-cow, where users are the calves and sites are the cows. (More here: http://hvrd.me/yliVSX)

    Both the classic and the emergent models you describe rely on cows. Neither allows the user to perform as an independent individual. Neither attempts to fix the problem of identity from the individual’s side.

    Truly fixing identity is un-done work. Some companies and development efforts listed in the ProjectVRM wiki (http://bit.ly/KNZE40) are working on it. Every six months it also comes up at Internet Identity Workshops (http://www.internetidentityworkshop.com/). But it’s a hard problem, akin to solving personal transportation with better railroads.

    What we need online are the digital equivalents of cars and bicycles: personal transportation. Remember the “information superhighway” — this communications path on which you would “drive”? The idea was that each browser was a personal vehicle on which we “surfed” from place to place. Think of the literal meanings of drive, browse and surf. They are what independent human beings do. When all we do is “use,” we are dependent. Simple as that.

    This is why the browser morphed from a car or a surfboard into a shopping cart that gets re-skinned with every commercial site it “uses.” At each site the user is known in ways exclusive to the site, over which the individual has little control, except to opt out of the site and its systems. Add Twitter or Facebook login to the mix, and you just have more, and bigger, cows involved.

    The burden of subordination to each of us is hundreds of different login/password combinations and acceptance of one-sided “agreements” offered by each site or service we use, on a take-it-or-leave-it basis. The “agreements” are ones we never read because they are written by and for lawyers, and are built to offload as much risk and liability as possible to users, along with minimized control over the user’s “experience.”

    So there is much more to fix here than identity alone. But identity is the oldest challenge, and perhaps still the largest one.

  2. Thanks Robin for this summary of your comments at the workshop. Your conceptualization of two models is indeed very helpful for thinking about identity from technological and governance perspectives. However, I do want to raise one point. You say the Emerging model is completely compatible with anonymous authorisation. True, but with an important qualifier. A deanonymizing party can be implemented in the Emerging model to support anonymous authorization and accountability, and can, under certain conditions, link an action to the identity behind it (Birrell and Schneider 2012). It is the governance regime, and the policies it sets, which determines the conditions under which the Emerging model allows for anonymity, pseudonymity, etc. This is what makes the governance discussion around identity so critical.

  3. Fred Fisher says:

    I agree with Doc that the Client/Server model has created a bias toward definition 1 of identity, but we do not have to give up strong assurances and user control to have distributed identity:

    In the system we have built, each individual is represented by an online entity (Agent) that only he or she can access directly. This Agent can create secure and highly anonymous connections with other Agents. I can then use authorities (client/server types) to validate claims I make to another Agent without providing any additional information. For example, my lawyer can validate that I am over 18 to another Agent. As long as that Agent is willing to trust my lawyer, then any future communications from my Agent can be assumed to be coming from someone over 18. The Bar association may validate my lawyer. Yet another Agent may provide assurances that I maintain good security practices and only I have access to my Agent. In this manner a Web of Provenance can be assembled that provides stronger assurances than the client/server model. There is a more complete description here:

    This technology is real and will soon be in production. We are looking for quality engagement and would welcome your involvement and input.

Comments are closed.