Kantara P3WG and Levels of Assurance

As you may know, I’ve recently set up the Privacy and Public Policy Work Group (P3WG) for the Kantara Initiative, and as we start mapping out the areas in which the Group wants to exercise an influence, one topic has generated more discussion than anything else on the mailing list. It goes by the rather uninformative name of “LOA”, or Level of Assurance. Even if you’ve never heard of LOAs, they have played a major part in your life online and off.

I’ve blogged before about what I call the “Chain of Trust” – namely, the sequence of events all of which need to be working if a credential is to work properly when you present it. In other words, for instance, if you apply for a passport in the name of Michael Mouse and the passport office doesn’t bother to check whether there’s any evidence that that is your name, the resulting passport won’t be that reliable as an indicator of your identity (even though people may assume that it is). Similarly, driving licences would not be much use as an indicator of which vehicles you’re entitled to drive, if it was possible for you to alter what the licence says… and if you tell someone the PIN of your ATM card, it is no longer effective as a way to ensure that only you can take money out of your account (in fact, the bank is likely to take it as de facto evidence that you must have been responsible for the transaction, even if it wasn’t you who actually used the card and PIN…).

These are just three examples of the many ways in which the Chain of Trust can fail, at the Registration/Verification phase, over the life of the credential, and at the authentication step, respectively. There are many other points at which the Chain can be compromised and the reliability of the credential (or the assertions made using it) undermined.

LOA is about protecting the first of these – the point at which someone decides whether or not to issue a credential which represents you in some way. In other words, if you can present a relying party with not just a credential, but a ‘score’ which indicates how reliably that credential was issued to you, they can judge whether it’s more likely that you are actually Michael Mouse, or that whoever gave you a passport saying so was not doing their job very well.

That, in turn, will give them useful information about what decisions to make next, particularly if they decide that the answer to your authentication question is “yes”.

The UK and US governments both have relatively simple 4-level LOA models (though, inconveniently, one runs from 0-3 and the other from 1-4…). Omitting the ‘index value’ for a moment, the four levels look remarkably similar. In fact, if I adopt a slightly different scale, just to paper over that difference, we might get something like this:


Rare

UK: no authentication of identity

US: little or no confidence in the asserted identity

Medium rare

UK: basic authentication

US: some confidence in the asserted identity

Medium

UK: greater level of assurance (e.g. credentials based on proof of identity to a third party)

US: high confidence in the asserted identity

Well done

UK: identification beyond reasonable doubt

US: very high confidence in the asserted identity
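To make the index-value mismatch concrete, here’s a toy sketch (the function names and the “doneness” labels are mine, purely illustrative) of mapping the UK’s 0–3 and the US’s 1–4 indices onto a single four-point scale:

```python
# Illustrative only: neither government publishes labels like these.
LEVELS = ["Rare", "Medium rare", "Medium", "Well done"]

def from_uk(level: int) -> str:
    """The UK scheme indexes its four levels 0-3."""
    return LEVELS[level]

def from_us(level: int) -> str:
    """The US scheme indexes the same four levels 1-4."""
    return LEVELS[level - 1]

# UK level 0 and US level 1 describe the same degree of assurance:
assert from_uk(0) == from_us(1)
assert from_uk(3) == from_us(4) == "Well done"
```

The substance of the levels lines up; only the index values differ, which is exactly why a neutral common scale is handy.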

So far so good. However, when it comes to putting this simple model into practice, and because we’re talking about assurance here (and therefore judgement), a couple of different approaches emerge.

One is to give a technical specification of the kinds of authentication technology which should or must correspond to an implementation claiming to be at a given LOA level.

Another is to relate the LOA levels to levels of risk, and allow the implementer to work out how they think that risk is best mitigated.

You might think that a third, better solution would be to combine the two… define organisational risks in a way which allows them to be assessed against the four-level model, and then have a technical specification list which says: “if you face this level of risk and you want this level of assurance, you need technology such as ‘x’, implemented with the following governance measures.”
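That combined approach amounts to a lookup table from assessed risk to required controls. A minimal sketch, assuming a 1–4 risk scale aligned with the four LOA levels (the technology and governance entries are invented for illustration, not drawn from any published specification):

```python
# Hypothetical (risk level -> required technology and governance) table.
REQUIREMENTS = {
    1: ("self-asserted identifier", "no registration checks required"),
    2: ("password over an encrypted channel", "identity evidence recorded at registration"),
    3: ("one-time-password token", "identity proofed by a third party"),
    4: ("hardware cryptographic token", "in-person proofing with audited issuance"),
}

def required_controls(risk_level: int) -> tuple[str, str]:
    """Given a risk assessment on the 1-4 scale, return (technology, governance)."""
    return REQUIREMENTS[risk_level]

technology, governance = required_controls(3)
```

The point of the sketch is that the implementer no longer chooses the technology directly; they assess the risk, and the table does the rest.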

Actually, I have a better idea… if you have opinions on this question (better still, if you have a good answer), come and sign up to the Kantara P3WG and join the discussion. We’d love to hear from you.

Proving that ID Cards can’t be cracked

Thanks to @cheshire_puss for the pointer to this ZDNet article about Home Office plans to “engage with the industry to show that we have a ‘gold standard’ card which cannot be changed, modified or cloned”.

On one level, I’m delighted to have an opportunity, at last, to use the word “epistemological” in a blog post (who wouldn’t be…?). Because, on the face of it, the Home Office plans look like a doomed attempt at that epistemological impossibility, the proof of a negative proposition. Industry experts could help the Home Office show an ID card being cracked, could show that it’s possible but difficult, or could show a card successfully resisting a finite number of attempts to crack it… but they can’t demonstrate that the card cannot (ever) be changed, modified or cloned.

On another level, I’m puzzled as to what’s in it for a couple of the stakeholders, should these experiments go ahead. It seems to me that the industry experts are being invited to endorse the security of something which they will then neither implement nor rely on. In other words, the success or failure of the ID Cards they have certified as “gold standard” will depend on factors entirely outside their control.

If they are to bear no liability for this (and let’s face it, why should they?), then what is gained by having them ‘initial’ the tests? If they are to be expected to bear some liability for the eventual outcomes of ID Card issue and use, I look forward to seeing what kind of industry experts step forward. Brave fellows, all.

And what’s in it for the citizen-stakeholder? Assuming that the tests fail to prove the negative proposition, will citizens trust the technology more, or will they simply question whatever liability model the cards are rolled out under?

Lastly, I’m also bemused by the Home Office’s reported explanation of why it doesn’t want to see whether or not Adam Laurie’s claimed attack is genuine: they do not wish to be “overwhelmed by individuals wishing to demonstrate ID card cracks.” Do they think the cards are so insecure that every Trent, Bob and Alice is queuing up to have a go? Or that there are enough nutters out there to mount some kind of Denial of Service attack with a series of trivial attempts? (“Hullo children – and today on Blue Peter, we’ll be showing you how to make your own Home Office ID Card reader, using just this egg carton, some sticky-backed plastic and a roll of tinfoil”).

Seriously, though – why do the Home Office say they are looking for a suitable way to engage with industry to demonstrate that ID cards are secure? I thought CESG had a whole programme to do just that, and that the “E” in CLEF stood for “Evaluation”…

But perhaps I’m very old-fashioned.

"… down to the ball game"

A belated word of thanks to Rich Goodwin of PinnaclePoint technologies, who had the brilliant idea of rounding off Burton Catalyst ’09 with an outing to the Friday night baseball game between the San Diego Padres and the Milwaukee Brewers. And a fabulous evening it was, too. The home crowd were happy, because their team recovered from a dismal few innings to take the match 11-7, and I was happy because it’s the first live baseball game I’ve been to, and I had Rich on hand to explain the finer points.

About halfway through the game, a foul ball went into the front row of the stands directly between us and third base, and into the left hand of a spectator… who was on his cellphone at the time. Cool as a cucumber. However, I think even he would have to acknowledge the superior coolness of an Arizona Diamondbacks fan who, the previous evening, managed this one… In the left hand, the baseball; in the right hand, the offspring and the giant soda. Nice.

Home Office riposte on ID Card hack

Those of you with any interest in cricket will know that today is the first day of the 4th Test Match between Australia and England for the Ashes. With the series standing at 1-0 to England (2 matches having ended in a draw), the 4th Test (out of 5) could be the clincher. Not that I’m a cricket buff in any way – but it’s a good excuse to get a couple of those bewildering sports analogies into the blog post. (See bottom of post for approximate baseball translations…)

The Home Office appeared to have been bowled a bit of a googly [1] yesterday, when it was reported that Adam Laurie had not only hacked the access controls on an ID Card chip, but had successfully copied the data onto another chip, modified an existing field and added new data in another. However, this piece on the Kable site reports that the Home Office played a straight bat [2], denying outright that there was any evidence of a successful or viable attack.

According to the spokesperson:

“This story is rubbish. We are satisfied the personal data on the chip cannot be changed or modified and there is no evidence this has happened.”

“The identity card includes a number of design and security features that are extremely difficult to replicate. Furthermore, the card readers we will deploy will undertake chip authentication checks that the card produced will not pass. We remain confident that the identity card is one of the most secure of its kind, fully meeting rigorous international standards.”

What’s not quite clear is whether the phrase “personal data on the chip” has again been carefully chosen to allow for the possibility that personal data, once off the chip, could be modified successfully.

As for the comments about authentication checks between the card, the chip and the reader: I remember studying a similar design exercise when I was working with the IBM 4753 device family in the early ’90s. The 4753 was a smart card reader with an encrypting PIN pad; it included the option to connect to a 4755 cryptographic adapter (PC card), and also to have a biometric pen attached to it to produce a ‘digitised signature’. The pen incorporated three sensors (one for pressure, and one each for the two dimensions of movement across the page), which it used to generate a digital ‘map’ of your signature and thence a cryptographic hash of the resulting data. The ratio of false accepts/rejects to correct accepts/rejects was pretty impressive, and seemed consistent whether you ‘enrolled’ with your signature or with some other pass-phrase. Unfortunately it was all a bit pricey.

The other feature of the system was that each of the devices in a setup (the card reader, the crypto adapter and the smart card) was able to establish a pairwise, DES-encrypted session with each of the others.

This meant that the session keys had to form part of a standard DES key hierarchy (session/data keys, key-exchange keys, and master keys). The role of the master key in this hierarchy is to encrypt/decrypt the key-exchange keys. Good practice says that your master key should be unique to each hardware device, and should never leave a protective hardware key-storage module, or KSM. (Bear with me… this is going somewhere relevant…)
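The shape of that hierarchy is easier to see in code. Here is a toy sketch of the three tiers – master key wraps key-exchange keys, which wrap session keys. Real DES key-wrapping is replaced by a keystream XOR derived with HMAC-SHA256, purely to show the structure; this is emphatically not a real key-management scheme:

```python
# Toy key hierarchy: master key -> key-exchange key (KEK) -> session key.
# "wrap" here is a stand-in for DES encryption of one key under another.
import hashlib
import hmac
import os

def wrap(wrapping_key: bytes, key: bytes) -> bytes:
    """XOR the key with a keystream derived from the wrapping key (toy only)."""
    stream = hmac.new(wrapping_key, b"wrap", hashlib.sha256).digest()
    return bytes(a ^ b for a, b in zip(key, stream))

unwrap = wrap  # XOR with the same keystream inverts the operation

master_key = os.urandom(8)    # in the real design, never leaves the KSM
kek = os.urandom(8)           # key-exchange key
session_key = os.urandom(8)   # per-session data key

stored_kek = wrap(master_key, kek)         # KEK at rest, protected by the master key
exported_session = wrap(kek, session_key)  # session key in transit, protected by the KEK

assert unwrap(master_key, stored_kek) == kek
assert unwrap(kek, exported_session) == session_key
```

The design consequence is the one described above: everything below the master key can safely exist in wrapped form outside the hardware, but the master key itself must never leave the KSM.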

In the PC adapter and the card reader, that KSM was about the size of a pack of cards, had a long-life battery back-up and several hardware protective mechanisms to prevent physical attempts to extract the keys. My favourite was the low-temperature sensor; it had been observed that, if you cool a memory chip sufficiently and then slice away at it with a microtome (thing used for preparing stuff you want to put under an electron microscope… makes very thin slices…), you could reveal the physical record of ones and zeroes and, in principle, recover the keys (a bit like reading the pattern of pits on the surface of a CD through a microscope). The low temperature sensor was there so that, if the KSM thought someone might be trying this, it would wipe the keys from memory.
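The tamper-response logic itself is simple; it’s the sensors and the always-on battery that are expensive. A sketch of the idea (the threshold and class are mine, not the 4753’s actual design):

```python
# Toy tamper-response: if a protective sensor trips, zeroise the stored keys.
class KeyStorageModule:
    LOW_TEMP_THRESHOLD_C = -20.0  # illustrative threshold, not the real spec

    def __init__(self, keys: dict):
        self.keys = dict(keys)

    def on_sensor_reading(self, temperature_c: float) -> None:
        """Wipe the keys if the temperature suggests a freezing attack."""
        if temperature_c < self.LOW_TEMP_THRESHOLD_C:
            self.zeroise()

    def zeroise(self) -> None:
        self.keys.clear()

ksm = KeyStorageModule({"master": b"\x00" * 8})
ksm.on_sensor_reading(-40.0)  # chilling the chip for microtome slicing...
assert ksm.keys == {}         # ...leaves nothing to recover
```

Which is precisely why a smart card, with no battery to keep such a watchdog alive, has to fall back on persistent EEPROM storage and far weaker physical defences.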

The point is that in the corresponding smart card format, the size constraints meant that it was impractical to apply several of these physical security measures – such as the temperature sensors or the battery backup. Lack of the latter meant that instead of being stored in volatile RAM, the smart card keys were written to EEPROM so that they could persist in the card.

The adapter/reader KSMs also had a Faraday shield to prevent attempts to ‘eavesdrop’ on the module while it was at work. Obviously, that’s not very practical in the smart card implementation, though, if you want to use contactless communication between the card and a reader.

The bottom line is that, at least back then, the security of the key-store smart card depended to a great extent on the fact that it was very small, and was physically sandwiched between other parts of the chip. It was still more vulnerable to physical attack than its larger siblings, and such attacks were demonstrated by Ross Anderson and his students at the Cambridge University Computer Laboratory. (Incidentally, these physical attacks – and much more – are described in Prof Anderson’s 600-page book on Security Engineering, freely available online here, which is a belter of a read if you’re at all interested in this sort of thing).

The point is that whatever authentication protocols the smart card and reader undertake, the security of that communication is very likely to depend, ultimately, on the physical security of the smart card – and that imposes design constraints which can be extremely hard to overcome, especially if you want a card which is affordable at population scales of deployment.

Adam Laurie’s current attack may or may not be fatal in principle, and may or may not be viable in practice. It’s impossible to tell, from the level of information in the public domain – but by the same token, it is also impossible to conclude, from that information, whether or not these ID card chips genuinely increase the security and integrity of the bearer’s data.

All in all, a very sticky wicket [3].

[1] googly : a ball which appears to be heading in one direction, but instead breaks the other way. Rough translation – a pitch which starts out looking like a Sinker, but turns into a Cutter (remember that in cricket the ball can hit the ground before reaching the batsman… which gives an opportunity for an abrupt change of direction).

[2] play a straight bat : to maintain a resolute defence, often by playing a ‘blocking shot’ – though offensive strokes can also be played with a straight bat. ‘Keeping a straight bat’ is a general principle which relates to the wisdom of keeping your bat well aligned with the (vertical) stumps it is used to defend. No direct equivalent in baseball, because in cricket the batsman has the option of hitting the ball and not running… but technically, the closest equivalent might be a bunt.

[3] sticky wicket : an unpredictable or difficult playing surface – hence, unpredictable or difficult circumstances. Again, no direct equivalent, because it refers to the area the ball bounces off before reaching the batsman.

PS – at the time of writing, England are all out for a paltry 102 runs, while Australia have scored 79 for the loss of just one wicket. Not looking good for England.

The relentless march of progress

  • March 2006 – UK introduces RFID-enabled, ICAO-compliant ‘e-passports’;
  • March 2007 – Adam Laurie demonstrates ability to unlock e-passport chip data for ‘read’ access;
  • August 2008 – Jeroen van Beek demonstrates ability to clone e-passport chip and implant bogus images;
  • August 2009 – Same techniques applied to clone UK ID card and modify its data.

Technological progress being what it is, we can already see – over the 3 years since their introduction – the erosion of some of the security features of the RFID implementation: for instance, in response to the August 2008 attack, the Home Office responded that

it had yet to see evidence of someone being able to manipulate data in an e-passport. A spokesman said: “No one has yet been able to demonstrate that they are able to modify, change or alter data within the chip. If any data were to be changed, modified or altered it would be immediately obvious to the electronic reader.”

Note the careful phrasing there: “data in an e-passport”. What the attacks have demonstrated is that you can read the information off a chip, write it to another chip, and modify that version in such a way that it fools the standard UN/ICAO “Golden Reader” software. These two pages give more details and are a useful counter-balance to the “e-passports cracked, nation doomed” headlines:

  • Q&A about Jeroen van Beek’s hack, from 2008;
  • Register article on “how to clone an e-passport”, from Aug 4th 2006 (yes, 3 years ago last Tuesday!)

So, should we be surprised at this sequence of hacks? In one sense, no: essentially, all it illustrates is one of a set of basic principles about credentials. The diagram below shows how these attacks fit into that set of principles: in this instance, the ‘weak link’ comes when an authenticating party relies exclusively on the RFID chip to establish the connection between the credential and the person presenting it.

This diagram is just the latest embodiment of something I’ve been using since about 2005 to illustrate what I call the “chain of trust”. That is: the purpose of a credential is to provide some level of proof that the person presenting it now ‘is identical with’ the person to whom it was issued. This is a narrow but very useful definition of the term ‘identity’. What level of proof the credential can provide depends on the strength of several factors over the lifetime of the credential (and, indeed, its bearer).

In the current sequence of hacks, what is being tested is the integrity of the credential as a whole (can bogus data be successfully encapsulated in a credential which appears genuine?), and the robustness of the authentication step (does it rely solely on the credential, or does it also involve comparison with an ‘authoritative’ repository?).

The Home Office, IPS and ICAO have all pointed out that the attacks fail to overcome some of the safeguards built into the system as a whole. For instance, ICAO note that the passport hack would be revealed by a check against their PKD database; the UK authorities point out that a cloned ID card with the user’s details modified will fail a check against the National Identity Register (assuming that that repository still contains the details of the user to whom the card was originally issued). Those defences are all true – but they do not prove that the implementation of these RFID chips is secure as a whole. They show that it is secure in certain use cases – for instance, when the card is not used as a stand-alone authentication mechanism, but is used in conjunction with online access to other components of the system (such as the PKD or the National Identity Register) – provided that checks against those components are, in turn, secure. They also show that in some entirely realistic use-cases – for instance, where an online check against the NIR or deployment of full-function card readers would be prohibitively expensive – the level of proof the credentials can deliver is substantially reduced.
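The gap between those two use-cases can be sketched in a few lines. This is my own framing, not the IPS design: a standalone check trusts the chip alone, while an online check also compares the chip against the issuing register.

```python
# Hypothetical register and chip contents, for illustration only.
national_register = {"card-001": {"name": "A. N. Other", "dob": "1970-01-01"}}

def verify_standalone(chip: dict) -> bool:
    """Relies solely on the credential: a well-formed clone passes."""
    return chip.get("signature_valid", False)

def verify_online(card_id: str, chip: dict) -> bool:
    """Also compares the chip's data against the authoritative repository."""
    record = national_register.get(card_id)
    return verify_standalone(chip) and record == {k: chip[k] for k in ("name", "dob")}

cloned_chip = {"name": "M. Mouse", "dob": "1970-01-01", "signature_valid": True}

assert verify_standalone(cloned_chip) is True            # the clone passes offline...
assert verify_online("card-001", cloned_chip) is False   # ...but fails the register check
```

The safeguard lives entirely in `verify_online` – so wherever cost or connectivity forces a fallback to the standalone path, the safeguard simply isn’t there.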

Again, the answer to the question ‘should this surprise us?’ is probably ‘no’. On the other hand, let’s not forget that successive proponents of the ID card scheme have given a hostage to fortune in the form of the phrase “the gold standard of identity”. Some of them have even referred to commercial organisations “queueing up to rely on it as proof of identity”. It is one thing to proclaim this as a political aspiration; it is, as the hacks have demonstrated since the chips’ introduction, quite another to translate that into a comprehensive implementation which delivers the same ‘gold standard’ to all relying parties.

Can the National Identity Scheme be operated safely?

Several people I’ve spoken to recently have remarked that real-time social media like Twitter seem to reduce the frequency with which they blog… and I suspect it’s the same for me. It’s partly because Twitter soaks up time, and partly because it also soaks up some of those spur-of-the-moment ideas and comments which otherwise might have developed into fully-fledged postings. However, looked at the right way, I guess that might also signal a flight to quality rather than quantity of blog posts. Here’s hoping…

But I digress – or whatever a digression is called when it comes at the beginning, rather than part way through.

I’ve just got back from last week’s Burton Catalyst conference in San Diego – an excellent event, by the way, and congratulations to the Burton Group analysts who did such a good job of adding value, both through their own subject-matter expertise and by making introductions and connections so constructively between attendees. Over lunch, I got into a discussion with one of the analysts about the UK National Identity Scheme (NIS), whether or not it was a good idea, and whether or not there are reliable grounds for opposing it. As ever, discussing UK policy while abroad gave a great opportunity to look at it from a different perspective.

The view he expressed was, essentially, that there isn’t a good reason to oppose ID Cards on the basis of their use for e-government service delivery – the benefit of reliable authentication for joined-up government is worth having; however, there’s a risk involved if you suspect that the government lacks the competence to run such a scheme securely, and that risk might outweigh the potential benefit.

There were two other points which we noted and then moved on:

  • first, that there are those who feel the National Identity Scheme is currently unaffordable;
  • second, that cancelling the ‘small, visible, individual plastic card’ component of the system does nothing to mitigate the risk of operating the large, invisible, mass-scale repositories’ component of the system.

So, what of the question of competence? Well, the picture revealed by ComputerWeekly‘s FoI requests is not entirely reassuring. They list a number of breaches involving inappropriate insider access to records in the CIS (Customer Information System) database, one of the three major repositories in the Scheme. On the one hand, some breaches are indeed being discovered and those responsible are being disciplined (including dismissal). A DWP spokesman is quoted as saying that “the small number of incidents shows that the CIS security system is working”.

On the other hand, the article questions whether all breaches are actually being noticed (and/or reported), and suggests that many were only discovered after sample checks, rather than through alerts being triggered.

There’s also the issue of how many people have, or will have, access to the data held in the NIS. Currently it stands at about 200,000 civil servants, across 480 local government bodies and a number of central government departments. That figure will increase as data-sharing between the CIS and other departments such as the DVLA (Driver and Vehicle Licensing Agency) is put in place. Interestingly, a case study on the DWP’s own website gives this description of the DVLA’s ‘purpose of use’ for access to the CIS:

“to confirm receipt of higher rate mobility component of Disability Living Allowance for entitlement to exemption of vehicle licensing duty”

That’s really quite specific. Indeed, it might lead one to wonder whether that purpose makes it proportionate to expose the CIS’ 92,000,000 records to the DVLA user population. It’s not easy to find out the size of that population, but according to the DVLA’s annual report for 2007-2008 there were about 6,500 people on their payroll (this does not necessarily include those employed as part of ‘contracted-out services’, a separate item in the accounts).

The stated purpose also makes it legitimate to wonder what safeguards are in place to ensure that the data are not accessed for other purposes. The DVLA itself does not have an especially happy history where data sharing is concerned. After it reported £6.3m of income from selling motorists’ information to third parties, the government drafted new rules on acceptable use and sharing.

Returning, then, to the question of competence to run the National Identity Scheme securely: the DWP says it’s doing a good job of keeping the CIS secure, despite a small number of identified insider breaches; but the CIS is only one of three major repositories in the Scheme, each owned by a different department. All three of them need protecting if the whole is to be meaningfully secure. Then there’s the issue of securing access by ‘user’ departments such as the DVLA: the difficulty of doing that grows with each department added, and the growth is almost certainly faster than linear – the number of inter-departmental relationships to secure rises roughly with the square of the number of participants.
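A back-of-envelope way to see that faster-than-linear growth: if every pair of participating organisations represents a data-sharing relationship which must be secured, the count of such pairs grows quadratically.

```python
def pairwise_links(n_departments: int) -> int:
    """Number of distinct pairs among n participants: n * (n - 1) / 2."""
    return n_departments * (n_departments - 1) // 2

# Three repositories plus one user department: 6 relationships to secure.
assert pairwise_links(4) == 6

# Scale to anything like the 480 local government bodies mentioned above
# and the count runs into the tens of thousands.
assert pairwise_links(480) == 114_960
```

Each new participant doesn’t add one relationship; it adds one per existing participant – which is why “just one more department” is never as cheap as it sounds.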