It’s hard not to heave a heavy sigh as I sit down to draft this post. Here’s a snippet of Twitter text which prompted me to do so: “let’s start thinking about a robust, counterfeit-resistant, privacy-preserving mechanism to prove immunity to [COVID-19]”. It’s from Alex Stamos, former CISO of Facebook and someone who really ought to know better.
Let’s start at the end: if we’re talking about trustworthy assertions, here, there is no technical solution that will prove immunity. This is not new: I used to be guilty of a similar fallacy when I was an enthusiastic advocate of digital signature technology to provide services which we over-optimistically labelled “proof of integrity”, “proof of origin” and even “proof of execution”. Over time, I was gradually and kindly led to understand, by those wiser than I was, the importance of distinguishing between proof and evidence, and I re-calibrated my expectations of the technology (downwards).
The best we can hope for is an assertion that offers some evidence of immunity. It will be up to the relying party to decide how reliable that evidence is, and whether or not it constitutes proof. That decision will be based on a lot more than the technical mechanisms underlying the assertion.
Now let’s deal with the beginning: I am all for designing robust, counterfeit-resistant, privacy-preserving mechanisms for trustworthy attribute assertion. I’ve spent years, shoulder to shoulder with others, shoving that boulder up that hill. What I learned was that the outcomes of such a design are frequently not ones you anticipated, and that they depend on a great many factors that are outside the control of the designer. That alone ought to give us pause for thought, when the use case is as life-threatening as the one we’re considering here.
And so to the squishy bit in the middle… why is it that the outcomes of an apparently straightforward technical solution are so often unexpected and sometimes unwelcome?
A lot of it has to do with perverse incentives, especially those that differ between this case and other, previous applications of similar technical solutions. For instance, forging a driving licence does not, in itself, necessarily put me at risk of losing my life. Forging a COVID-19 immunity certificate so that I can go back to work might do just that. And yet, the economic incentive might make me think it’s worth the risk.
Years ago, I returned to the UK from an area in which diseases like malaria, typhoid and yellow fever were a risk. I used to carry a vaccination certificate held in my passport with a rubber band – except that this time it was missing. I wasn’t allowed to enter the UK until I’d been vaccinated, there and then in the airport, because I’d lost my credential.
Now, supposing a vaccine becomes available for COVID-19, but at first only in limited quantities. Those people who have been vaccinated will, of course, want a credential to assert that fact, and those people who haven’t will have a strong incentive to get hold of one. But, of course, we will have designed a flawless registration and issuing process… so unvaccinated people won’t be able to get a valid credential.
This could create a perverse incentive to get yourself vaccinated, by whatever means. So, unvaccinated people will flout the rules in the hope of getting arrested and pre-emptively vaccinated (after all, the police won’t have capacity to confine hordes of unvaccinated offenders safely, will they?). But presumably we will then want a *second* trustworthy credential which says “yes, this person has been vaccinated, but they still aren’t allowed to work, because they’re being punished for flouting the rules in the first place”. So is this solution about public safety, or prison capacity?
It didn’t take much for a credential-issuing scheme which seemed so benevolent to turn into something far more sinister, and at a pretty weird angle to the original direction of travel.
If the techbros want to play around with trustworthy credential schemes, fine – but for goodness’ sake don’t let them think that what they come up with on their own is likely to be complete, safe or even desirable. The answer might be to get a load of other stakeholders round the table to contribute to the design of the system as a whole… but then again, it could also be that those stakeholders are far better employed working on non-technical counter-measures which might turn out to be more important.