COVID-19 and trustworthy immunity assertions

It’s hard not to heave a heavy sigh as I sit down to draft this post. Here’s a snippet of Twitter text which prompted me to do so: “let’s start thinking about a robust, counterfeit-resistant, privacy-preserving mechanism to prove immunity to [COVID-19]”. It’s from Alex Stamos, former CISO of Facebook and someone who really ought to know better.

Let’s start at the end: if we’re talking about trustworthy assertions, here, there is no technical solution that will prove immunity. This is not new: I used to be guilty of a similar fallacy when I was an enthusiastic advocate of digital signature technology to provide services which we over-optimistically labelled “proof of integrity”, “proof of origin” and even “proof of execution”. Over time, I was gradually and kindly led to understand, by those wiser than I was, the importance of distinguishing between proof and evidence, and I re-calibrated my expectations of the technology (downwards).

The best we can hope for is an assertion that offers some evidence of immunity. It will be up to the relying party to decide how reliable that evidence is, and whether or not it constitutes proof. That decision will be based on a lot more than the technical mechanisms underlying the assertion.
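The distinction can be made concrete with a sketch. Everything here is invented for illustration (a hypothetical issuer, and an HMAC standing in for a real public-key signature scheme): verification answers only one narrow question – did the holder of the issuer’s key sign this exact string? – and everything beyond that is relying-party policy, not cryptography.

```python
import hashlib
import hmac

# Hypothetical issuer key; a real scheme would use public-key signatures,
# not a shared secret -- this only illustrates the shape of the argument.
ISSUER_KEY = b"demo-issuer-key"

def sign_assertion(claim: str) -> str:
    """Issuer attaches a tag to a claim such as 'subject=X;immune=yes'."""
    return hmac.new(ISSUER_KEY, claim.encode(), hashlib.sha256).hexdigest()

def verify_assertion(claim: str, tag: str) -> bool:
    """Verification proves only that the holder of ISSUER_KEY signed this
    exact string -- not that the claim it encodes is true."""
    expected = hmac.new(ISSUER_KEY, claim.encode(), hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, tag)

def relying_party_decision(claim: str, tag: str, trusts_issuer: bool) -> str:
    # The cryptography settles one narrow question; the rest is policy.
    if not verify_assertion(claim, tag):
        return "reject: tampered or forged"
    if not trusts_issuer:
        return "reject: evidence not considered sufficient"
    return "accept: evidence deemed adequate (still not proof)"

claim = "subject=alice;immune=yes"
tag = sign_assertion(claim)
print(relying_party_decision(claim, tag, trusts_issuer=True))
```

Note that the “accept” branch is still a judgement about evidence: nothing in the code can make the underlying claim true.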

Now let’s deal with the beginning: I am all for designing robust, counterfeit-resistant, privacy-preserving mechanisms for trustworthy attribute assertion. I’ve spent years, shoulder to shoulder with others, shoving that boulder up that hill. What I learned was that the outcomes of such a design are frequently not ones you anticipated, and that they depend on a great many factors that are outside the control of the designer. That alone ought to give us pause for thought, when the use case is as life-threatening as the one we’re considering here.

And so to the squishy bit in the middle… why is it that the outcomes of an apparently straightforward technical solution are so often unexpected and sometimes unwelcome?

A lot of it has to do with perverse incentives, especially those that differ between this case and other, previous applications of similar technical solutions. For instance, forging a driving licence does not, in itself, necessarily put me at risk of losing my life. Forging a COVID-19 immunity certificate so that I can go back to work might do just that. And yet, the economic incentive might make me think it’s worth the risk.

Years ago, I returned to the UK from an area in which diseases like malaria, typhoid and yellow fever were a risk. I used to keep a vaccination certificate in my passport, held in place with a rubber band – except that this time it was missing. I wasn’t allowed to enter the UK until I’d been vaccinated, there and then in the airport, because I’d lost my credential.

Now, supposing a vaccine becomes available for COVID-19, but at first only in limited quantities. Those people who have been vaccinated will, of course, want a credential to assert that fact, and those people who haven’t will have a strong incentive to get hold of one. But, of course, we will have designed a flawless registration and issuing process… so unvaccinated people won’t be able to get a valid credential.

This could create a perverse incentive to get yourself vaccinated, by whatever means. So, unvaccinated people will flout the rules in the hope of getting arrested and pre-emptively vaccinated (after all, the police won’t have capacity to confine hordes of unvaccinated offenders safely, will they?). But presumably we will then want a *second* trustworthy credential which says “yes, this person has been vaccinated, but they still aren’t allowed to work, because they’re being punished for flouting the rules in the first place”. So is this solution about public safety, or prison capacity?

It didn’t take much for a credential-issuing scheme which seemed so benevolent to turn into something far more sinister, and at a pretty weird angle to the original direction of travel.

If the techbros want to play around with trustworthy credential schemes, fine – but for goodness’ sake don’t let them think that what they come up with on their own is likely to be complete, safe or even desirable. The answer might be to get a load of other stakeholders round the table to contribute to the design of the system as a whole… but then again, it could also be that those stakeholders are far better employed working on non-technical counter-measures which might turn out to be more important.

 

What did we learn?

Today strikes me as a good day to consider what we can learn from our experiences since the 2016 referendum on UK membership of the EU. Here are some of the dangers we might either continue to live with, or decide to do something about:

  • A referendum treated as a casual question. A referendum can be a dangerous thing. It shouldn’t be promised lightly, or initiated without a great deal of thought – and the role and responsibility of an incumbent government in a referendum campaign needs clarity and safeguards. Above all, the deciding margins and legal status need to be explicit.
  • A Prime Minister who makes policy by stealth, without consultation, consensus or transparency.
  • Power without accountability, in the hands of groups like the ERG, IEA and others.
  • Ministers who use their position to set events in train, and then walk away from the resulting damage.
  • Demagogues who whip up division and resentment on the basis of lies for which they bear no political accountability.
  • An opposition party too willing to let the government’s mistakes unfold, regardless of the consequences, rather than having the courage to challenge them in parliament.
  • Dirty money in the political system.
  • A parliamentary system still based on convention and “gentlemen’s agreements”, but where gentlemanly* behaviour is the exception (*and I mean that in a platonic, gender-neutral sense).
  • Media ranging from lazy and uncritical to xenophobic and mendacious, too happy to profit from a narrative that the UK’s problems are all Someone Else’s Fault, and too willing to let spurious claims go unchallenged.
  • A regulatory system ill-equipped to cope with the large-scale, systematic abuse of social media and personal data, domestically and across borders.

It’s not a full list, by any means, but when I look back over it, one thing stands out: Leave voters don’t feature in it. Those factors are the cause of Leave votes, not the consequence of Leave votes.

Our government has distinguished itself (in the worst possible way) by perpetuating a toxic polarisation of public opinion, and by failing to take a single step to reconcile or unite the public behind a workable vision. It has also dishonestly ridden the myth that the EU is the root of all the UK’s ills. If there is a way forward from where we are today, it must start there; it must move us towards an honest examination of the causes of the UK’s social, economic and political woes, and it must lead to policies that reflect a national consensus.

I make no assumptions about where, on the political spectrum, that vision would come from, because this is not a matter of party politics – it’s a matter of leadership, integrity and open, honest, accountable public debate.

We have never had such a clear lesson in the factors that prevent such a debate from taking place. They are right in front of us, staring us in the face. They have been flushed out into the open for all to see, and what they tell us is that most of the current dysfunction we see is not the fault of voters – whether Leavers or Remainers. But if we fail to act on that lesson, we will have no-one to blame but ourselves.

The big lie of the Brexit Bill

Theresa May has, belatedly, agreed that Parliament will have an opportunity to vote to accept the eventual Brexit deal that David Davis manages to strike with the Commission (assuming he does indeed manage to strike a deal, which is still entirely uncertain).

However, as you may have inferred from my careful phrasing, the proposal appears to impose a strict limit on Parliament’s options. May’s position appears to be that, if Parliament votes to reject that deal, the government will make no attempt to negotiate an alternative, but will simply allow the Article 50 countdown to expire, ejecting the UK from the EU by default. We should object to this approach on several grounds:

  1. In simply declining even to seek an alternative deal, the government is abdicating its responsibility to protect the electorate’s interests to the best of its ability;
  2. In threatening not to seek an alternative, the government is choosing to present parliament with an arbitrarily constrained choice which amounts to blackmail;
  3. In leaving open the possibility that the vote might not even be held until it is too late, practically, to negotiate an alternative, the government is showing the same dilatory mismanagement that has characterised its time-wasting approach since the referendum. That is no sane basis on which to leave the EU.

There is also a democratic deficit, here, in offering Parliament the choice between “bad deal” and “no deal”, when those are not the only two viable or rational options. Clearly, the choice should include “no Brexit”… so why isn’t this being offered?

Part of the issue goes back to the referendum itself, which, as a House of Commons research report amply explained, was not binding on the government. Here’s part of the text (my emphasis).

“This Bill requires a referendum to be held on the question of the UK’s continued membership of the European Union (EU) before the end of 2017. It does not contain any requirement for the UK Government to implement the results of the referendum, nor set a time limit by which a vote to leave the EU should be implemented. Instead, this is a type of referendum known as pre-legislative or consultative, which enables the electorate to voice an opinion which then influences the Government in its policy decisions. The referendums held in Scotland, Wales and Northern Ireland in 1997 and 1998 are examples of this type, where opinion was tested before legislation was introduced. The UK does not have constitutional provisions which would require the results of a referendum to be implemented, unlike, for example, the Republic of Ireland, where the circumstances in which a binding referendum should be held are set out in its constitution.”

“Ah”, say the pro-Leave lobby, “but David Cameron said he would implement the result of the referendum, so we must leave”. Oh, well, if a politician said something in the run-up to a vote, that’s a cast-iron constitutional guarantee, isn’t it?

No. To give David Cameron’s promise equal weight to the constitutional force of the referendum is a gross misinterpretation. It fails to distinguish between a political commitment and a legal or constitutional one. Cameron simply had no authority to behave as though the referendum result was something it was not – particularly in view of the fact that he bailed out rather than take responsibility for his actions.

So, legally and constitutionally, the government was perfectly entitled to regard the referendum result as a piece of input which could influence, but need not over-ride, what it thought it should do in the interests of the country, given all the policy options at its disposal (of which, of course, burning bridges with the EU was only one).

For the government to treat the referendum as its sole and binding option was misguided. To deny Parliament a full and genuine choice between “bad deal” and “no Brexit” is not just misguided, it is undemocratic. “Leave” campaigners made a lot of noise about parliamentary sovereignty – let’s see them put their democratic money where their demagogue mouths are.

The big lie of immigration control.

Against her secretive inclination, details of Theresa May’s immigration policy proposals leaked into the public domain today via, among others, the Guardian newspaper. Attempting to justify her intention to end EU citizens’ freedom of movement immediately upon Brexit, she said the move was needed in order to ease the strain such migration could place on the delivery of public services.

I took very careful note of the word “could”, there. She conspicuously did not say ‘the strain such migration does place on the delivery of public services’. Based on some correspondence I had with my MP recently, I have a pretty good idea why. I wrote to my MP, who also happens to be a doctor, to ask if he had any figures for the cost to the NHS of treating non-entitled patients in his constituency. After all, as a doctor and former PPS to a Health Secretary, I imagined he’d want to have a grasp of that kind of factor in the formation of health policy.

He didn’t have the figures, but forwarded my letter to the current health secretary, Jeremy Hunt, whose office replied that the Department of Health didn’t have such data either. They suggested that I ask the NHS Trust that covers my MP’s constituency. Obviously, given his initial response, he hasn’t asked them, and equally obviously, neither has the health secretary or his department.

In other words, the simple answer to the question “how does the government know the extent to which EU migration places a strain on the delivery of public health services?” is: they don’t. They don’t have the figures, and haven’t asked the bodies that do. Their policy clearly isn’t based on evidence, and everything I’ve seen suggests it’s based on nothing more than instinctive xenophobia and the worst kind of demagoguery. I’m ashamed to share their nationality.

 

“… to ensure this can never happen again”

After the latest preventable disaster, there is a grim inevitability in the countdown to the point where we hear a government statement like this: “We must take whatever steps are needed to ensure that this can never happen again.” And yet it, or something like it, frequently does happen again.

I suggest that part of the reason is the convention, among politicians, that disastrous events must not be “politicised”. I can understand a couple of the usual reasons for that approach. First, for the victims of any given mishap, their relations, and the survivors, the sight of politicians blaming and finger-pointing is neither edifying nor helpful. It is of no practical use in addressing the aftermath. Second, partisan parliamentary bickering is a form of escalation, and it increases the pressure on the government to respond by legislating in a hurry, and that is usually a bad thing. To abuse the cliché: “enact in haste, repent at leisure”.

These are two potential benefits of the conventional approach. But I think it’s also possible that this approach has harmful consequences which can outweigh the potential benefits, and that those consequences include having the same, or similar disasters recur.

“De-politicising” a disaster tends to rule out examination of political (or policy) factors as a possible cause. In the absence of that examination, what does the eye fall on? I have been thinking of it in terms of a set of concentric “circles of blame”. In the case of the Grenfell Tower fire, the innermost circle seems to contain the manufacturers, vendors and builders who, respectively, made, sold and used flammable cladding which made the buildings more dangerous through increased risk of fire.

The next concentric circle encompasses the building code enforcement bodies, fire safety inspectors, and building management services responsible for checking that the appropriate safety measures were in place and were viable.

In the next circle we find those responsible specifically for this tower as a social housing asset. Here, in this case, we’re talking about the local authority’s housing, maintenance and (should the worst happen) emergency response functions.

Then we come to the first circle that consists not of product or subject-matter experts, but of more generalist policy-makers whose role is to make the work of the inner rings possible, effective, and safe. This circle is where the practicalities of building construction and maintenance have to rub up against the policy on social housing, and where spending on social housing has to compete with spending on social services, schools, road maintenance, street cleaning, rubbish collection and so forth.

In all of these rings, money is a factor.

It’s the outermost ring, though, which has the greatest effect on all the others, because this is the policy ring. It’s where legislation, regulation and budget are all, ultimately, decided. This outermost ring influences the others precisely because they all form part of a system. Actions in the outermost ring affect stakeholders in all the others. And yet, it’s off limits, because discussing the disaster in the context of this ring would mean “politicising” it.

“This must never happen again” therefore ends up meaning “we must rule out the causes, but only in the inner circles”. So, hypothetically, we might end up with a regulatory change that banishes combustible cladding from buildings across the country, but doesn’t apply similar fire prevention measures to, say, insulation material, or electrical wiring. We might get legislation that places new duties on management companies of social housing, or on the inspection teams of local authorities, but makes the assumption that those entities will re-allocate existing funds to meet their new obligations. A series of “point” solutions, targeted at the inner rings of our concentric model, but failing to consider whether anything in the outer rings must also change.

In fact, over recent years we have seen local authorities placed under greater and greater statutory obligations, and simultaneously starved of the funding to put them into practice. But, just suppose the gap between local authorities’ funding and the cost of their statutory obligations happened to be a contributing factor in the Grenfell Tower fire. If the outer ring of our concentric model cannot be considered as a possible source of contributing factors, we cannot end up with a systemic solution. If we insist on treating disasters like Grenfell Tower as point problems, we will be doomed to a series of point solutions, and a series of further, preventable disasters. But heaven forbid we should “politicise” the issue.

What use is a mutable blockchain?

Over on the Consult Hyperion blog, Dave Birch has written a characteristically lucid and engaging piece about hyperbole around the idea of the mutable blockchain:

http://www.chyp.com/mutable-and-immutable-blockchains/

One of the use cases Dave cites (not his, I hasten to add) is the use of mutable blockchains to implement the so-called “right to be forgotten” (RTBF) – or “droit à l’oubli”, as I should perhaps call it while I am still allowed to. That prompted two thoughts which I felt deserved a blog post.

First, a quick swipe at RTBF, a label which has caused more trouble than it deserves, given the merits of the underlying principle. The Google v Spain ruling interpreted RTBF as a requirement for search engines to “de-list” search results that linked Mr Costeja González, by name, to data about one aspect of his past. The ruling also does not affect search results outside the EU.

That’s a very qualified constraint on people’s ability to find out about what happened. If you search for “Spanish guy bankrupt Google”, you should get the details faster than you can say Streisand Effect. So, as a “right to be forgotten”, this seems somewhat flimsy. And yet, it is the basis of a robust legal judgment – so what did the judges and lawmakers really intend?

One thing the Google v Spain ruling definitely doesn’t try to do is stamp out all the original instances of the data in question: one of the characteristics of the Internet is the ease and speed with which new copies of data can be published and disseminated globally. In that sense, the Internet has made such publication and dissemination almost entirely frictionless. However, readers still need to get to the information in order to read it – and, of course, it follows from the above that there is an ever-increasing mass of information out there to search through.

Seen from that perspective, the Spanish court’s qualified constraints on access to data are best explained as a re-introduction of just some of the friction which the Internet as a whole, and search engines in particular, have removed. RTBF is really “the right to have some information made slightly more inconvenient to retrieve”. Which is so catchy, I can’t really understand why “the right to be forgotten” ever caught on in the first place.

All that said, what I think this shows is that the technical “fix” (redacting the results of some online searches) is a rather clumsy and only partially effective way to achieve the desired social result, which is that the individual’s reputation should not be inappropriately sullied by inaccurate or irrelevant data which happens to be easy to retrieve.

Clumsy or not, I can’t see any sensible way of applying blockchain technology to this problem that makes it any better. In fact, the idea that your Internet search results are based on a cumulatively-signed consensus among, say, the major search engines and the libel courts is mind-boggling, to put it mildly.

Now, on to my second thought.

When I’ve talked about identity and privacy over the past decade or so, I have noted that they are a function of social interaction. Almost exactly three years ago, Vint Cerf observed that he thought privacy was probably an anomaly. I disagreed, and set out some of the reasons why in a blog post which, I think, remains relevant. I don’t think an expectation of privacy is an anomaly, because I don’t think social interaction is an anomaly.

However, to recap briefly from that post: social interaction has some characteristics which it is proving hard to replicate in our technically-mediated online lives. If you live and work in a small village, you might have less expectation of privacy, but since people have to get along with each other in the long term, past indiscretions might be forgiven and forgotten, especially if the individual concerned demonstrates remorse and better behaviour.

Over time, in other words, people develop a reputation, based on our past experience of them, the narratives constructed by others, information in the public domain, and so on. And this, I think, is where we come to the point of intersection with the example that Dave Birch cited (and rightly dismissed), about using a mutable blockchain to implement the “right to be forgotten”.

First, I absolutely agree with Dave’s argument that, in the ledger use-case, the way to deal with an incorrect ledger entry is to leave it exactly as it is, and append a corresponding correcting entry when the error is discovered. That way, you balance the books.
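That append-only discipline can be sketched in a few lines (the entries and amounts here are invented for illustration): an error is never edited out; a later reversing entry brings the balance right while the full history, mistake included, survives.

```python
# A minimal append-only ledger: errors are never edited in place;
# a later correcting entry reverses them, so the books balance while
# the history stays intact. Entries and amounts are illustrative only.
ledger = []

def post(description: str, amount: int) -> None:
    ledger.append({"description": description, "amount": amount})

post("invoice 42", 100)
post("invoice 43", 250)                   # oops: should have been 25
post("correction of invoice 43", -225)    # reversing entry; original untouched

balance = sum(e["amount"] for e in ledger)
print(balance)      # 125 -- as if invoice 43 had been 25 all along
print(len(ledger))  # 3 -- the mistake remains on the record, corrected
```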

But what does “balancing the books” mean, if the blockchain is being used, not for a ledger of accounts, but to record information that contributes (positively or negatively) to an individual’s reputation? What is the right way to correct an entry that is recognised as being wrong? Let’s make it a bit less abstract.

Suppose that the blockchain in question is a record of someone’s ratings as a Seller on an auction site. Most of them are 100% positive, but then there’s one which is dreadful:

“Terrible service; goods arrived late, I was wrongly charged, and the product fell apart. I will never buy from this seller again, and neither should you. 0/5”

Then it turns out that this review was actually meant for another seller.

What’s the right way to make a correction? Is it to go back and delete the entry, or to leave it in place but ensure that it can only be viewed in conjunction with a full retraction and an explanation that it was a review of someone else?

Either way, what do you do about the Seller’s cumulative reputation score? In the ledger example, a correcting entry balances the books – but in this case, a simple correcting entry of 5/5 can’t restore the Seller’s perfect record of 100% satisfaction scores, and 10/5 isn’t a realistic option.
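The arithmetic makes the asymmetry plain. Assuming (my assumption, though it’s a common convention) that the site displays the mean of all ratings out of 5:

```python
# Why a 'balancing' entry doesn't work for reputation scores:
# ratings are capped at 5, so no permitted entry can offset the 0.
ratings = [5, 5, 5, 5]                # a perfect record...
print(sum(ratings) / len(ratings))    # 5.0

ratings.append(0)                     # ...until a misdirected 0/5 lands
print(sum(ratings) / len(ratings))    # 4.0

ratings.append(5)                     # the best permitted 'correcting entry'
print(sum(ratings) / len(ratings))    # ~4.17 -- the perfect score is gone
# Truly balancing the books would need a 10/5, which the scale forbids.
```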

So, the accounting ledger isn’t a useful design template in this case. We’re not looking for a technical solution that balances the books, we’re trying to manage the effect on someone’s reputation of the data that is recorded about them.

Like trust, reputation is something which it’s hard to accrue and easy to forfeit. There’s an asymmetry there, which explains why the “balancing” entry to a reputation-damaging assertion cannot simply be a statement of the opposite.

Is the answer, then, to delete the original entry? Well, that might work in the hypothetical I’ve constructed (where the original entry was simply mistaken); but suppose the original entry was true, and the seller not only rectified the error, but did it so graciously that the customer was delighted. Deleting the truthful original entry, in that case, seems wrong – but neither do we want to leave the possibility that it might be seen and taken as definitive. Is the correct action to ensure that the original review can only be viewed in tandem with updates that explain the subsequent outcome? Here, a “balancing” entry might be part of the answer, but doesn’t seem to be enough on its own.
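One way to sketch that “viewed only in tandem” idea – purely illustrative; the store, names and API here are my own invention, not anyone’s real design – is to make annotation append-only and give the store a single read path that always returns the original entry together with everything appended since:

```python
from dataclasses import dataclass, field

@dataclass
class Entry:
    text: str
    annotations: list = field(default_factory=list)  # append-only updates

# Hypothetical in-memory store, keyed by entry id.
store: dict[int, Entry] = {}

def publish(entry_id: int, text: str) -> None:
    store[entry_id] = Entry(text)

def annotate(entry_id: int, note: str) -> None:
    store[entry_id].annotations.append(note)

def view(entry_id: int) -> str:
    """The only read path: the original is never seen in isolation."""
    e = store[entry_id]
    return "\n".join([e.text] + [f"[update] {n}" for n in e.annotations])

publish(1, "Terrible service. 0/5")
annotate(1, "Seller replaced the goods promptly; revised rating 5/5.")
print(view(1))
```

The design choice is that truth-preservation lives in the data model (nothing is deleted) while fairness lives in the access path (nothing is read without its context).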

In other words, just as in the RTBF case, we are trying to replicate several nuanced features of social interaction (reputation, forgiveness, restitution…) using clumsy technical tools which simply don’t fit.

Blockchain might be the best possible technology for implementing crypto-currencies, but be a lousy way to try and build a reputation management system. Blockchain may be a perfectly good hammer, but I wish its fanatics would stop trying to re-cast every online trust problem as a nail.

 

 

The Leave vote: doomed to division

Two weeks on from the referendum on EU membership, consultancy CEB scanned 25,000 sources worldwide to collect data about job vacancies on offer in the UK. They concluded that compared with pre-referendum figures, the number of jobs on offer had declined by 47%, from almost 1.5 million to around 820,000. They described this as “far outside the normal fluctuations seen by the company, which tend to be between 5 and 10 per cent”.

Of course, it’s still too early to say whether this is simply a short-term “blip” or whether there will be a significant long-term reduction in jobs on offer. One thing is fairly certain: this is not the zero-sum game that the Leave campaign would have people believe. Removing the 2 million or so EU citizens currently employed in the UK will not simply shift 2 million unemployed natives into work. Taking 2 million employees out of the system is far more likely to reduce economic activity over all, slowing growth and making it harder for companies to invest in increased capacity.

Leavers hope that Britain will become a newly vibrant, dynamic economy – perhaps based on expectations of new companies springing up in the absence of all that EU red tape. But those companies will need a market, and the UK’s departure from the EU will make market access more difficult and more expensive.

In the longer term, barring EU citizens from coming to work in the UK would aggravate Britain’s demographic problems: an ageing population without sufficient young, tax-paying people in the workforce to sustain the pension system and welfare state.

And there we have it. The Conservative Brexit strategy, if put into practice, is one which would lead, predictably, to bad consequences for anyone stuck in Britain without the means to leave, or to pay for their own healthcare and retirement. That’s a demographic from which the leading Leave campaigners are conspicuously absent. They blithely encouraged others to vote Leave, knowing that they themselves would not be caught in the resulting trap.

From across the Channel, the French publication Libération can see that the Brexit movement tries to combine two incompatible groups: one it describes as “driven by a narrow, xenophobic nationalism; the other more liberal or libertarian than nationalist, and in no way hostile to immigration, with Daniel Hannan MEP as its principal protagonist”*.

The problem for those now faced with the task of delivering on the Leave campaign’s promises is this: there is simply no way to reconcile the wishes of those two groups, either economically or politically.

Economically, as Daniel Hannan was obliged to concede after the referendum, the UK cannot enjoy continued access to the single market if it insists on barring EU workers.

Politically, as Matthew Parris has so eloquently put it, “anti-immigrant feeling won it for Leave, and they know it. They used it, rode it and are complicit in it.” Hannan’s “informed, liberal, immigration-friendly” perspective will do nothing for that constituency but enrage it.

The Leave proponents (those who haven’t simply shrugged and walked away, that is) now find themselves between the proverbial rock and hard place. They have made promises they can’t keep, to constituencies they can’t simultaneously satisfy. If that’s what they count as a victory, I’d hate to think what they would consider a defeat.

*”L’un est animé par un nationalisme borné et xénophobe. L’autre, dont Daniel Hannan est le principal représentant, est informé, plus libéral ou libertarien que nationaliste, et nullement hostile à l’immigration.” — Liberation, 5/7/2016

The EU Referendum and Article 50

The “Leave” vote in the UK’s recent referendum on membership of the European Union has, in itself, plunged the UK into political, economic and social turmoil – even though the referendum was purely advisory (not legally binding on the government), and even though no political decision on a British exit has been taken, much less an exit effected in practice.

A lot has been written in analysis of the constitutional problems raised by the vote and its political fall-out. The clearest example to date is probably the piece by David Pannick QC in the Times: http://www.thetimes.co.uk/article/why-giving-notice-of-withdrawal-from-the-eu-requires-act-of-parliament-dz7s85dmw

Even that piece, though, is quite detailed and involved. I had to read it several times before I could state it simply enough to believe that I had understood its logic myself – so I thought the resulting summary might be helpful to others. Here it is:

1. Invoking Art.50 commits the UK, irreversibly, to a process which requires the amendment of existing UK statutes.

2. Existing UK statutes can only be amended by Act of Parliament*.

3. Therefore, Art.50 cannot lawfully be invoked without an Act of Parliament (and consequently a parliamentary vote).

Conclusion: it would be unlawful for the Prime Minister to invoke Art.50 without putting that decision to the vote in parliament.

 

* And crucially, the PM may not revoke (“frustrate”) a prior parliamentary decision by invoking the “royal prerogative”.

The UK and EU research


This week I am in Brussels to review a project part-funded by the EU’s Horizon 2020 programme. In line with Horizon 2020’s criteria, this project is large-scale (it will specify infrastructure for academic and research bodies across the continent) and pan-national (it’s about integrating institutions’ login systems with their counterparts across the EU and beyond).

The UK’s academic login federation is part of the project – as you would expect. UK students need federated access to other institutions’ digital resources, as foreign students need access to ours. Large scale scientific research, too, depends on the ability for resources to be securely shared regardless of their geographic location.

In future, as a non-EU country, the UK risks finding itself more or less excluded from projects like this one. At best, it will be allowed to participate (like Switzerland or the USA) but at a higher cost: UK participation will be fully at UK expense, with no EU grant funding. At worst, the inclusion of UK members in consortium proposals will simply make those proposals less likely to survive what is already a tough evaluation process (I know – I’ve been an evaluator and a rapporteur in that process). 

But the damage doesn’t stop there. 

“Fine,” you might say, “if we can’t join EU-funded projects, we’ll just do without”. But think, for instance, of the academic login project I’m looking at this week. Even if the UK is not part of that consortium, it still needs to federate with non-UK institutions. It will have to achieve interoperability and compliance anyway, but it will have to do so without having influenced the architecture, and without the EU grant funding from which others will benefit. That sounds to me like a lose-lose, and a recipe for lost competitiveness.

But that’s not all.

Look at it from the perspective of a company which wants to stay at the forefront of large-scale research. If you’re based in Britain, you exclude yourself from subsidised participation in EU collaborative research, and thereby from the whole community of potential partners who, themselves, are benefiting from EU grant funding.

If you have the option, you’d do better to shift your operations to an EU member state, and reap the benefits of a collaborative framework, 27 countries’ worth of capable partners, and grant funding into the bargain. The UK, as a base for large-scale collaborative research, will go into a vicious cycle of reduced funding and increased cost.

The project of this week’s review, large-scale though it is, is just one microscopic part of the picture. I have had similar involvement with EU-level projects dealing with e-identity, emergency response in the financial services sector, biometric authentication at borders, privacy and personal data management, ‘big data’ and financial services, cross-border sharing of research infrastructure, and privacy of medical research data. And of course, those projects too are a tiny fraction of the scope of EU research funding as a whole.

Nor is this just a matter of academic research: exactly the same logic applies, for instance, to compatibility with EU data protection and privacy laws. We cannot survive by ignoring them, and we have just thrown away the option of influencing the development of laws with which we will, in any case, need to comply.  

In that context, for politicians to claim that the UK can benefit by “taking back control” is delusion on a colossal scale. The UK insulates itself from its European counterparts to its own detriment; it can only thrive by collaborating and interoperating with them. The future we face now is of having to interoperate with systems and laws we didn’t help draft, on a scale we can’t afford alone, and of paying full price for the privilege.

Bravo, Eurosceptics. Your bumper-sticker politics represents a giant step towards a second-rate future. This is now the bumper sticker for our country:

“If you thought education was expensive, just wait until you’ve tried ignorance.”

Is the IP Bill holed below the water line?

The big privacy and policy story of the day in the UK is the publication of the Intelligence and Security Committee’s report on the draft Investigatory Powers Bill – which is currently being pushed through an abbreviated parliamentary process.

The Bill’s authors get a rough ride from the committee. This is from the ISC Chairman’s covering press release:

“Taken as a whole, the draft Bill fails to deliver the clarity that is so badly needed in this area. The issues under consideration are undoubtedly complex, however it has been evident that even those working on the legislation have not always been clear as to what the provisions are intended to achieve. The draft Bill appears to have suffered from a lack of sufficient time and preparation.”

Here’s the ISC’s site with a link to the report itself:

http://isc.independent.gov.uk/news-archive/9february2016

And here are a couple of good, short pieces of analysis from reliable tech/policy commenters:

Ian Dunt (politics.co.uk)

http://www.politics.co.uk/blogs/2016/02/09/bad-day-for-britain-s-spies-as-intelligence-watchdog-grows-t

Glyn Moody (arstechnica.co.uk)

http://arstechnica.co.uk/tech-policy/2016/02/previously-tame-uk-parliament-watchdog-rips-into-new-snoopers-charter/

The Bill is criticised in almost every respect:

  • it doesn’t achieve its stated goal of bringing all the interception powers into a single statutory instrument;
  • it fails to bring clarity to the purpose and goals of the policy it embodies;
  • it does not include operational justifications, without which parliament cannot decide on its adequacy – and will not include them until after it is expected to be passed into law;
  • its provisions for privacy protection are piecemeal and unclear;
  • the safeguards applied to the use of communications data are “inconsistent and largely incomprehensible”.

In other words, it has been drafted in haste, by people some of whom don’t know – or can’t articulate – what it is supposed to do. As a result, it is confusing and grants over-broad powers with insufficient safeguards.

If the Bill were to be passed as is, the ISC’s report would offer a ready supply of ammunition to anyone seeking to challenge it on grounds of necessity, proportionality and legal certainty.

For the ISC’s report to be so frankly critical is somewhat unexpected. Under its previous chairman, the committee said little, and what little it did say consisted of bland reassurances that the security and intelligence services were doing a fine job. (See “pelted with marshmallows”, from just over two years ago…).

This Bill has been rushed through an abbreviated consultation period: the Home Secretary used the November Paris attacks to justify shortening the normal parliamentary process. The Bill’s consultation committee was given about three weeks of parliamentary time to conduct its expert witness hearings and consider any written evidence submitted, either side of the Christmas/New Year parliamentary recess. It is due to publish its own report on Thursday.

This puts the consultation committee in an interesting position. If its report is less critical than that of the Intelligence and Security Committee (which is, after all, the specialist in this area), its credibility will be called into question. If its report is equally critical, the Bill itself will be even more deeply discredited.