Emily Taylor
Good afternoon, everybody. Welcome to this Chatham House webinar, The Power of the State Over Citizen Data Post-Pandemic. My name is Emily Taylor and I’m your Chair today. Just to let you know, this webinar will be held on the record and is being recorded. And also, just a little bit of housekeeping before we start, please submit your questions throughout the event using the Q&A function, which you can see on the bottom right of your controls on the Zoom interface. We look forward to dealing with a lot of questions from the audience and hope that you will actively participate in this discussion today.
So times of crisis are usually a bad time to warn about the downsides of the response, and this has been very much a hallmark of the pandemic as well. People naturally want to help, do their bit and pull together, and the ‘Cassandra’ voice, warning that all this data collection will come back to haunt us, is not usually welcome at the time. And yet, historically, if we look back, human rights defenders have been right all along. Think back to 9/11, whose grim 20th anniversary we marked just last weekend: the over-correction by the national security community set in train ‘collect it all’ practices that 12 years later would be exposed by Edward Snowden and which, in turn, have set the stage for ‘encrypt everything’ responses and complex privacy laws. In the background, of course, throughout this era, has been the inexorable growth in operational power and influence, both politically and technically, of a handful of large platforms.
Today, to lead our discussions, we have a really stellar panel of experts, who are going to give us their perspectives on trends in tracking, monitoring, digital surveillance. Both, from the perspective of what this means for the state, the power of the state, and also, those tech platforms, and mechanisms for accountability.
So, just before I introduce our panellists today, I would like to take a moment to pay respect to John Ruggie, who died a couple of days ago. His work on the UN Guiding Principles on Business and Human Rights, and his accompanying book, Just Business, was enormously influential and provided very practical, constructive pathways for co-operation between business and civil society, who are often pitted against each other in these debates. But let’s move to introduce our speakers today.
We will start with Leonardo Cervera-Navas, who is the Director of the Office of the European Data Protection Supervisor, the EDPS, which is the data protection authority of the European Union, and he has worked there since 2010, before being appointed as Director in 2018. Leonardo completed his law degree and Masters in EU Law in Spain and joined the European Commission in 1999, and throughout that time he has been working in the data protection field in the EU institutions. In his role with the EDPS, he is responsible for advising on data protection law and policy, and is in charge of co-ordinating and implementing the strategies and policies of the institution.
Next we will hear from Gus Hosein, who is Executive Director at Privacy International. Gus has worked at the intersection of technology and human rights for more than 25 years. He has co-authored a book on identity systems and policy, and he has advised UNHCR and the UN Special Rapporteur on terrorism and human rights. Last year, he was part of the UK Government’s Ethics Advisory Board for the NHSX COVID-19 contact-tracing app, and I hope we will discuss many of those experiences today. Gus is associated with many universities and civil society groups, including the London School of Economics, Columbia University, and University College London. And, in 2018, he was awarded the International Champion of Privacy Award by the Electronic Privacy Information Center.
Next we will hear from Jessica Dheere who is the Director of Ranking Digi [audio cuts out – 06:22-06:44]. I have a feeling you may have just lost me and I can’t remember the last thing you heard, but I got lots of messages from Zoom there. So, I’m going to assume that I had introduced Gus, and I’m about to introduce Jessica, so I’ll start again.
Jessica Dheere, welcome to the panel. Jessica is the Director of Ranking Digital Rights, an independent research programme at the think tank New America that evaluates the world’s most powerful tech and telecom companies on their public commitments to protect users’ freedom of expression, privacy and other fundamental rights. Last year, she co-authored a report, Getting to the Source of Infodemics: It’s the Business Model. Jessica has been a fellow at the Berkman Klein Center at Harvard, and she founded the Beirut-based Arab digital rights organisation SMEX, where she launched the CYRILLA Collaborative, cataloguing digital rights law and case law. She was an inaugural member of the Freedom Online Coalition’s Advisory Network, and is a regular presenter at international internet policy events.
Our final speaker today will be Michael Veale. Dr Michael Veale who’s a Lecturer in Digital Rights and Regulation at the Faculty of Laws at University College London, where he’s been since 2019. His expertise sits at the crossover between computer science and technology law, particularly looking at the impact on fundamental rights of advanced data analysis, including machine learning and artificial intelligence. He previously worked at the European Commission, and holds degrees from Maastricht and the London School of Economics. He has advised the UN, the UK Government, the Commonwealth Secretariat and a number of professional bodies in the UK, as well as sitting on numerous advisory boards, including the Ada Lovelace Institute.
So, that’s quite enough from me. I will now, with great pleasure, hand over to our first speaker today, Leonardo Cervera-Navas. Welcome to the panel, Leonardo, and let’s hear your opening remarks today.
Leonardo Cervera-Navas
Thank you very much, Emily. Good morning, everyone. It is a great pleasure to be able to contribute to the discussions today, in this very prestigious forum, and with these distinguished colleagues, like you, Jessica, Michael and Gus. Let me just use 20 seconds to present my employer, the European Data Protection Supervisor, the EDPS, which is the data protection authority of the EU institutions, only of the EU institutions, not the data protection authority of the whole EU – that thing doesn’t exist yet. But nevertheless, we also have a role in the policy debate because we participate in the political discussions around data protection. Other institutions have the duty to consult us every time there is legislation with implications for data protection and often, not always, our opinions have some influence on the outcome.
Before we get to the concrete topic of today, if you allow me, Emily, I would like to start with a kind of general note, looking a little bit beyond data privacy and the power of the state. I think we should look at the big picture, which teaches us, humankind, that we have to do much better in the future, because we have seen, with this pandemic, a self-evident lack of an effective governance model at planetary level to deal with issues that are very serious and that really need a united response.
We were unable to stop the spread of the virus, something that shouldn’t have been so difficult if we had had the right tools and co-operation in place, and now we are unable to ensure a satisfactory vaccination campaign beyond rich countries. Which is another failure that, let’s hope not, but may bring us back to square one if the virus continues spreading, and things like that. So, it is pretty obvious that we need to change this, and we need to do it now. We cannot wait for the next disaster to hit us. Otherwise this is no longer a matter of politics, this is becoming a matter of survival. So, I very much hope that public opinion puts pressure on our national leaders, so that we go beyond the nation-state mechanism and move forward to true co-operation between the members of the international community.
Now, coming to the concrete issue of the technology and the pandemic, we have seen that technology has come to the rescue of humankind, and it has been used often, and we have no problem with that at all. On the contrary, our Supervisor, Wojciech Wiewiórowski, went as far as to say that, “If technology can help to save lives we have a moral obligation to use technology and to do data processing, when this is important for saving lives.” At the same time, we have reminded developers and Politicians that there should be three principles, or pillars, which should, let’s say, ‘anchor’ these technological solutions.
The first one is that technology should be used as a tool to empower, rather than to control, stigmatise or repress individuals. The second is that by no means should this technology be considered a ‘silver bullet’. Every one of these tools has some pros and cons, and they need to be subject to the threshold of effectiveness, necessity, and proportionality. And the most important one at this stage, I think, and you mentioned it already, Emily, is that we need to understand that these are measures necessary in a time of crisis, but as soon as we get back to normality, these measures should be quickly disbanded.
So, we have been repeating this mantra over these lockdowns and the pandemic, and I think we have succeeded reasonably, because at least at the EU level we have witnessed strong respect for data protection by developers. We have seen serious efforts to put data protection at the centre, and that was a good thing.
But now it is time to put the genie back in the bottle. As you mentioned, we have the perfect example of 9/11, when we saw extremely intrusive surveillance measures being put in place that might have been justified at that particular point, when the Twin Towers were collapsing and thousands of people were being killed, but might not be so justified 20 years later. I am not an intelligence expert, but I have the feeling that the terrorist threat is not as severe as it used to be 20 years ago. However, we haven’t seen a dismantling of those measures, and we run the risk that this pandemic is also used for increasing surveillance with data that is terribly sensitive, which is health data, which is very intimate to individuals.
So, as our boss, Wojciech Wiewiórowski, likes to say, “Mind the endemic during the pandemic.” We have endemic problems in the internet that the pandemic has maximised, and we need to also address these problems, and the European Union is making some efforts in this direction. I don’t want to be too long here in my introduction, but you are well aware of initiatives like the Digital Services Act, the Digital Markets Act, the Data Governance Act, and the future Data Act, which will probably lead to the development of a European health data space. Because in fact, one of the lessons I think we have learned from the pandemic is that Europe, and the international community in general, is lacking an effective health data space that can be used to help people and to fight these kinds of catastrophes, like the one we witnessed.
So, in conclusion, we think that there is clearly still a lot of potential in the use of data and technology, but we must make sure, at this point in time, that all this technology is no longer driven by the sense of urgency that we had 18 months ago, but is rather focused on achieving a safe and healthy digital future for the decades to come. So, this is my small contribution to the beginning of this debate, and I thank you very much for your attention.
Emily Taylor
Leonardo, thank you very much for those introductory remarks, a huge amount to unpack, I hope, in our discussions to come, but particularly the way that you started by recognising the failures of the international community, not just in the narrow world that we exist in, but more generally, to stop the spread. And the failures of international governance, which I think probably lie at the heart of a lot of the problems that are being created at the moment. I hope we can come back to those, and the three pillars, and also how to dismantle these things in the future.
But, let’s turn now to Gus Hosein of Privacy International and thinking about the vision of this safe and healthy digital future that we all want to create. So, Gus, take it away, and how far are we away from the safe and healthy digital future, would you say?
Dr Gus Hosein
Yeah. Leonardo’s contribution was reminding me of a conversation I had with one of our partner organisations last week, our partners from Colombia. And, just to explain, PI works internationally with a network of partners who have been dealing with these problems in their countries since day zero, like everybody else. And our Colombian partner was recounting the fact that when I asked, “So did your country – did your government respond with an app, like every other government did?” our partner said, “Yeah, there were 12 of them,” 12 different apps, and not only did they all fail but, more importantly, the lesson for us to take away is that they were all an attempt to mask, essentially, deep political and structural challenges that existed, and that have not been remedied, but at least there’s a legacy of 12 different apps.
And this is why the optimistic side of me thinks that the last 18 months, globally, has been a crash course, for everybody, in the wondrous and chaotic realities of tech, not the dreams that every government had when they were struggling in the early stages. They were struggling with PPE, they were struggling with the lack of testing capability, they were struggling with the lack of health professionals, and they thought, “Let’s mask that with an app.” I think we now understand what it takes for tech to play its role. And so, PI has documented, with the help of our partner organisations around the world, the ways that governments have responded.
And I just want to quickly identify, in very specific terms, the types of technological responses we saw. In the first stage of the pandemic, governments were essentially taking a 9/11 stance and using the same toys that they’d been building up for 20 years, including intelligence agencies. They moved to deploy tech to gather data around detecting the virus at a distance, and the most banal use of this was, say, temperature sensors, which we saw in widespread use to zero effect but which, nonetheless, made people feel comfortable, as if something was happening.
But then tech became much more sinister around the idea of enforcing detention. I mean, quarantining. In some countries, the line between detention and quarantining was not very clear. A public health exercise in one case and a law enforcement exercise in another. And then there was the mapping of movements of people, with the hope that we could track this pandemic in the way that we track, I guess, terrorists, or people for marketing purposes. And so, we had all the use of mobile phone mast data, essentially call records and movement records, to try to understand what was happening, and it achieved absolutely nothing.
We got these beautiful graphics that really made us afraid that people were congregating here, so therefore the virus might be worse, but it wasn’t actually the case. So, those all failed to be useful to people who needed to know more about what was going on, and to public health officials. So, that’s when governments did change gear, and this all happened in a matter of weeks. This wasn’t, like, thought through for months, with reflection and evaluation. They moved to contact-tracing, which replaced a public health response, a very manual and caring process, with an automated process.
There were also apps to report symptoms. In many countries we moved to testing at home, with the deployment of tests for us to do on ourselves. Eventually, we moved to using tech to manage healthcare, and a good example of that is the certificates we got when we were vaccinated, or to manage travel. We started collecting data on locator forms, which have had varied implementation challenges, and now we’re seeing tech deployed to enforce vaccination. And, in this second generation of tech, the results are very mixed. The more you involved people, the more positively it worked out, such as reporting symptoms, but the more it turned into an enforcement mechanism, the more people started to push back.
And I’m not trying to be cynical about tech here, I’m just saying this was the reality, and when the reality really hit governments, and the public health response, and when, particularly governments, started waking up to the cost of enforcement being the largest cost of all of this, they basically just gave up on a lot of these measures and just turned it into a very boring exercise of collecting data. And it was extraordinarily lazy data collection, such as even just the home COVID test. How many of us really submitted the results when we tested negative? Nobody did.
And then, travel tests. I travelled this summer, and what I saw, when I travelled across numerous borders, was that although they all wanted travel tests to occur, and they all wanted documentation, not a single government checked. Airlines haphazardly checked. It was all lazy but, nonetheless, this data is sitting somewhere, waiting for something. And, as we saw in the United Kingdom, there were apps and pings, and ping-demics, and eventually deletion. But, nonetheless, all this data is going into data stores, and at some point it’s going to be used, and that’s my final point. The stark worry, the danger we have going forward, and the unforgivable mistake governments made, is that they started to treat public health data, or data created in response to a public health issue, as just yet another data store.
They can do as they did after 9/11, as they do after every law enforcement challenge, and say, “Let’s just use the Data Protection Act, let’s just get access to this data for whatever we see fit,” and that is unacceptable. And we’re already starting to see the costs of that, when we see that lack of public trust in health systems, or even, lack of public trust, dare I say, in vaccines. This all contributes to this moment in time where people are lacking confidence, and that’s such a pity. As we approach, hopefully, the final phases of the pandemic, there should have been a huge celebratory moment of public confidence and trust in our institutions, and instead, we’re in this mess. I’ll leave it to the others to pick up on that.
Emily Taylor
Thank you for that uplifting note to end with, Gus, but you’ve covered a huge amount of ground as well, like Leonardo. The plethora of apps that popped up, and the sort of – you know, anyone who touches data science in their lives knows that what you often do is collect what you can rather than what is meaningful. And that’s exactly the response we saw, partly panic; you know, it’s easy to look back with hindsight and forget how terrible it was and how frightened everybody was, and how nobody really knew what to do. However, the, sort of, second phase that you’re describing is something I hope we can explore later, and those mixed results, the lazy data collection, and what the hell is happening to all this data that we’re busy collecting, that nobody’s actually accessing or using yet.
I’m delighted to see two questions already in the Q&A. Thank you, Americ Conrey and Nicolas Webb, for posting those questions. If you like the questions, you can vote them up, and please do continue to ask questions in the Q&A, because after we’ve heard from Jessica and Michael, we will be coming to a panel discussion.
I’m going to turn now to you, Jessica Dheere, from Ranking Digital Rights. Working at the intersection between civil society and business, and really looking for meaningful measures to impose some accountability on the incredible power of the tech platforms that Leonardo and Gus have touched on in their remarks. But thank you, the floor is yours.
Jessica Dheere
Thank you, Emily, and thank you so much for having me, and it’s really an honour to be here. So, I’m the Director of Ranking Digital Rights, where we advance corporate accountability for human rights by setting international standards for the way companies, especially big tech, should uphold their rights obligations. We produce rankings of these companies, hoping that they’ll compete on the level of human rights compliance that they are able to present in their policies and practices, and we analyse our evaluations of companies, hoping to produce some actionable insights on companies’ human rights performance for policymakers, investors, and other civil society organisations.
And what I would like to say here is that I don’t think we can consider state power any longer without considering the power of companies, and particularly the big tech platforms, and also in the context of COVID, where we have seen every human action being pushed into an online and virtual space. It’s clear that there’s a consolidation of power in companies, and, to echo Leonardo’s point, we need new governance models to deal with this.
Companies, as we all hear, can be just as powerful and influential as states at this point. And, as we saw with COVID, and also previously, I’m so glad that we’ve introduced, sort of, the counterterror crisis as the predecessor to the health crisis. Because I think, on the face of it, people might think they’re two entirely different things that should be treated separately, but they’re actually both crisis responses, where we’re pushed, societally, into making compromises with our rights in order to deal with what seems to be a clear and present, sort of, urgent problem. So, I’m happy to see that connection made.
And so, one of the first places where we saw a lot of public-private, sort of, collaboration was around the anti-terror efforts, particularly after the Snowden revelations, when ISIS was such a threat. And now, we’re seeing public-private partnerships, sort of, consolidating between governments and companies to deal with the health crisis, and they’re being done, sort of, in the same secretive manner as before. We don’t know exactly what the agreements are; we don’t know exactly what data is being shared between these entities, in many cases. And so, as Gus was saying, you know, it’s a global crash-course-in-tech, and in my mind it’s also like this ‘global data grab’. Like, “Let’s get what we can and see what we can do with it, or maybe we’ll need it, and so let’s collect it anyway.”
So, you know, one of the first things that we saw the Trump administration do was to call Tech Executives to the Oval Office. It was a private meeting, and we continually, sort of, see these sorts of things. That was in the, you know, first or second quarter of last year. And then, in July, we see that big tech, Apple, Amazon, Google, and Facebook, have posted record profits, during a pandemic that resulted in a downturn for so many other businesses and individuals. But what happens then is that all the data that these companies are collecting, and the profits that they’re generating as a result of that data and advertising, and the fact that all of our activity has been funnelled online, are now creating a kind of economic power as well that states are certainly not going to ignore.
So, what we have seen with COVID is that we’re seeing an evolution of governance, and we need to look at it that way and think of governance as more distributed. We have company power, we have state power, and we’ve got to elevate now our civil society power in these interactions. It’s going to look a little bit different than governance has traditionally looked, and we need to understand, sort of, what our tolerances are for the compromises that we make in these situations, and what the triggers are for, sort of, understanding what the problems could be. We all know that states of emergency are, sort of, frequently used by authoritarians to suspend democracy and citizen participation, with the justification that to protect public safety and keep things from getting worse, we have to act quickly without public scrutiny.
I think in a digital age we need to re-define public safety as more of something that the public is actually involved in figuring out: “What makes it safe?” That is the real promise of the democratisation of technology. I think we all thought it was going to happen one way, but I think we haven’t achieved that. And I think what we are talking about also is short-term considerations versus long-term considerations. Yes, public health surveillance is necessary for understanding outbreaks, but we need to make evidence-based decisions on the efficacy of any, sort of, technology intervention, we need to perform rapid analysis, we need to do human rights impact assessments on these applications before they are deployed, and to understand, sort of, what the trade-offs are. And as soon as we start to see that the data says they’re actually not helping, as we’ve seen in a lot of contact-tracing implementations, then we need to start over, or reconsider our approach.
So, I guess, not to take too much time, ‘cause I think I might have already spoken for five minutes, I guess we’ll talk later about what some of the solutions are, and I’ll just add one other, very, sort of, simple comment. With all of the data that’s been collected, I think data breaches are also much more likely to happen, and one of the things that we’ve seen in a lot of the apps and other sorts of applications that have come up during the pandemic is that the policies, which is what RDR, Ranking Digital Rights, evaluates, are not addressing data breaches and other, sort of, essentials, like access to policies in the right language, etc., and that there’s a lot that can be done simply to prepare for crises in the future. And so, I look forward to the rest of the discussion.
Emily Taylor
Thank you very much, Jessica, and of course Ranking Digital Rights, I may be wrong, but it felt like a response to the Snowden revelations, and the realisation of the power and lack of accountability, and so, you know, it’s great to have you here today, and that’s a very admirable process, bringing together business and civil society indicators. Maybe we can talk later about how that governance can evolve, as you were saying, and others are calling for, and the point that you mentioned about the power of involving people, and what that looks like, and what the barriers to that might be.
But let me first turn to Michael Veale from University College London. Michael, I’m right in thinking that, as well as being a privacy expert, you were also involved in the development of a protocol, in the very early stages of the pandemic, to try and influence app development in a rights-respecting way, and maybe you can touch on that. Very interested to hear your remarks as you introduce this topic. Thank you, Michael, the floor is yours.
Dr Michael Veale
Thanks, Emily. Yeah, so back in March last year, it’s easy to look back in hindsight at different trends, but there was a very big concern in March among myself and the colleagues we were talking to, Privacy Technologists, Wireless Technologists, Lawyers and policy experts, Epidemiologists. We came together, all very concerned that what was being developed and envisaged was highly individualised technical solutions, very open-ended in nature, that would apply a really problematic standard across the world. They would do a few things: they would create a dataset of ‘who saw who’ in society, which there would be a huge desire to mine in future.
This is the kind of dataset that you already have from, you know, GCHQ and the NSA, but which can’t really be analysed for your day-to-day policy of optimising society, and many countries don’t have these kinds of datasets in this fine, granular detail. And this would effectively be a platform being built for population management during a pandemic: highly individualised population management, very coercive individualised solutions, that would head towards a pretty discriminatory world.
This was being sold, however, by some governments, Singapore was really early in this approach, as a contact-tracing app. But when you looked under the hood at what you could build on this, it was not just a contact-tracing app. It was a platform for further interventions.
So, in line with privacy by design and privacy engineering, and the like, we got together in March, colleagues and I, and we built, very rapidly, an open-source alternative. And we said, “If you want to do this Bluetooth contact-tracing that people are talking about, there is a way to do it without centrally collecting any data at all, and there is probably quite a robust way of doing it that meets this criterion, and you can pledge and say, this can do this and it cannot do more.” And we worked heavily with Epidemiologists, we worked heavily with some public health agencies, although they were very busy at the time. So, this wasn’t exactly a conducive lockdown atmosphere for, kind of, some big public debate about things. You had to actually design something and put it on the table, so that at the moment of decision for a policymaker, you could say, “Well, you could do that, or there’s this fully-fledged, mature technology on the table,” because privacy technology is always, kind of, dismissed as immature.
We then also had to start talking with platforms, like Google and Apple, at this point, and Google and Apple later based their Exposure Notification protocol on the protocol we developed, DP-3T. This was an interesting mix, because these phones in our pockets are not effectively owned by us, you know, insofar as these companies can deploy any technology they want on them and control the way they operate and work.
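A minimal sketch of the decentralised idea Michael describes, not the actual DP-3T or Google/Apple Exposure Notification code (those derive rotating identifiers from daily keys with a cryptographic schedule, omitted here): each phone broadcasts short-lived random identifiers over Bluetooth and remembers the identifiers it hears; on a positive test, only the user’s own broadcast identifiers are published, and every other phone checks locally whether it has seen any of them, so no central server ever learns who met whom. All names and parameters below are illustrative assumptions.

```python
import secrets


def new_ephemeral_id() -> bytes:
    """A short-lived random identifier to broadcast over Bluetooth.

    Real protocols derive these from a daily key with a cryptographic
    schedule; a plain random value is used here for brevity.
    """
    return secrets.token_bytes(16)


class Phone:
    """Toy model of one handset in decentralised exposure notification."""

    def __init__(self, owner: str):
        self.owner = owner
        self.broadcast_ids: list[bytes] = []   # identifiers this phone has sent out
        self.observed_ids: set[bytes] = set()  # identifiers heard from nearby phones

    def broadcast(self) -> bytes:
        eid = new_ephemeral_id()
        self.broadcast_ids.append(eid)
        return eid

    def observe(self, eid: bytes) -> None:
        # Stored only on the device; nothing is uploaded during normal use.
        self.observed_ids.add(eid)

    def report_positive(self) -> list[bytes]:
        # On a positive test, the user consents to publishing only their own
        # broadcast identifiers; their contact history never leaves the phone.
        return self.broadcast_ids

    def check_exposure(self, published_ids: list[bytes]) -> bool:
        # Matching happens locally, so the server never learns who met whom.
        return any(eid in self.observed_ids for eid in published_ids)


# Usage: Alice and Bob are near each other; Alice later tests positive.
alice, bob, carol = Phone("alice"), Phone("bob"), Phone("carol")
bob.observe(alice.broadcast())          # Bob's phone hears Alice's identifier
published = alice.report_positive()     # Alice uploads only her own identifiers
print(bob.check_exposure(published))    # True  -> Bob is notified on-device
print(carol.check_exposure(published))  # False -> Carol learns nothing
```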
As it, sort of, moved forward, you know, this got quite intense. Some countries were heading really heavily towards what we’d developed, this decentralised approach, very early on. Some countries were captured by industry: companies were worried that, as Jessica said, in a downturn they would lose a lot of business, and they saw centralised data storage, management and analysis as a great business opportunity, particularly in Germany. So, Germany started pushing a pretty non-data-protective protocol, as a result of industry capture by a few companies that were very close to the Chancellery, and various other dynamics played out.
You know, there were diplomatic letters between these countries; I was, sort of, seeing hints of these things happening from being close to the scene. You know, some countries threatening others and saying, “You won’t get a bailout if you go for a decentralised system.” Like, this was turning into a really mad and intense debate at this point. But what did it really show in the end? What happened? Well, it was important to get trust for the deployment of this contact-tracing technology, if this was the technology we wanted at all, because only if people could be assured it was rights-protecting could you avoid it being mandated on people’s devices and keep that freedom. And so, not collecting data was really key there.
And in some countries this trust ended up working and paying off. Even despite early hiccups, in England and Wales, you know, that was where we saw huge adoption. That’s where we saw studies saying we saved thousands of lives, pretty robust studies in Nature, and actually, you can scrutinise them and they do stand up to scrutiny. In many countries, trust was lost at an early stage, and this debacle of a discussion, like, led to this not really taking off. And despite what we were telling most of these countries at this point, after developing the protocol, that you need to integrate it now into your health system, that the base technology is there, that you need to provide people with codes and integrate it further, that wasn’t happening, and that gap really led to a lack of adoption in many places. And in the US, for example, it was a non-starter because of the fragmentation that was built into the states to begin with, effectively.
So, where does it leave us? The lesson from this, though, is that the state lost quite a lot of power here, or it was illustrated that it didn’t have it. Because when Google and Apple decided to do this, they, and Apple in particular, also foreclosed the ability for states to use other types of software. And before Google and Apple made that decision, nobody could use any kind of software that would work in this way; countries that only had Android phones just about could, other countries couldn’t.
The other thing this shows is that this isn’t about data, right? We developed, arguably, one of the most privacy-preserving systems you could imagine and envisage at this point, which was simple and did not centralise data. It still has implications for control. This is still a protocol that determines how people act in society, and asks people to avoid others they have come close to. In the end, it was just notifying people, so that notification wasn’t legally binding in almost any country.
But still, this isn’t about data. So, when we think about the state and data, maybe we have to think about what role individual states have over international companies, and how that could be mediated in line with surveillance law, as well as other worries we have about human rights. But this isn’t about data, it’s about protocols. Because would you give companies the ability to run anything they want on your device? They can do analysis that is as invasive as you want without centralising any data, closing their eyes to it and saying, “We can’t see that.” In this case, it was public health, and we had some steer over this; in the next case it may not be. So, let’s abandon data, let’s abandon discussion of just states and people, and get a bit more in-depth.
Emily Taylor
Thank you. Thank you very much, Michael, and all of our speakers, for introducing the topic and covering such a wide range of issues. Michael, you talked in some depth about your experience with the contact-tracing apps, and Gus picked up a few points on this as well. An important trend, I suppose, is the move to a technical solution for something that had primarily been a public health intervention done by very skilled individuals. Yes, maybe it wouldn’t scale in a pandemic, but it seems to have gone on perhaps completely orthogonally to those communities and without much involvement.
It’s safe to say the Q&A is going a bit crazy here, so, rather than going through my questions to you, I think it’s probably going to be more fulfilling for everybody if I turn to the audience questions and throw them at you. But because of the webinar format, and the constraints of time, I think, if I may, I will just read the questions and summarise them, to the best of my understanding. Maybe we can take just a couple at a time and then, not everybody has to comment on everything, but just to get, you know, anything that you would like to react to. So, I’ve got a question here from George Berry, who asks, “To what extent did the constraints insisted on by Apple and Google with their exposure notification system limit the privacy implications of these systems in the field?”
And then, there’s another app-related question from Americ Conrey, who asks, “Thinking of the strategy to contain the pandemic, will the risk of a ping-demic be counterproductive?” And of course, in the UK, we have experienced a ping-demic as well. “I mean, users will not pay any more attention to messages coming from their mobile phone. How do you keep an effective alert system when people are getting more and more used to the risks?” And I think this is a, sort of, classic signal-to-noise-ratio type of dilemma, isn’t it? So, could I just pause there and see who would like to come in on those debates? Would anybody like to kick us off, or shall I pick on someone [pause]?
I’m going to go to you first, Jessica, and then other people can get their thoughts together. I think this goes to your comment that there has never been such a profitable period for big tech, has there? And I wonder, in addition to those questions about the app, how does it benefit big tech, those providers, particularly the ones who build the phones, who want to keep us in their environments, even if it seems very privacy-respecting? And Michael, this, I think, goes to your work as well: what can they see in the end? Is it actually just creating a walled garden in the name of privacy-respecting implementations? But Jessica, maybe I could start with you.
Jessica Dheere
Sure. I mean, I think I’ll leave the direct question on the efficacy of what Apple and Google built to Michael. My understanding is that the protocol that they used stays locally on the phone and the user has much more control over it, and that it has been adopted by a lot of state governments here in the United States, etc. And to me, at least, it seemed to be one of the better solutions out there, although probably not perfect. And in terms of what it does for Apple and Google, I think, you know, these tech companies are all profit-oriented, I mean, they are public companies that are here to make a profit. I think they also, you know, legitimately want to do public good, and I think the line is that, you know, we all want to take advantage of the affordances that technology brings, but we have to do that without compromising our rights. Because if our rights are compromised, then we can’t really take advantage of things when we need them in the way that we want to.
So, I’m not 100% sure, beyond that, what they want. Apple and Google are both trying to position themselves right now as, you know, protectors of our privacy. Apple has long considered privacy, it says in its public materials, a fundamental human right, and Google also has a human rights policy. But there are still a lot of questions about, sort of, what the public messaging and PR benefits are of these privacy regimes, and we see that with Google, you know, they’re sort of making a transition now. Apple’s always been very much a first-party data holder, and Google’s been third-party, and it’s, sort of, making a transition now, to, you know, the Federated Learning of Cohorts. So, I think all of this has to be calculated at some level, as part of the overall, sort of, positioning as, you know, safeguarders of our privacy, but, you know, I have to be sceptical as well.
Emily Taylor
Thank you, and, Leonardo, you were talking about new governance models and also the three principles that should anchor these technologies, including empowerment rather than control. Michael talked about these apps primarily as a mechanism of control. From your perspective in the European Data Protection Supervisor’s office, when you look at the power struggle that ensued between big tech and democratically elected EU states, it didn’t look like democracy won on that app battle. It looked like, and Michael mentioned this, the big tech companies were really able to impose their policy solution on countries. Is that what you meant? Is this what we all mean by the new governance model that should emerge?
Leonardo Cervera-Navas
Most certainly not, certainly not. The new governance model has to be much more democratic than the management boards of corporations that are moved by profit considerations. But I must say that, in the crisis moment we were all living through, there was a lack of any meaningful governance. And as Michael explained, there were thousands of people working totally independently, without any sense of direction, disorganised, with 300 ideas, and things like that.
We welcomed the move that Apple made, to take some social responsibility and put on the table a proposal that anyone could take up, and our Supervisor was very vocal about it and we welcomed that. Not because we are in love with these companies, as you can imagine, but I think it was a way to put some order into this huge chaos that we experienced. And, in fact, I think this is what we have to do: I think we have to apply simple management principles to the international community. We have to do as we do with our teams at work, we have to do what is right, and we have to do things right. And in this crisis, none of this was respected; everybody was acting in a completely disorganised and chaotic way.
I don’t know. I guess, in the UK, it was the same between policies. I had my kid studying over there and it was a different policy in Edinburgh, from Glasgow, and from London, and in Spain it was the same. I mean, you crossed the border between Málaga and Granada, and here you could stop, and there you couldn’t; it was total disorder. So, I think, for the future, we need to have a little bit more governance, and there is a time for debate, and there is a time for action. And when the debate is over, then we have to act, and someone has to have the power to take some meaningful decisions. Of course, with democratic controls and so on, because the alternative is simply what we witnessed, which is total chaos. Thank you.
Emily Taylor
Thank you. Or, I suppose, a question of learning by doing in the midst of complete panic. Michael and Gus, I’m sure you want to come in on the specifics of this question. Can I also add in just one other element that’s being raised here by Nicola Webb, or Nicola Swebb: “I don’t know if either of you have any commentary on the Estonian ID card as a potential model, which provides useful data to government, while also empowering individuals over the control of their data?” And Nicola is suggesting, you know, maybe this could be a model that could be replicated. But I’m sure you’ve also got a lot to say about the first two questions. So, maybe can I go to you, Gus, first, and then you, Michael? Thank you.
Dr Gus Hosein
Yeah. So, to address the ping-demic aspect, just to be clear what the ping-demic was: it was when the apps started to notify people a lot about the fact that they had been exposed to people who had tested positive. This was the moment that essentially broke the app model, because the app was always there for calmer times, when there was a lockdown, for instance, and there was high risk outside, so when you went outside, you would at least be notified. The ping-demic occurred when there was a high prevalence of the virus out there, and yet we were unlocked, and the only thing that we had was this app, so of course the app was going to go nuts, and just go crazy. But it was essentially being expected to respond to government policy, not public health policy but government policy, and so that’s why the app essentially couldn’t withstand that kind of moment.
So, it’s only fair that it became a problem, and that sucks, it really sucks, because you want an app that people trust, that they will use and adhere to. Not an app that they’ll delete, or ignore when it becomes inconvenient. And I don’t mean inconvenient ‘cause it stops you from going to the pub, I mean inconvenient because it stops you from going to work while everybody else is going back to work, and your employer expects you back at work, and will they believe that, just because you got a notification on your phone, all of a sudden you should be sitting at home? Society’s not ready for that kind of conversation on the back of an app.
I’ll leave it to Michael to discuss the Apple-Google thing, but on the Estonia example, just to be clear, Estonia has this extraordinary ID system and often, other governments look to it as a model. The problem with making it a model for any real country with a diverse population amounting to more than a million people is that Estonia does not have a diverse population and has an extraordinarily small one. So, it’s a tight solution for a small environment, versus how do you deploy an identity system, as we’ve seen with vaccine passports, or any type of passport, and enforce it across a very diverse environment? Where there will be people who are enfranchised and disenfranchised, people who want to be part of a system and people who don’t want to be part of a system. Can you possibly check these IDs everywhere that government wants to check them, versus where businesses want to check them, versus where people actually want to disclose? And Estonia is not built for that post-Trump, post-2016, post-9/11 world.
Emily Taylor
Thank you very much, Gus. I’ll turn to you now, Michael, and can I also just add to your bucket and your brain a question from Emily Harding of Chatham House, which is, “What would you consider to be an acceptable level of government interference in health data? And are there examples of a healthy balance of interference and personal data protection?” I think you’ve talked about how you tried to do that with your protocol, so maybe a reflection, as well as your responses to the other questions: what have you seen working well that you would highlight?
Dr Michael Veale
Actually, I think I can blend that in quite easily. So, the first thing, one of the questions was, “Was it good for them?”, Apple and Google. This Exposure Notification system was not good for Apple and Google. This was not a profitable system for them. This is not what they wanted to do, actually; I think they really wanted to stay out of this to begin with. What’s interesting is that this reveals a capacity they have, and didn’t really want to reveal, for orchestrating protocols on devices: the sheer power they have to actually make global systems function in certain ways. These were cards in their hand they did not want to reveal, and it wasn’t revealed at a time of their own choosing. Yeah, Apple was forced to make a roadmap, a public roadmap; they’ve never done that before with any technology. This was uncomfortable for these companies, which was interesting, and of course, it brands them in a certain way, but that’s not really the main benefit. What they’re trying to avoid through this is governments using their platforms in ways that are unhealthy and unsavoury, and that’s when we get to this question of the ping-demic as well, actually, but I’ll come to the ping-demic in a second. One of the core challenges with protocols is that Apple and Google had to do something, right? If you wanted to have any apps at all, they had to unlock their system and make it significantly less of a barrier to use the sensors on the phone. And this idea of unlocking the system, of course, has global effects: some countries without data protection laws, countries with very limited rule of law, can in many ways abuse that power and actually turn phones into any kind of governance mechanism they want. They can use them to quarantine, send people home, do X, Y, Z.
So, if the answer was, “These companies should step back and let states do what they want with their citizens’ phones,” then we could maybe control that in Europe, maybe control it among Council of Europe signatories and so on, but we can’t control it in many countries. So it’s a question of what we legitimately want these phones to do, who should be controlling them, and what the nature of a legitimate decision is, in a global context, right, because these are global infrastructures. So, when we think about governance and state control, that’s a bit different.
And the last quick thing I want to say about the ping-demic here: the ping-demic happened because, as Gus said, there was a clash between public health policy and what these apps could do and be developed for. The ping-demic was a result of no testing capacity in the UK, the fact that England and Wales could not say, “Take a PCR test instead of self-isolating,” because that would overwhelm the testing system. Therefore, they left it as isolate, because they wanted to open up. These were incompatible goals. So, all of these technological factors really had an impact, and maybe you might want the state to have more design power over these apps, to be able to have more options. But if you give them more design power, all states would maybe have to have more design power, and then you lead to those illegitimate outcomes, which endanger and infringe human rights.
Emily Taylor
Thank you very much, everybody. I knew this would happen, and we’ve got far too much to talk about for the timeslot we have available. I could come back to each of you many times, and I was just thinking in my head, like, “Do I have room to use the last two minutes to give you a new round?” And I probably don’t, I’m afraid. But I’m going to try to ask you, in 20 seconds, no more, could you just, thinking about the road ahead, tell us what single thing would make things better, or help dismantle this surveillance architecture that we have put in place? Can I start with you, Leonardo, then Gus, then Jessica, and then finally, Michael? But please, very, very brief, in the interests of time. Thank you very much.
Leonardo Cervera-Navas
Thank you. I would call for increased co-operation between the EU and the United States, to start with. And also, more concentration of enforcement power in the EU, to be able to face these big technological companies in a more effective way. Thank you.
Emily Taylor
Thank you very much, and thank you for being so concise. Gus.
Dr Gus Hosein
The best organisation on the planet for these issues is one based in the United Kingdom called medConfidential. Go to their website and look at what they call for, for the future, but essentially, it’s three words: any system should be safe, consensual and secure. As a result, the existing systems we have, a lot of them have to be shut down.
Emily Taylor
Thank you. Jessica.
Jessica Dheere
I guess the one thing that I would say is that I think we need to prepare for a crisis in advance and we need to measure that preparedness. This will happen over and over again unless we, sort of, understand that preparing for a crisis should be, sort of, part and parcel of daily life and business.
Emily Taylor
Thank you very much. Michael.
Dr Michael Veale
Look at infrastructures and not data. Look at global agreements, which would have to have such a legally-binding effect that they reach into the public security aspects of a state, which is very difficult. And don’t just involve the US and the EU; the Global South is where this is really playing out, and these protocols, and the underlying systems, arguably affect them the most. And they are not at the table, and no-one is inviting them in.
Emily Taylor
Thank you very much. I’d like to thank all of our panellists for your really thoughtful and well-informed contributions today, and for the way that you have responded to the questions. Thank you to the audience for your questions and participation. I’m going to bring this session to a close now, as it’s the end of the time. I would love to continue debating these topics with our panel and with you all. Thank you very much for attending today’s session and I hope we can continue this dialogue into the future. Thank you.