How do you decide who to trust with your data? Professor Robin Mansell from LSE on why protecting privacy online is so difficult

You recently co-authored a book about how artificially intelligent platforms are collecting and processing data – what inspired you to write the book?

We started with this question of why these platforms grow as big as they do. Is that growth inevitable? Why have state governments just let them grow without intervening until now? One reason is that most governments are interested in a kind of technology race, and so the Americans have pushed and allowed their Silicon Valley companies to dominate the global market. We also talked about totally different business models, because giving your data to platforms for free so that they can nudge you into buying more things from their clients is not the only business model in the world. There are collaborative, collective models, but very little investment has been put into them. And so no one really knows whether they could be sustainable or not.

Is public outrage what’s needed to make those alternatives a reality?

I think it’s part of the story. But collective action like that is usually stop-and-go. We see it in the environmental movement. It has an impact, certainly. But I think there needs to be a bigger impact, whereby the institutions, whether they’re the courts or regulatory agencies, actually get the message that they need to shape the behaviour of these data-collecting companies. They need to create incentives so that it makes economic sense for platforms to do business differently.

Do you think the pandemic has changed how people are thinking about surveillance and privacy? Are we prepared to accept more intrusion than pre-Coronavirus?

I think we might have been. But the trust that people might have been willing to put in government has been completely broken, as a result of the A-level fiasco and the failures of the track and trace system. Why should people respect the notion that they should be monitored if monitoring leads to nothing? More illness, fewer kids in school, people self-isolating because they simply do not know if they have the virus or not. Once you lose trust, getting people to believe in a system that introduces more extensive and integrated data collection is difficult. That said, and this has been true for the last decade or more, people express concerns about their data privacy in surveys but will then go and use these apps or platforms without thinking about the consequences.

What are some of the consequences of sharing this sort of data in the longer term?

One thing to bear in mind is that the empirical evidence on whether or not sharing this data affects outcomes such as voting behaviour is ambiguous. There are some people who say absolutely it does, and other people who say no it doesn’t. But I think what is more concerning is the general way in which the proliferation of that kind of information changes the whole sense of society and public discourse. The notion of what’s “good behaviour” and “good speech” in a democracy starts to change and become normalised. I certainly see this happening in the United States – it’s normal for politicians to be uncivil, and it’s therefore normal for people who follow them to be uncivil.

That is problematic, but it isn’t really to do with technology, in my view. It’s more about what behaviours we find acceptable in society, and what behaviours we don’t. I think that’s gradually changing as people become more used to a really fractured populism, which is problematic. The fact that we have as much information, misinformation or disinformation as we do is a symptom of those changing values and of the changing notion of what our culture should be about and how to be civil to each other.

You’ve described the internet as a ‘runaway experiment’. What do you mean by that, and what is needed to bring it back in line?

I think the big question now is where the investment will come from to develop new ways of doing things. My hunch is that if businesses do get the message, if they start competing on whether or not they protect people’s privacy, then we might be on a different pathway in the future. But they can’t just treat a fine from the Information Commissioner’s Office as a cost of doing business. That’s no longer viable. On the regulatory side, the oversight of the behaviour of these platforms needs to be independent, rather than an arm of the state. We’ll never get it perfect. But it seems to me that if you invest in those kinds of institutions that have the responsibility and mandate to think about a variety of interests, including those of citizens, then you have at least a chance of shaping the online world in new ways.
