Next 20 years: Is privacy dead?

14th July 2020
Over the past few years, large organisations have faced public scrutiny over privacy, from media investigations to individual whistle-blowers. As we allow more devices and systems into our lives, how might this impact us? And what emerging areas might protect the individual? Imperial Tech Foresight wanted to find out more and met with Dr Hamed Haddadi to discuss the future of user-centred systems, privacy and human-data interaction.

Firstly, what are the primary shifts in and around privacy?

The main change is that large digital organisations are coming under pressure from regulators. The European Union, California and Singapore are particularly progressive. We are seeing these regulators take a more advanced stance on privacy and adapt their frameworks to new technologies; an example is GDPR in the European Union. The days when you could extract as much information as possible from individuals without anyone being informed are slowly disappearing. However, we can’t stop there. Technology is continually shifting, and regulators need to keep up with the changes. For example, we are seeing new devices entering our lives that collect data through less apparent methods than our online presence. Often these almost invisible devices obtain sensitive contextual information, such as the emotion in our voice, cultural interests and purchasing behaviours, by interacting with a range of touchpoints. As a result, companies such as Amazon can understand consumer behaviour in such detail that to some it can feel like “prediction”. Over the coming years, we need to define new regulations that address these novel systems, which can obtain sensitive insights from data.

What is the most significant assumption we have about pervasive systems?

Firstly, people assume that these systems deliver value for them. In fact, the benefit to the user is minimal. Often these devices offer novelty features such as turning off the lights or changing the temperature. It is like having an extremely low-capability servant that performs basic actions for us. Individuals pay for the service but get very little in return; instead, the providers get value from the insights the device gathers for them. Behavioural data can then be sold: for example, what temperature you like in your room when you are awake and asleep, or the emotion in your voice when you purchase certain products. The data on your everyday behaviour is sold very cheaply. We have 200 of these devices in the Systems and Algorithms Laboratory (SysAL), and each of them communicates with tens of thousands of IP addresses, many of which we cannot tell where they are based or who operates them, making them effectively untraceable.
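To give a flavour of this kind of measurement, here is a minimal, purely illustrative sketch (not SysAL’s actual tooling) that counts the distinct destinations a single device contacts, assuming you have a packet capture of its traffic and the scapy library installed; the device address and file name are hypothetical.

```python
# Count the distinct destination IP addresses a smart device talks to.
# Illustrative sketch only: assumes a pcap of the device's traffic and scapy.
from collections import Counter
from scapy.all import rdpcap, IP

DEVICE_IP = "192.168.1.50"               # hypothetical local address of the device
packets = rdpcap("device_traffic.pcap")  # hypothetical capture file

destinations = Counter(
    pkt[IP].dst
    for pkt in packets
    if IP in pkt and pkt[IP].src == DEVICE_IP
)

print(f"{len(destinations)} distinct destination addresses")
for dst, count in destinations.most_common(10):
    print(f"{dst:>15}  {count} packets")
```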

A dangerous assumption some people make is that if you have nothing to hide, you shouldn’t worry about privacy. What they don’t understand is that if you take a genetic test, you are revealing information not only about yourself but also about your unborn grandchildren. In 50 years, this information could potentially be used in ways we haven’t imagined yet. Envisage your genetic data affecting whether you match with a spouse, or even whether you are approved for health insurance. The technologies we start using today might have significant consequences for our future and our descendants’. We need to think carefully about the potential unintended consequences that may arise from the use of these products.

Can we fully understand the unintended consequences of these new technologies?

We can’t. Applications being developed today will result in new behaviours from both organisations and users. For example, 15 years ago we wouldn’t have been uncomfortable with someone recording us in our homes. Now people think twice about appearing in videos or pictures, because we don’t know where and how that information might be used. It might not even be our own data; an image captured by others can still affect us and lead to adverse consequences. In the best case, these consequences only affect us financially; in the worst case, they affect us socially. Small acts, such as jaywalking or passing through an area, might have detrimental consequences ten years later if someone holds that information.

Do you think there needs to be further regulation?

What is in the regulation is pretty good. It is the enforcement of this regulation which has not been satisfactory. Companies can get away with serious breaches. The regulators often don’t have enough bandwidth to deal with all the organisations. The ICO can chase the big companies, but finding the thousands of unknown data brokers in different countries is complex. It requires investigative research on the network connections from the devices, which we do at the lab, and it is only getting harder as the systems get more complicated. Even though GDPR fines are hefty, the ability to pursue offenders is therefore limited, as it requires time and investment.

We have discussed emotion detection and privacy; what technologies might help preserve privacy?

It is easy to see technology as the answer. Although we are working on techniques to preserve privacy, we first need to create regulation that protects the consumer. Edge computing will play a significant role, alongside technologies such as federated learning and analytics over encrypted data. The technology is a step forward, but it is not enough. We need to move away from data collection being the norm and put pressure on organisations to change.
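As a rough illustration of the federated-learning idea mentioned above, here is a minimal sketch (not a specific framework or the lab’s own code): each device trains a model on its own data locally and shares only the model weights, so the raw behavioural data never leaves the home.

```python
# Minimal federated-averaging sketch (illustrative only, not a production framework).
# Each "device" fits a simple linear model on its own local data; only the model
# weights are shared with the aggregator, never the raw data itself.
import numpy as np

def local_update(weights, X, y, lr=0.05, epochs=20):
    """One device's local training: plain gradient descent on squared error."""
    w = weights.copy()
    for _ in range(epochs):
        grad = 2 * X.T @ (X @ w - y) / len(y)
        w -= lr * grad
    return w

def federated_average(updates, sizes):
    """Aggregator combines device models, weighted by local dataset size."""
    total = sum(sizes)
    return sum(w * (n / total) for w, n in zip(updates, sizes))

rng = np.random.default_rng(0)
true_w = np.array([1.0, -2.0, 0.5])

# Simulate three devices, each holding its own private data.
devices = []
for _ in range(3):
    X = rng.normal(size=(50, 3))
    y = X @ true_w + rng.normal(scale=0.1, size=50)
    devices.append((X, y))

global_w = np.zeros(3)
for _ in range(10):  # communication rounds
    updates = [local_update(global_w, X, y) for X, y in devices]
    global_w = federated_average(updates, [len(y) for _, y in devices])

print("learned weights:", np.round(global_w, 2))  # close to true_w without pooling data
```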

How might we empower the consumer?

One area is transparency: researchers are exploring ways for consumers to receive clear insights into data-sharing practices through visualisations rather than terms and conditions. We need to create understandable systems that clearly describe the data practices of the devices we install. That would need to include the temporal aspect: how long the organisation keeps the data, who is using it and what it is going to be used for. At the moment, GDPR allows companies to hold data for a couple of years for legal purposes, but outside financial records and judicial systems there is no hard legal requirement to delete it. These devices operate on an “eat as much data as they can” model, in the hope of extracting something insightful in the future.
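One way such a description could be made machine-readable is sketched below; this is a purely hypothetical format invented for illustration, not an existing standard, and the declared practices and one-year threshold are made up.

```python
# Hypothetical machine-readable "data practice" declaration for a smart device.
# Illustrative sketch only; the fields and values are invented for this example.
from dataclasses import dataclass
from typing import List

@dataclass
class DataPractice:
    data_type: str         # what is collected, e.g. "room temperature"
    purpose: str           # what it is used for
    recipients: List[str]  # who receives the data
    retention_days: int    # how long it is kept

def flag_long_retention(practices, max_days=365):
    """Return the declared practices whose retention exceeds a chosen threshold."""
    return [p for p in practices if p.retention_days > max_days]

declared = [
    DataPractice("room temperature", "thermostat control", ["vendor"], 30),
    DataPractice("voice recordings", "feature improvement",
                 ["vendor", "analytics partner"], 730),
]

for p in flag_long_retention(declared):
    print(f"Long retention: {p.data_type} kept {p.retention_days} days by {', '.join(p.recipients)}")
```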

Secondly, there needs to be certification for the devices that collect data about us. We need a way to audit the devices that collect our information. GDPR is great, but it doesn’t verify the products, which means that reputable retailers end up selling devices that don’t adhere to GDPR. Currently, certification is obligatory for electrical appliances and critical-care medical devices, and the same should be true for devices that collect your private data. If your data is leaked, it can affect you in many ways, both mentally and physically. We need collaboration between regulators, scientists and distribution channels to create further laws and standards.

What about the future? 

Better hardware resources and software capabilities are necessary if we are to trust the code and the devices we purchase. Nowadays, there are a lot of untrusted devices and unvetted software. We will see more options for trusted computing techniques that enable privacy-preserving analytics and encryption, along with emerging certification standards. Over the next five years, better trusted hardware and secure ways of doing analytics will be critical.

New designs that take individuals and their understanding into consideration will also be integral, so that people can understand and manage how these systems use their data.

Dr Hamed Haddadi is a Senior Lecturer (~Associate Professor) and the Deputy Director of Research in the Dyson School of Design Engineering, Faculty of Engineering, Imperial College London.

Photography from the Dyson School of Design Engineering student showcase by Dan Weill, Imperial College London.
