Next 20 years: Responsible engineering and design

14th July 2020
The convergence of artificial intelligence, robotics, and big data is heralding a new Industrial Revolution. Every sector of society is being affected by these changes. Amid productivity increases, cost efficiencies and digital transformations, ideas around ethics and responsibility are becoming increasingly important, especially after numerous scandals and public breaches of data. But what does responsible technology mean? And how might this evolve over the next decade?

Imperial Tech Foresight met up with Professor Rafael Calvo, Director for Research at the Dyson School of Design Engineering and co-lead at the Leverhulme Centre for the Future of Intelligence, to hear more about the ethical challenges raised by new technologies and a potential way forward.

Today, what are the main assumptions individuals make about your research?

Many consider trust to be a technical problem that can be solved through engineering alone. In my research, I have found that it is more of a socio-technical issue, related to the complex connection between humans and systems. To create genuinely trusted technology systems, we need to produce social contracts and systems that address users’ values, such as what is done with their data, or the level of acceptance of behaviour change.

Why is your research relevant now?

We are in the midst of the fourth industrial revolution, and we need to consider the impact that technologies have on us. Historically, during the first industrial revolution, both JS Mill and Marx published transformational works within a couple of miles of Imperial College London. Albertopolis was created to address the social challenges posed by the new technologies of the time. Imperial College London is therefore well placed to start critically considering the impact of new technologies in this industrial revolution.

My research, as a design engineer, is about influencing the way we develop new technologies. The revolution we are experiencing is about technology that helps us make decisions and tracks every detail of our lives. As part of daily practice, technologists now face value-laden tensions between privacy and safety, efficiency and justice, economy and transparency, as well as questions that strike at the very heart of what it means to be human. To make technologies responsibly, we need to take a broad perspective, exploring complex moral implications, far-reaching external impacts and global consequences.

In response, professional organisations are now hurrying to develop ethical guidelines for the development of these new systems. But it remains unclear how these guidelines should be implemented and applied. Moreover, ethics, social impact and wellbeing psychology fall outside the scope of traditional engineering practice, so this requires a new kind of interdisciplinary collaboration.

How might we better design technological systems that support health and wellbeing?

It is a holistic question, and we need to make sure that technologies support the satisfaction of individuals’ physical and psychological needs. When successful, they help us with three key things: being autonomous, feeling competent and connecting with others. Technology that supports such basic psychological needs supports both health and wellbeing.

How has our perspective of technology changed over the past ten years? And how might it continue to change over the next decade?

The idea was that technologies would support our autonomy by creating products and offering choices that were not available before. But we have seen that this is much more complicated, and today many technologies limit our independence rather than support it. As we described in a recent Nature Machine Intelligence article, this is partly due to business models and technologies that use the human as a means to an end, as a resource, rather than as an end in itself.

This creates tensions similar to those seen in industries that use the environment as a resource. If we see the natural environment only as a source of raw materials for industrial products, and a repository for their waste, we end up with the environmental crisis we are experiencing today. When we treat humans as a resource, for example by “extracting” time and attention, we end up with products driven only by maximising the time we spend using them. This is the case with platforms that use AI in recommendation systems, such as YouTube or Facebook. These values led engineers and designers to spend their time devising ways of herding people towards the behaviours that align with this business model.

But I think this is changing, and more companies realise that their business models need to align with socially accepted values, adhering to longstanding ethical principles that argue for maximising wellbeing, reducing harm, increasing autonomy and being fair to the humans using them. To achieve this goal, we need more engineers and designers who can create products and services where these principles are essential.

What are you hoping for the next 20 years?

I came to the College to advance our understanding of how to develop systems that follow ethical principles. This is not as easy as it sounds. Engineers often believe that technologies are value neutral, so, for example, that an engineer only needs to worry about the accuracy of an algorithm. As we challenge this opinion, we need to develop design engineering methods that take values and ethical principles into account. For example, how can we create algorithms that are not gender or racially biased? How can we create workplace technologies that value employees’ wellbeing? Or medical decision support systems that minimise harm and are fair to all?

I hope we learn to do this over the next 20 years – ideally much earlier. And I hope we find the inner strength to change the way we work, patterns that have been established since the first industrial revolution. I hope we find evidence-based approaches to understand how our relationship with computers changes us, and how we want to design them so that we create a world that is worth living in – for everyone. And I am optimistic. The UK is brimming with individuals and organisations who want to see this change happen.

Professor Rafael Calvo is a Professor at Imperial College London focusing on the design of systems that support wellbeing in mental health, medicine and education, and on the ethical challenges raised by new technologies. In 2015 Calvo was appointed a Future Fellow of the Australian Research Council to study the design of wellbeing-supportive technology. In July 2019 he joined Imperial College London; he is now the Director for Research at the Dyson School of Design Engineering, co-lead of the Leverhulme Centre for the Future of Intelligence and co-editor of the IEEE Transactions on Technology and Society.
Read more about his research here.

Image: from the Dyson School of Design Engineering’s End of Year Show, by Thomas Angus, Imperial College London. The image does not represent Prof Calvo’s work.
