What is the future of Electronics?

17th November 2021
A new kind of electronic component, the memristor, is paving the way for what might be next in electronics: neuromorphic computing. It promises a way of computing that is fast and energy efficient, and that allows us to build truly intelligent hardware. But what is it, and how will it be used?

Computation today

Traditionally, the processing of data in electronics has relied on integrated chips featuring large numbers of transistors: microscopic switches that control the flow of electrical current by turning it on and off. Moore’s law observes that the number of transistors that can be put on a microchip doubles roughly every two years.
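
As a rough illustration of what that doubling rate implies, here is a minimal sketch; the starting figures are assumptions chosen for the example, not values from this article:

```python
# Illustrative only: Moore's law treated as a simple doubling rule.
def projected_transistors(start_count: float, years: float,
                          doubling_period_years: float = 2.0) -> float:
    """Project a transistor count forward, doubling every `doubling_period_years`."""
    return start_count * 2 ** (years / doubling_period_years)

# Hypothetical example: a 10-billion-transistor chip, projected a decade ahead.
print(f"{projected_transistors(10e9, 10):.2e}")  # ~3.2e+11, i.e. roughly 32x growth
```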

The increase in computing power over the last few years has unleashed artificial intelligence and machine learning based on neural networks.

As our demand for computation grows, so does the number of transistors required. However, increasing the number of transistors on a microchip is constrained in two ways:

– by the manufacturing ability to fit ever larger numbers of transistors into smaller chip areas

– by their energy consumption.

In fact, the energy consumed in training AI models is tremendous and poses a serious environmental threat. The main cause is the energy-intensive data traffic between memory and processing units, which are separated in conventional computing architectures.


As our appetite for data and information keeps growing, we face a bottleneck in the way we currently perform computation. How can we balance the need to process more data, more quickly, with the need to stay energy efficient in a world that is transitioning to net-zero emissions?

The Memristor: a new electronic component unleashing a new way to compute

Memristors are a new generation of electronic components, realised for the first time in 2008.

They are the fourth basic element in electronic circuits, after resistors, capacitors and inductors. Memristors are simpler than transistors: they are smaller, consume less energy and have the capacity for in-memory computing.
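
For readers who want the underlying theory, the “fourth element” idea comes from Leon Chua’s observation that each passive element pairs two of the four basic circuit variables (voltage v, current i, charge q and magnetic flux φ); the memristor supplies the missing pairing between charge and flux:

```latex
\begin{aligned}
\text{Resistor:}  &\quad \mathrm{d}v       = R\,\mathrm{d}i \\
\text{Capacitor:} &\quad \mathrm{d}q       = C\,\mathrm{d}v \\
\text{Inductor:}  &\quad \mathrm{d}\varphi = L\,\mathrm{d}i \\
\text{Memristor:} &\quad \mathrm{d}\varphi = M\,\mathrm{d}q
\end{aligned}
```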

Memristors store and process information in the same location. This co-location removes the data-transfer bottleneck that limits the speed and energy efficiency of conventional computers, opening the way to more energy-efficient computing.
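
As a rough sketch of what in-memory computing with memristors can look like, consider a crossbar array in which each device’s conductance stores one weight of a matrix; applying voltages to the rows then produces the matrix-vector product directly as column currents, via Ohm’s and Kirchhoff’s laws. The idealised NumPy model below is purely illustrative and ignores real device non-idealities:

```python
import numpy as np

# Idealised memristor crossbar: the conductance matrix G *is* the stored data,
# and the same array performs the computation - no shuttling data to a separate CPU.
G = np.array([[1.0e-4, 2.0e-4],     # conductances in siemens,
              [0.5e-4, 3.0e-4],     # 3 input rows x 2 output columns
              [1.5e-4, 2.5e-4]])

v_in = np.array([0.2, 0.5, 0.1])    # input voltages applied to the rows (volts)

# Ohm's law gives each device's current (G * v); Kirchhoff's current law sums the
# currents flowing into each column, yielding a matrix-vector product in one step.
i_out = G.T @ v_in                  # column currents in amperes
print(i_out)                        # [6.0e-05 2.15e-04] -> the "computed" output
```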

Brain-inspired computation

This ability to process and store data in the same location means memristors work much like human neurons. For this reason, memristors are used as building blocks for neuromorphic computing.

Neuromorphic computing is a new paradigm of electronics that mimics the internal neuronal structure and energy efficiency of the human brain, including its capacity for parallel computation. Its goal is to put a brain on a chip, embedding artificial intelligence in the hardware itself rather than running it in software.
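
To make “a brain on a chip” slightly more concrete: neuromorphic hardware is typically built around spiking neurons, such as the leaky integrate-and-fire model sketched below. This is a minimal simulation for illustration only; the parameter values are assumptions, not those of any particular chip:

```python
# Minimal leaky integrate-and-fire neuron: the membrane potential integrates its
# input, leaks back towards rest, and emits a spike when it crosses a threshold.
def simulate_lif(input_current, dt=1e-3, tau=0.02,
                 v_rest=0.0, v_threshold=1.0, v_reset=0.0):
    v = v_rest
    spikes = []
    for i_t in input_current:
        dv = (-(v - v_rest) + i_t) / tau   # leak towards rest plus input drive
        v += dv * dt
        if v >= v_threshold:               # threshold crossed: spike and reset
            spikes.append(1)
            v = v_reset
        else:
            spikes.append(0)
    return spikes

# Example: a constant drive above threshold produces a regular spike train.
spikes = simulate_lif([1.5] * 200)         # 200 steps of 1 ms = 200 ms
print(sum(spikes), "spikes in 200 ms")
```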

Giving birth to intelligent hardware

Neuromorphic computing powered by memristors could herald a new generation of chips that are able to self-learn, adapt to various contexts, and interpret the environment around them without requiring instructions in code or prior data. This provides real-time intelligence with continuous, onboard learning, at a lower energy demand than traditional transistor-based chips.

Currently, artificial intelligence mainly runs through cloud services. Neuromorphic computing powered by memristors could take AI to another level: rather than running on central servers in the cloud, AI can run at the network edge. This is the realisation of edge computing, where computation and data storage sit closer to the sources of the data.

Autonomous cars, for example, rely on neural networks and 4G/5G technology. For a car to drive autonomously, it must be connected to a data centre that runs the data the car sends through neural networks and returns the results. This causes latency and heavy energy consumption. With a neuromorphic chip, all of this processing could be done locally, inside the “brain” of the car.

The same is true for smartphones, whose artificial-intelligence capabilities depend on cloud-based processing. With a neuromorphic chip, all computation could be carried out within the device itself. This would also mean improved data security, as nothing would be sent to external repositories. Just like your own mind, the technology could contain ‘thoughts’ or ‘experiences’ that are unique to the device or application itself.

Applications

The technology is still in its infancy, but companies are investing heavily in it. Intel, for example, has just released its Loihi 2 neuromorphic chip. It has already been used in applications such as gesture recognition for autonomous cars, machine vision, robotic arms, neuromorphic skins and olfactory sensing.

Neuromorphic computing has the potential to unleash a new wave of AI applications on edge devices, robotics and IoT (internet of things) systems – anywhere data needs to be processed in evolving, real-time environments.

Some example applications are:

  • Truly autonomous cars and personal robots that can infer and learn from their environment without needing to be connected to a network. This in turn means they can adapt to different contexts without instructions from a network-connected resource, which saves time, increases personalisation, and allows context-specific AIs to mature.

  • Smart sensors in factories that can process the data around them in real time and take complex decisions that trigger immediate actions. This shortens the time it takes to react to an emerging situation, and therefore increases productivity.

  • Implanted medical early-warning systems that can adapt to a patient’s state as it changes over time. This improves the ability to monitor patients with acute medical needs and to initiate independent interventions.

  • Extremely low-power, always-on speech-detection systems embedded in homes. Rather than relying on network-connected personal assistants, the house itself becomes a resilient personal assistant with fewer privacy issues. Ambient intelligence can become a reality.

  • Hybrid AI systems that combine central, cloud-networked AIs with local, self-standing ones. The central AI learns from interactions periodically shared by the AIs at the edge, while the edge AIs learn from the centre about insights from other edge AIs, changes in regulatory compliance, and other environmental shifts such as weather patterns. The emergence of this approach to AI is much like how society has developed, with circles of power and influence and feedback between the centre and the wider population.

It’s difficult to say how the technology will evolve, but some go as far as calling neuromorphic computing the path that could lead to AGI (artificial general intelligence). Time will tell.

Research at Imperial College

Various groups at Imperial College London investigate the use of memristors for neuromorphic applications and next generation electronic devices.

Dr. Niloufar Raeishosseini, for example, investigates memristors for IoT devices, smart medical implants and wearable devices.

The FORTE project, led by Professor Themis Prodromakis, aims to use memristors to rejuvenate modern electronics, enabling pervasive, low-power IoT computing, and to realise smart implants that can interface with the brain.

How can I learn more about memristors and neuromorphic computing?

Are you interested in this future development and what it means to you and your organisation?

Check out “Convergence”, one of our recently released scenarios for 2041, created in collaboration with Imperial College London academics.

Imperial Tech Foresight is foresight backed by the scientific community of Imperial College London. Get in touch to learn more about the possibilities, challenges and opportunities that lie ahead with emerging technologies such as these.