Transforming surgery with real-time multi-scale imaging

6th February 2016
Every successful operation depends on surgical skill to navigate the body. Pre-operative scans play a critical role, guiding surgeons like a map. But these are just snapshots. What if the map could be updated live – and even zoomed in and out? Real-time, multi-scale imaging would give surgeons a ‘sat nav’ to precisely identify tissues, protect critical organs and even see beyond their scalpels before a cut is ever made.

Dr Stamatia Giannarou
Lecturer in Surgical Cancer Technology and Imaging, Department of Surgery & Cancer

Placing sensors into surgical tools, and fusing data from multiple sources, promises an entirely new imaging toolbox for the most delicate cancer and neurological operations. Equipped with these tools, future human – and robotic – surgeons will be able to ensure that even complex tumours are completely removed at a microscopic level, improving outcomes and safety.

I’m curious about… “how robotics and imaging can together enable safer, more effective brain surgery”

Bio

Dr Stamatia Giannarou is a Royal Society University Research Fellow at the Hamlyn Centre for Robotic Surgery, Imperial College London. Matina holds an MEng in Electrical and Computer Engineering from Democritus University of Thrace, an MSc in Communications and Signal Processing and a PhD in object recognition from the Department of Electrical and Electronic Engineering, Imperial College London.

Research

Matina’s work spans robotics and computer science to develop better robotic vision for surgical navigation. Her fellowship focuses on two key questions:

  • How can enhanced vision assist surgical navigation in Minimally Invasive Surgery?
  • How can tumour margins be accurately delineated in robot-assisted neurosurgery?