3D Visualisation from digital images could help maintain the London Underground
Since 1863, when the first London Underground line opened, engineers have inspected its infrastructure by walking the tunnels at night, noting down any cracks in a notepad. Now innovative technology, developed in collaboration between the University of Cambridge Engineering Department and Toshiba’s Cambridge Research Laboratory, allows 3D reconstruction of the tunnels, making it possible to automate this process and prioritise repair work.
The Computer Vision Group of Toshiba Research Europe’s Cambridge Research Laboratory and the University of Cambridge have teamed up to develop unique software that translates still photographs, taken from different angles with any camera, into a 3D model of the inside of the tunnels.
The project idea came from a day the team spent walking the tunnel network underneath Tokyo. Professor Roberto Cipolla from the Cambridge University Engineering Department recalls: “The engineers walk along the tunnels with a camera and a notebook, taking photos of any defects and making detailed notes. In six months’ time they do it all again, using just their photos and notes to relocate the cracks and see if the damage has increased. They have hundreds of kilometres to cover – it is an exhausting job.”
The acquisition of 3D models of buildings and infrastructure is becoming increasingly common. While laser-based methods are state-of-the-art in terms of accuracy, they typically require a long capture time and are still costly. With continuously improving image quality and novel algorithms, computer vision provides a practical solution that captures colour information as well as geometry.
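The geometric core of turning overlapping photographs into 3D structure is triangulation: once the same feature has been matched in two images taken from known camera poses, its 3D position lies at the intersection of the two viewing rays. The sketch below shows that single step in Python with NumPy; the camera matrices, baseline, and point are invented for illustration, and the actual Toshiba/Cambridge software is not public:

```python
import numpy as np

def triangulate(P1, P2, u1, u2):
    """Linear (DLT) triangulation of one point from two views.

    P1, P2 : 3x4 camera projection matrices
    u1, u2 : (x, y) image coordinates of the same point in each view
    Returns the 3D point in Euclidean coordinates.
    """
    # Each observation contributes two rows of the homogeneous system A X = 0.
    A = np.array([
        u1[0] * P1[2] - P1[0],
        u1[1] * P1[2] - P1[1],
        u2[0] * P2[2] - P2[0],
        u2[1] * P2[2] - P2[1],
    ])
    # The solution is the right singular vector for the smallest singular value.
    X = np.linalg.svd(A)[2][-1]
    return X[:3] / X[3]

# Illustrative setup: a simple pinhole camera and a 1 m sideways baseline.
K = np.array([[800.0, 0, 320], [0, 800.0, 240], [0, 0, 1]])
P1 = K @ np.hstack([np.eye(3), np.zeros((3, 1))])              # camera at origin
P2 = K @ np.hstack([np.eye(3), np.array([[-1.0], [0], [0]])])  # shifted camera

# A hypothetical point on a tunnel wall, 4 m in front of the cameras.
X_true = np.array([0.5, 0.2, 4.0, 1.0])
u1 = (P1 @ X_true)[:2] / (P1 @ X_true)[2]
u2 = (P2 @ X_true)[:2] / (P2 @ X_true)[2]

X_rec = triangulate(P1, P2, u1, u2)
print(X_rec)  # recovers [0.5, 0.2, 4.0]
```

In a full photogrammetric pipeline the camera poses are themselves unknown and must first be estimated from matched image features (structure from motion), then refined jointly with the points; triangulation of many such matches is what produces the dense, coloured 3D model.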
Dr Shuichi Uchikoga, Deputy Managing Director of Toshiba’s Cambridge Research Laboratory, says: “Our technology allows information about the world that humans gain from sight, such as colour, texture, and size, to be reproduced very realistically. As it only requires images captured on very basic mobile devices to recreate accurate interpretations, we anticipate that it will be applicable to a wide range of uses.”
Written by Rachel Holdsworth