Detecting Road Damages in Mobile Mapping Point Clouds using Competitive Reconstruction Networks
Keywords: mobile mapping, anomaly detection, neural networks, generative adversarial networks, LiDAR, 3D point clouds
Abstract. LiDAR scanning is an established method for capturing landscapes, buildings, or roads in order to create a so-called spatial digital twin of reality, stored as a large collection of 3D coordinates called a 3D point cloud. This spatial data offers high density and precision at the cost of hard-to-extract shape and object information. One popular application of LiDAR 3D point clouds is road condition assessment. This task is challenging due to the lack of dedicated algorithms for extracting and evaluating road features from point clouds and due to the large variety of road damages. Deep learning approaches are very promising but require extensive training data. The characteristics of the data and of the damages make labeling a difficult and tedious task that often results in mislabeled data, even when performed by trained human operators.
We propose a semi-supervised generative adversarial network (GAN) based approach, named Competitive Reconstruction Networks (CRN), for labeling 2D images rendered from LiDAR point cloud data captured by mobile mapping vehicles. Our solution trains multiple networks with the same architecture in an "all vs. all" fashion. Our method achieves reliable and robust results on two road image datasets as well as on the MVTec AD dataset, and surpasses comparable approaches in anomaly detection performance. We also implemented a data generation pipeline that renders training images from 3D point clouds of roads and remaps anomaly scores back to those point clouds, exploiting the full potential of the 3D data for further analysis.
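The render-and-remap step of the pipeline can be sketched as follows. This is a minimal illustrative sketch, not the paper's implementation: the function names, the top-down projection onto a fixed-resolution grid, and the mean-height aggregation per pixel are all assumptions made here for clarity.

```python
import numpy as np

def render_topdown(points, resolution=0.05):
    """Project 3D road points (N x 3, columns x/y/z) onto a top-down 2D
    height image. Hypothetical simplification of the rendering pipeline:
    each point falls into one grid cell; cells store the mean z value."""
    xy = points[:, :2]
    origin = xy.min(axis=0)
    pix = np.floor((xy - origin) / resolution).astype(int)  # per-point pixel index
    h, w = pix.max(axis=0) + 1
    height_sum = np.zeros((h, w))
    count = np.zeros((h, w))
    np.add.at(height_sum, (pix[:, 0], pix[:, 1]), points[:, 2])
    np.add.at(count, (pix[:, 0], pix[:, 1]), 1)
    # mean height per occupied pixel; empty pixels stay zero
    img = np.divide(height_sum, count,
                    out=np.zeros_like(height_sum), where=count > 0)
    return img, pix

def remap_scores(score_map, pix):
    """Look up the per-pixel anomaly score (e.g. produced by the trained
    networks on the rendered image) for every original 3D point."""
    return score_map[pix[:, 0], pix[:, 1]]
```

For example, after scoring the rendered image, `remap_scores` attaches each pixel's anomaly score to every point that was projected into that pixel, so damaged regions can be inspected directly in the 3D point cloud.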