Deep Generative Spatial Models for Mobile Robots


In this project, we developed a new probabilistic framework that allows mobile robots to autonomously learn deep generative models of their environments that span multiple levels of abstraction. Unlike traditional approaches that integrate engineered models for low-level features, geometry, and semantic information, our approach leverages recent advances in Sum-Product Networks (SPNs) and deep learning to learn a generative model of a robot's spatial environment, from low-level input to semantic interpretations.
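To make the SPN idea concrete, the following is a minimal sketch of a sum-product network as a density over two continuous inputs: Gaussian leaves model individual variables, product nodes combine leaves over disjoint variables (decomposability), and a sum node mixes the products (completeness). The structure and all parameters here are illustrative toys, not the architecture learned in this project.

```python
import numpy as np

def gauss(x, mu, sigma):
    # Univariate Gaussian density, used as an SPN leaf distribution.
    return np.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * np.sqrt(2 * np.pi))

def spn_density(x1, x2):
    # Two product nodes, each a product of independent leaves over x1, x2
    # (decomposable: the children cover disjoint variables).
    p1 = gauss(x1, 0.0, 1.0) * gauss(x2, 0.0, 1.0)
    p2 = gauss(x1, 3.0, 0.5) * gauss(x2, -2.0, 0.5)
    # Root sum node: a mixture whose weights sum to one, so the whole
    # network is a valid, normalized probability density.
    return 0.6 * p1 + 0.4 * p2
```

Because sum and product nodes are differentiable and evaluation is a single feed-forward pass, likelihoods, marginals, and gradients for learning can all be computed exactly in time linear in the network size.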

The model solves a wide range of tasks, from semantic classification of places, uncertainty estimation, and novelty detection, to generating place appearances from semantic information and predicting missing values in partial observations.
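A single generative density over inputs and semantics can serve several of these queries at once. The toy class-conditional model below sketches two of them: semantic classification as the most probable class under the joint, and missing-data prediction by marginalizing out the unobserved input. The class names, parameters, and helper functions are hypothetical illustrations, not the project's model.

```python
import numpy as np

def gauss(x, mu, s):
    # Univariate Gaussian density.
    return np.exp(-0.5 * ((x - mu) / s) ** 2) / (s * np.sqrt(2 * np.pi))

# Toy joint p(x1, x2, c): per class, independent Gaussian leaves over
# (x1, x2); uniform prior p(c) = 0.5. Parameters are (mean, std) pairs.
PARAMS = {"corridor": ((0.0, 1.0), (0.0, 1.0)),
          "office":   ((3.0, 0.5), (-2.0, 0.5))}

def joint(x1, x2, c):
    (m1, s1), (m2, s2) = PARAMS[c]
    return 0.5 * gauss(x1, m1, s1) * gauss(x2, m2, s2)

def classify(x1, x2):
    # Semantic classification: argmax over classes of the joint density.
    return max(PARAMS, key=lambda c: joint(x1, x2, c))

def complete(x1):
    # Missing-data prediction: E[x2 | x1], obtained by marginalizing x2
    # from the joint and averaging the per-class means of x2 under the
    # resulting class posterior p(c | x1).
    w = {c: 0.5 * gauss(x1, *PARAMS[c][0]) for c in PARAMS}
    z = sum(w.values())
    return sum(w[c] / z * PARAMS[c][1][0] for c in PARAMS)
```

The key property exploited here, which SPNs provide exactly and efficiently, is that marginalizing an unobserved variable reduces to setting its leaves to 1, so classification, completion, and likelihood queries all reuse the same learned model.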

Experiments on laser range data from a mobile robot show that the proposed single universal model achieves accuracy and efficiency superior to models fine-tuned to specific sub-problems, such as Generative Adversarial Networks (GANs) or SVMs.

Related Publications