Deep Generative Spatial Models for Mobile Robots


In this project, we developed a new probabilistic framework that allows mobile robots to autonomously learn deep generative models of their environments, spanning multiple levels of abstraction. Unlike traditional approaches, which integrate separately engineered components for low-level features, geometry, and semantic representations, our approach leverages recent advances in sum-product networks (SPNs) and deep learning to learn a single unified deep model of a robot's spatial environment, from low-level representations to semantic interpretations.

The model learns the geometry of places and serves as a versatile platform for a range of tasks: semantic classification of places, uncertainty estimation, novelty detection, generation of place appearances conditioned on semantic information, and prediction of missing data in partial observations.
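What makes one network serve all of these tasks is the way SPNs do inference: a bottom-up pass evaluates the joint probability of an observation, and marginalizing an unobserved variable only requires setting both of its indicator leaves to 1. A minimal sketch over two binary variables illustrates this; the structure and weights below are invented for the example and are not the project's actual model:

```python
# Minimal sum-product network (SPN) over two binary variables X1, X2.
# Illustrative structure and weights only; a real spatial model is far larger.

def spn(x1, x2):
    """Evaluate the SPN bottom-up. Pass None for an unobserved variable:
    setting both of its indicator leaves to 1 marginalizes it exactly."""
    i1, n1 = (1.0, 1.0) if x1 is None else ((1.0, 0.0) if x1 else (0.0, 1.0))
    i2, n2 = (1.0, 1.0) if x2 is None else ((1.0, 0.0) if x2 else (0.0, 1.0))
    # Sum nodes mix indicators of a single variable (weights sum to 1).
    s1a = 0.8 * i1 + 0.2 * n1
    s1b = 0.3 * i1 + 0.7 * n1
    s2a = 0.6 * i2 + 0.4 * n2
    s2b = 0.1 * i2 + 0.9 * n2
    # Product nodes combine sums over disjoint scopes (decomposability).
    p1 = s1a * s2a
    p2 = s1b * s2b
    # Root sum node: a mixture of the two product nodes.
    return 0.5 * p1 + 0.5 * p2

# The network is normalized: probabilities of all assignments sum to 1.
total = sum(spn(a, b) for a in (0, 1) for b in (0, 1))

# Marginal inference in a single pass, with no explicit summation:
p_x1 = spn(1, None)  # equals spn(1, 0) + spn(1, 1)

# Completing a partial observation: pick the likelier value for X2 given X1=1.
x2_guess = max((0, 1), key=lambda v: spn(1, v))
```

This single-pass inference is the common machinery behind the tasks above: classification compares likelihoods across semantic classes, novelty detection flags observations with low likelihood, and completion of partial observations fills in the unobserved variables with their most likely values.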

Related Publications