
The Explorer System

K. Sjöö, H. Zender, P. Jensfelt, G.-J. Kruijff, A. Pronobis, N. Hawes, M. Brenner

In Cognitive Systems (H. Christensen, G.-J. Kruijff, J. Wyatt, eds.), volume 8 of Cognitive Systems Monographs, Springer Berlin Heidelberg, 2010.


In the Explorer scenario we deal with the problems of modeling space, acting in this space, and reasoning about it. Spatial models are built using input from sensors such as laser scanners and cameras, but equally importantly also based on human input. It is this combination that enables the creation of a spatial model that can support low-level tasks such as navigation, as well as interaction. Even combined, the inputs only provide a partial description of the world. By combining this knowledge with a reasoning system and a common-sense ontology, further information can be inferred to make the description of the world more complete.

Unlike in the PlayMate system, not all the information needed to build the spatial models is available to the robot's sensors at all times. The Explorer needs to move around, i.e., explore space, to gather information and integrate it into the spatial models. Two main modes for this exploration of space have been investigated within the Explorer scenario. In the first mode, the robot explores space together with a user in a home-tour fashion; that is, the user shows the robot around their shared environment. This is what we call the Human-Augmented Mapping paradigm. The second mode is fully autonomous exploration, in which the robot moves with the purpose of covering space. In practice, the two modes would be used interchangeably to get the best trade-off between autonomy, shared representation, and speed.

The focus in the Explorer is not on performing a particular task to perfection, but rather on acting within a flexible framework that alleviates the need for scripting and hardwiring. We want to investigate two problems within this context: what information must be exchanged by different parts of the system to make this possible, and how the current state of the world should be represented during such exchanges. One particular interaction which encompasses many of the aforementioned issues is giving the robot the ability to talk about space. This interaction raises questions such as: how can we design models that allow the robot and human to talk about where things are, and how do we link the dialogue and the mapping systems?


@incollection{sjoo2010explorer,
  author =       {Sj\"{o}\"{o}, Kristoffer and Zender, Hendrik and Jensfelt, Patric and Kruijff, Geert-Jan M. and Pronobis, Andrzej and Hawes, Nick and Brenner, Michael},
  title =        {The {E}xplorer System},
  booktitle =    {Cognitive Systems},
  series =       {Cognitive Systems Monographs},
  editor =       {Christensen, Henrik I. and Kruijff, Geert-Jan M. and Wyatt, Jeremy L.},
  year =         2010,
  pages =        {395--421},
  volume =       8,
  publisher =    {Springer Berlin Heidelberg},
  isbn =         {978-3-642-11694-0},
  doi =          {10.1007/978-3-642-11694-0_10}
}