Combining World and Interaction Models for Human-Robot Collaborations

C. Matuszek*, A. Pronobis*, L. Zettlemoyer, D. Fox

In AAAI 2013 Workshop on Intelligent Robotic Systems, 2013.

About

As robotic technologies mature, we can imagine an increasing number of applications in which robots could soon prove useful in unstructured human environments. Many of those applications require a natural interface between the robot and untrained human users, or are possible only in a human-robot collaborative scenario. In this paper, we study an example of such a scenario, in which a visually impaired person and a robotic "guide" collaborate in an unfamiliar environment. We then analyze how the scenario can be realized through language- and gesture-based human-robot interaction, combined with semantic spatial understanding and reasoning, and propose integrating a semantic world model with language and gesture models for several collaboration modes. We believe that in this way, practical robotic applications can be achieved in human environments with currently available technology.

BibTeX

@inproceedings{matuszek2013aaai-irs,
  author =       {Matuszek*, Cynthia and Pronobis*, Andrzej and Zettlemoyer, Luke and Fox, Dieter},
  title =        {Combining World and Interaction Models for Human-Robot Collaborations},
  booktitle =    {AAAI 2013 Workshop on Intelligent Robotic Systems},
  year =         2013,
  address =      {Bellevue, WA, USA},
  month =        jul,
  url =          {http://www.pronobis.pro/publications/matuszek2013aaai-irs}
}