Keywords: Computer Science - Robotics
Abstract: For intelligent robots to interact in deeply meaningful ways with their environment, they must understand both the geometric and semantic properties of the scene surrounding them. The majority of research to date has addressed these mapping challenges separately, focusing on either geometric or semantic mapping. In this paper, we address the problem of building environmental maps that include both semantically meaningful, object-level entities and point- or mesh-based geometric representations. We simultaneously build geometric point cloud models of previously unseen instances of known object classes and create a map that contains these object models as central entities. Our system leverages sparse, feature-based RGB-D SLAM, image-based deep-learning object detection, and 3D unsupervised segmentation. We demonstrate the efficacy of our approach through quantitative evaluation in an automated inventory management task, using a new real-world dataset recorded across an office floor of a building.
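The object-centric map the abstract describes, where each entity couples a semantic label (from 2D detection) with a geometric point cloud model (from 3D segmentation) in the SLAM frame, can be sketched roughly as follows. All class and function names here are illustrative assumptions, not the authors' actual API; the association step in particular is a crude centroid-distance stand-in for the paper's method.

```python
from dataclasses import dataclass, field

# Hypothetical sketch: an object-level map whose entities carry both a
# semantic label and a fused 3D point cloud model, built incrementally
# from per-frame detections and segments.

@dataclass
class ObjectInstance:
    label: str                                   # class from the 2D detector
    points: list = field(default_factory=list)   # fused (x, y, z) points

@dataclass
class ObjectMap:
    instances: list = field(default_factory=list)

    def integrate(self, label, segment, pose):
        """Transform a segmented point cloud into the map frame, then
        extend a matching existing instance or create a new one."""
        world_pts = [transform(p, pose) for p in segment]
        for inst in self.instances:
            if inst.label == label and near(inst.points, world_pts):
                inst.points.extend(world_pts)
                return inst
        inst = ObjectInstance(label, world_pts)
        self.instances.append(inst)
        return inst

def transform(p, pose):
    # Camera pose reduced to a (tx, ty, tz) translation for brevity;
    # a real SLAM system would apply a full SE(3) transform.
    return (p[0] + pose[0], p[1] + pose[1], p[2] + pose[2])

def centroid(pts):
    n = len(pts)
    return tuple(sum(p[i] for p in pts) / n for i in range(3))

def near(pts_a, pts_b, thresh=0.5):
    # Crude data association: merge if centroids are within `thresh` metres.
    ca, cb = centroid(pts_a), centroid(pts_b)
    return sum((a - b) ** 2 for a, b in zip(ca, cb)) ** 0.5 < thresh
```

For example, two detections of the same chair seen from different frames fold into a single map entity, while a detection of a different class spawns a new one:

```python
m = ObjectMap()
m.integrate("chair", [(0.0, 0.0, 0.0), (0.2, 0.0, 0.0)], (1.0, 0.0, 0.0))
m.integrate("chair", [(0.1, 0.05, 0.0)], (1.0, 0.0, 0.0))   # merges
m.integrate("table", [(5.0, 5.0, 0.0)], (0.0, 0.0, 0.0))    # new entity
```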