FLOBOT Perception Dataset

Collected by our very own FLOBOT (FLOor washing RObot)

[Photo of the FLOBOT robot]

Description

This dataset was collected with FLOBOT, an advanced autonomous floor scrubber. It includes data from four different sensors for environment perception, as well as the robot pose in the world reference frame. Specifically, FLOBOT relies on a 3D lidar and an RGB-D camera for human detection and tracking, and on a second RGB-D camera and a stereo camera for dirt and object detection. Data collection was performed in four public places (three of which are released in this dataset), two in Italy and two in France, with FLOBOT in working mode and following the corresponding testing procedures for final project validation. For a quick overview, please refer to the following video.

Contributions

  1. Robot Operating System (ROS) rosbag files recorded from four sensors (a 3D lidar, two RGB-D cameras and a stereo camera), together with the robot pose in the world reference frame, are provided. All sensory data are synchronized at the software level, i.e. time stamped by ROS (see the sketch after this list).
  2. Data collection was carried out with the real FLOBOT prototype in real environments, including an airport, a warehouse and a supermarket. Permission to record in such public places, especially with robots, is rarely granted.
  3. Annotations of pedestrians in the 3D lidar data, and of dirt and objects in the RGB-D camera data, are provided.
  4. Although not its main purpose, the dataset also provides the robot pose in the world reference frame, so it can be used for localization and mapping problems. Moreover, as the data covers very characteristic public scenarios (i.e. airport, warehouse and supermarket), it is also suitable for semantic and contextual studies.
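
As an example of working with the rosbags, the following minimal Python sketch, built on the standard rosbag API, lists the topics recorded in one of the bags from the Downloads section and prints the ROS time stamp of its first message. The bag file name is illustrative, and the topic names printed are simply whatever the bag actually contains.

    #!/usr/bin/env python
    # Minimal sketch: inspect a downloaded FLOBOT bag with the rosbag API.
    import rosbag

    bag = rosbag.Bag('supermarket-2018-05-31-16-35-33.bag')

    # Topic name, message type and message count for everything in the bag.
    info = bag.get_type_and_topic_info()
    for topic, topic_info in info.topics.items():
        print('%-45s %-35s %6d msgs' % (topic, topic_info.msg_type,
                                        topic_info.message_count))

    # All streams are time stamped by ROS, so messages from different
    # sensors can be aligned by comparing these stamps.
    for topic, msg, t in bag.read_messages():
        print('first message: %s at t = %.6f s' % (topic, t.to_sec()))
        break

    bag.close()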

Citation

If you publish work based on, or using, this dataset or software, we would appreciate citations to the following:

@article{zhimon2020jist,
   author = {Zhi Yan and Simon Schreiberhuber and Georg Halmetschlager and Tom Duckett and Markus Vincze and Nicola Bellotto},
   title = {Robot Perception of Static and Dynamic Objects with an Autonomous Floor Scrubber},
   journal = {Intelligent Service Robotics},
   volume = {13},
   number = {3},
   pages = {403--417},
   year = {2020}
}

Recording platform

[Figure: FLOBOT sensor layout]
  1. Velodyne VLP-16 3D lidar
  2. Xtion PRO LIVE RGB-D camera (forward-facing for human detection)
  3. Xtion PRO LIVE RGB-D camera (floor-facing for dirt and object detection)
  4. ZED stereo camera (floor-facing for dirt and object detection)
  5. SICK S300 2D lidar
  6. OEM incremental measuring wheel encoder
  7. Xsens MTi-30 IMU (inside the robot)

Recording environments

Four pilot sites were selected for the final FLOBOT validation, three of which appear in this dataset. The descriptions of these pilot sites are important for understanding the requirements of each use case, which in turn drove the design of the FLOBOT robot and the complete system.

[Photos of the three pilot sites: airport, warehouse, supermarket]

[Map of the dataset recording locations]

Downloads

Date       | Time (GMT+2)        | Place (Europe)             | Sensors                                 | Main purposes                | Downloads
2018-04-19 | 11:41-11:49 (8:24s) | Carugate (supermarket)     | Velodyne                                | Human detection and tracking | supermarket-2018-04-19-11-41-21-velodyne-only.bag*
2018-05-31 | 16:35-16:39 (3:44s) | Carugate (supermarket)     | Velodyne / forward-facing Xtion (depth) | Human detection and tracking | supermarket-2018-05-31-16-35-33.bag (labels)
2018-06-12 | 17:10-17:13 (3:27s) | Lyon (warehouse)           | Velodyne / forward-facing Xtion (depth) | Human detection and tracking | warehouse-2018-06-12-17-10-22.bag (labels)
2018-06-13 | 16:11-16:17 (5:05s) | Lyon (airport)             | Velodyne / forward-facing Xtion (depth) | Human detection and tracking | airport-2018-06-13-16-11-56.bag (labels)
2018-06-13 | 16:20-16:23 (2:26s) | Lyon (airport)             | Velodyne / forward-facing Xtion (depth) | Human detection and tracking | airport-2018-06-13-16-20-34.bag
2018-06-13 | 16:37-16:42 (4:28s) | Lyon (airport)             | Velodyne / forward-facing Xtion (depth) | Human detection and tracking | airport-2018-06-13-16-37-32.bag
2018-XX-XX | N/A                 | Carugate (supermarket)     | floor-facing Xtion                      | Dirt and object detection    | carugate_annotated.zip
2018-06-XX | N/A                 | Lyon (warehouse & airport) | floor-facing Xtion                      | Dirt and object detection    | lyon_annotated.zip

* To convert the ROS bags to the Point Cloud Data (PCD) file format, see: http://wiki.ros.org/pcl_ros#bag_to_pcd
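
Alternatively, the scans can be exported from Python with the standard rosbag API. The sketch below assumes the Velodyne clouds are recorded as sensor_msgs/PointCloud2 on a /velodyne_points topic (an assumption; verify the actual topic name with rosbag info) and writes one ASCII PCD file per scan.

    #!/usr/bin/env python
    # Sketch: export each PointCloud2 message in a bag as an ASCII PCD file.
    # '/velodyne_points' is an assumed topic name; check with `rosbag info`.
    import rosbag
    import sensor_msgs.point_cloud2 as pc2

    BAG = 'supermarket-2018-04-19-11-41-21-velodyne-only.bag'
    TOPIC = '/velodyne_points'  # assumption, not specified on this page

    with rosbag.Bag(BAG) as bag:
        for i, (topic, msg, t) in enumerate(bag.read_messages(topics=[TOPIC])):
            pts = list(pc2.read_points(msg, field_names=('x', 'y', 'z'),
                                       skip_nans=True))
            with open('%06d.pcd' % i, 'w') as f:
                # Standard PCD v0.7 header for an unorganized xyz cloud.
                f.write('VERSION 0.7\nFIELDS x y z\nSIZE 4 4 4\nTYPE F F F\n'
                        'COUNT 1 1 1\nWIDTH %d\nHEIGHT 1\n'
                        'VIEWPOINT 0 0 0 1 0 0 0\nPOINTS %d\nDATA ascii\n'
                        % (len(pts), len(pts)))
                for x, y, z in pts:
                    f.write('%f %f %f\n' % (x, y, z))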

How to play

roslaunch flobot_dataset_play.launch bag:=path_to_your_rosbag (rviz config here)
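
If the launch file is unavailable, a bag can also be replayed programmatically. The following is a rough stand-in for rosbag play, built on the standard rospy and rosbag APIs: it republishes every message on its original topic at roughly the recorded pace. Note that subscribers connecting late may miss the first messages.

    #!/usr/bin/env python
    # Rough stand-in for `rosbag play`: republish all messages in a bag on
    # their original topics, approximately at the recorded pace.
    import time
    import rospy
    import rosbag

    rospy.init_node('flobot_bag_player')
    bag = rosbag.Bag('path_to_your_rosbag')  # substitute the downloaded bag

    pubs = {}        # one publisher per topic, created lazily
    t0_bag = None    # time of the first message in the bag
    t0_wall = time.time()

    for topic, msg, t in bag.read_messages():
        if rospy.is_shutdown():
            break
        if topic not in pubs:
            pubs[topic] = rospy.Publisher(topic, type(msg), queue_size=10)
        if t0_bag is None:
            t0_bag = t.to_sec()
        # Wait until the message's original relative time has elapsed.
        delay = (t.to_sec() - t0_bag) - (time.time() - t0_wall)
        if delay > 0:
            time.sleep(delay)
        pubs[topic].publish(msg)

    bag.close()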

Open source

Related publications

  1. Zhi Yan, Tom Duckett, and Nicola Bellotto. Online learning for 3D LiDAR-based human detection: Experimental analysis of point cloud clustering and classification methods. Autonomous Robots, 2019. [BibTeX | PDF]
  2. Simon Schreiberhuber, Johann Prankl, Timothy Patten, and Markus Vincze. ScalableFusion: High-resolution mesh-based real-time 3D reconstruction. In Proceedings of the 2019 IEEE International Conference on Robotics and Automation (ICRA), Montreal, Canada, May 2019. [PDF]
  3. Georg Halmetschlaeger-Funek, Markus Suchi, Martin Kampel, and Markus Vincze. An empirical evaluation of ten depth cameras: Bias, precision, lateral noise, different lighting conditions and materials, and multiple sensor setups in indoor environments. IEEE Robotics & Automation Magazine, 2019. [PDF | Dataset]
  4. Zhi Yan, Li Sun, Tom Duckett, and Nicola Bellotto. Multisensor online transfer learning for 3D LiDAR-based human detection with a mobile robot. In Proceedings of the 2018 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), Madrid, Spain, October 2018. [BibTeX | PDF | Code | Dataset]
  5. Georg Halmetschlaeger-Funek, Johann Prankl, and Markus Vincze. Towards autonomous auto calibration of unregistered RGB-D setups: The benefit of plane priors. In Proceedings of the 2018 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), Madrid, Spain, October 2018. [PDF]
  6. Li Sun, Zhi Yan, Sergi Molina Mellado, Marc Hanheide, and Tom Duckett. 3DOF pedestrian trajectory prediction learned from long-term autonomous mobile robot deployment data. In Proceedings of the 2018 IEEE International Conference on Robotics and Automation (ICRA), Brisbane, Australia, May 2018. [BibTeX | PDF | Dataset | Video]
  7. Farhoud Malekghasemi, Georg Halmetschlaeger-Funek, and Markus Vincze. Autonomous extrinsic calibration of a depth sensing camera on mobile robots. In Proceedings of the Austrian Robotics Workshop (ARW), Innsbruck, Austria, May 2018. [PDF]
  8. Zhi Yan, Tom Duckett, and Nicola Bellotto. Online learning for human classification in 3D LiDAR-based tracking. In Proceedings of the 2017 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), pages 864-871, Vancouver, Canada, September 2017. [BibTeX | PDF | Code | Dataset | Video1 | Video2]
  9. Andreas Grünauer, Georg Halmetschlaeger-Funek, Johann Prankl, and Markus Vincze. The power of GMMs: Unsupervised dirt spot detection for industrial floor cleaning robots. In Proceedings of the Towards Autonomous Robotic Systems - 18th Annual Conference (TAROS), Guildford, UK, July 2017. [PDF | Dataset]
  10. Simon Schreiberhuber, Thomas Mörwald, and Markus Vincze. Bilateral filters for quick 2.5D plane segmentation. In Proceedings of the OAGM&ARW Joint Workshop, Vienna, Austria, May 2017. [PDF]

License

This work is licensed under a Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License.
Copyright (c) 2019 Zhi Yan, Simon Schreiberhuber, Georg Halmetschlager, Tom Duckett, Markus Vincze, Nicola Bellotto.

Funding

This project has received funding from the European Union's Horizon 2020 research and innovation programme under grant agreement No 645376 (FLOBOT).