The Blackbird UAV dataset

Published: 01 September 2020

Abstract

This article describes the Blackbird unmanned aerial vehicle (UAV) dataset, a large-scale suite of sensor data and corresponding ground truth from a custom-built quadrotor platform equipped with an inertial measurement unit (IMU), rotor tachometers, and virtual color, grayscale, and depth cameras. Motivated by the increasing demand for agile, autonomous operation of aerial vehicles, the dataset is designed to facilitate the development and evaluation of high-performance UAV perception algorithms. The dataset contains over 10 hours of data from our quadrotor tracing 18 different trajectories at varying maximum speeds (0.5 to 13.8 m/s) through 5 different visual environments, for a total of 176 unique flights. For each flight, we provide 120 Hz grayscale, 60 Hz RGB-D, and 60 Hz semantically segmented images from forward stereo and downward-facing photorealistic virtual cameras, in addition to 100 Hz IMU measurements, ~190 Hz motor speed readings, and 360 Hz millimeter-accurate motion capture ground truth. The Blackbird UAV dataset is therefore well suited to the development of algorithms for visual-inertial navigation, 3D reconstruction, and depth estimation. As a benchmark for future algorithms, the performance of two state-of-the-art visual odometry algorithms is reported, and scripts for comparing against the benchmarks are included with the dataset. The dataset is available for download at http://blackbird-dataset.mit.edu/.



Index Terms

  1. The Blackbird UAV dataset
        Index terms have been assigned to the content through auto-classification.

        Recommendations

        Comments

        Information & Contributors

        Information

Published In

International Journal of Robotics Research, Volume 39, Issue 10-11, September 2020, 167 pages

Publisher

Sage Publications, Inc., United States


        Author Tags

        1. Dataset
        2. quadrotor
        3. visual
        4. inertial
        5. ground truth
        6. localization
        7. RGBD
        8. benchmark
        9. semantic segmentation
