Fusion VIO: An Extendable Sensor Fusion Framework for Unmanned Vehicles
Journal: Architecture Engineering and Science. DOI: 10.32629/aes.v3i2.894
Abstract
Real-time perception based on simultaneous localization and mapping (SLAM) technology has immense application potential in industrial fields. In this work, we present an extended Kalman filter (EKF) based visual SLAM framework named Fusion VIO, which integrates an HDR imaging system, hardware synchronization, object detection, filter-based sensor fusion, and semantic fusion. Notably, Fusion VIO fuses loop-closure information, which corrects accumulated error during robot operation and improves the robustness of the system. Moreover, experimental validation shows that Fusion VIO not only maintains high accuracy but also consumes only one-third of the resources of the widely used VINS-Fusion. Our work demonstrates a novel sensor fusion framework for unmanned vehicles.
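To make the filter-based fusion concrete, the sketch below shows a generic EKF predict/update loop of the kind the abstract describes: IMU readings drive the prediction step, while visual measurements and loop-closure constraints enter as filter updates. This is a minimal illustrative sketch, not Fusion VIO's actual implementation; the class name, state layout, and all matrices here are hypothetical.

```python
import numpy as np

class SimpleEKF:
    """Illustrative EKF skeleton; all names and the state layout are hypothetical."""

    def __init__(self, dim_state):
        self.x = np.zeros(dim_state)   # state, e.g. position, velocity, IMU biases
        self.P = np.eye(dim_state)     # state covariance

    def predict(self, F, Q):
        # IMU-driven propagation: x <- F x, P <- F P F^T + Q
        self.x = F @ self.x
        self.P = F @ self.P @ F.T + Q

    def update(self, z, H, R):
        # Fuse one measurement: a visual observation, or a loop-closure
        # constraint expressed as an extra (pseudo-)measurement of the state.
        y = z - H @ self.x                    # innovation
        S = H @ self.P @ H.T + R              # innovation covariance
        K = self.P @ H.T @ np.linalg.inv(S)   # Kalman gain
        self.x = self.x + K @ y
        self.P = (np.eye(len(self.x)) - K @ H) @ self.P

# Toy usage on a 3-dimensional state with a scalar measurement.
ekf = SimpleEKF(dim_state=3)
ekf.predict(F=np.eye(3), Q=0.01 * np.eye(3))
ekf.update(z=np.array([1.0]), H=np.array([[1.0, 0.0, 0.0]]), R=np.array([[0.1]]))
```

In a real VIO system the state would include orientation handled on a manifold, and a loop closure would typically be applied as a relative-pose constraint; the sketch only conveys the predict/update structure into which such measurements are fused.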
Keywords
time synchronization, SLAM, sensor fusion
Copyright © 2022 Ruonan Guo, Mengyu Hu, Chang Shao, Jiayang Zhao
This work is licensed under a Creative Commons Attribution-NonCommercial 4.0 International License