PC-VINS-Mono: A Robust Mono Visual-Inertial Odometry with Photometric Calibration

Journal: Journal of Autonomous Intelligence, DOI: 10.32629/jai.v1i2.33

Yao Xiao¹, Xiaogang Ruan², Xiaoqing Zhu²

1. Faculty of Information Technology, Beijing University of Technology, Beijing 100124, China
2.

Abstract

Feature detection and tracking, which rely heavily on the gray values of images, are crucial procedures in Visual-Inertial Odometry (VIO): the tracking results significantly affect both the accuracy of the state estimates and the robustness of the system. In environments with high-contrast lighting, the images captured by an auto-exposure camera change frequently with the exposure time. As a result, the gray value of the same feature varies from frame to frame, which poses a serious challenge to feature detection and tracking. This problem is further aggravated by the nonlinear camera response function and lens attenuation. However, very few VIO methods take full advantage of photometric camera calibration or discuss its influence on VIO. In this paper, we propose a robust monocular visual-inertial odometry, PC-VINS-Mono, which can be understood as an extension of the open-source VIO pipeline VINS-Mono with the capability of photometric calibration. We evaluate the proposed algorithm on a public dataset. Experimental results show that, with photometric calibration, our algorithm achieves better performance than VINS-Mono.
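To make the role of photometric calibration concrete, the following is a minimal Python sketch of the standard photometric image-formation model, I(x) = G(t * V(x) * B(x)), where G is the nonlinear camera response function, t the exposure time, V the vignette (lens attenuation) map, and B the scene irradiance. The function name and inputs (a 256-entry inverse-response lookup table and a per-pixel vignette map) are illustrative assumptions, not the authors' exact implementation.

    import numpy as np

    def photometric_correction(image, exposure_time, inv_response, vignette):
        # Invert the assumed model I(x) = G(t * V(x) * B(x)) to recover
        # the scene irradiance B(x) = G^{-1}(I(x)) / (t * V(x)).
        #
        # image         : HxW uint8 gray image as delivered by the camera
        # exposure_time : exposure time t of this frame (e.g., in seconds)
        # inv_response  : 256-entry lookup table for the inverse response G^{-1}
        # vignette      : HxW attenuation map V(x) with values in (0, 1]
        energy = inv_response[image]                 # undo the nonlinear response G
        return energy / (exposure_time * vignette)   # undo exposure and vignetting

Feeding the corrected irradiance B(x) (or a rescaled version of it) to the feature detector and tracker, instead of the raw gray values, removes the frame-to-frame brightness changes caused by auto exposure, which is the effect described in the abstract.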

Keywords

Photometric Calibration; Visual-Inertial Odometry; Simultaneous Localization and Mapping; Robot Navigation

Copyright © 2019 Yao Xiao, Xiaogang Ruan, Xiaoqing Zhu

This work is licensed under a Creative Commons Attribution-NonCommercial 4.0 International License.