TQU-SLAM Benchmark Feature-based Dataset for Building Monocular VO
Received: 24 April 2024 | Revised: 12 May 2024 | Accepted: 26 May 2024 | Online: 2 August 2024
Corresponding author: Van-Hung Le
Abstract
This paper introduces the TQU-SLAM benchmark dataset, which comprises 160,631 RGB-D frame pairs intended for Deep Learning (DL)-based training of Visual SLAM and Visual Odometry (VO) models. It was collected from the corridors of three interconnected buildings with a total length of about 230 m. The ground-truth data of the TQU-SLAM benchmark dataset, including the 6-DOF camera poses, 3D point cloud data, intrinsic camera parameters, and the transformation matrix between the camera coordinate system and the real world, were prepared manually. The TQU-SLAM benchmark dataset was tested with the PySLAM framework using traditional features, such as SHI_TOMASI, SIFT, SURF, ORB, ORB2, AKAZE, KAZE, and BRISK, as well as features extracted by a DL model, VGG. Experiments were also conducted with DPVO for VO estimation. The camera pose estimation results are evaluated and presented in detail, and the challenges of the TQU-SLAM benchmark dataset are analyzed.
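As an illustration of how camera pose estimation results of this kind are typically evaluated, the sketch below computes the Absolute Trajectory Error (ATE) RMSE between an estimated trajectory and ground truth. The function name and the toy trajectories are hypothetical, and the two trajectories are assumed to be already time-synchronized and aligned (real pipelines first align them, e.g. with a Horn/Umeyama fit); this is not the paper's own evaluation code.

```python
# Minimal sketch: ATE RMSE over per-frame translational error.
# Trajectories are equal-length lists of (x, y, z) camera positions,
# assumed synchronized and aligned to a common coordinate frame.
import math

def ate_rmse(gt, est):
    """Root-mean-square of the per-frame Euclidean position error."""
    assert len(gt) == len(est), "trajectories must have equal length"
    sq_errors = [
        (gx - ex) ** 2 + (gy - ey) ** 2 + (gz - ez) ** 2
        for (gx, gy, gz), (ex, ey, ez) in zip(gt, est)
    ]
    return math.sqrt(sum(sq_errors) / len(sq_errors))

# Toy example: estimate drifts 0.1 m sideways at every frame.
gt = [(0.0, 0.0, 0.0), (1.0, 0.0, 0.0), (2.0, 0.0, 0.0)]
est = [(0.0, 0.1, 0.0), (1.0, -0.1, 0.0), (2.0, 0.1, 0.0)]
print(ate_rmse(gt, est))  # 0.1 m for these toy trajectories
```

The same scalar summary is what trajectory plots in VO benchmarks are usually condensed to, which makes results comparable across feature extractors.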
Keywords:
TQU-SLAM benchmark dataset, Visual Odometry, RGB-D images, 3D trajectory, Feature-based extraction
References
K. Wang, S. Ma, J. Chen, F. Ren, and J. Lu, "Approaches, Challenges, and Applications for Deep Visual Odometry: Toward Complicated and Emerging Areas," IEEE Transactions on Cognitive and Developmental Systems, vol. 14, no. 1, pp. 35–49, Mar. 2022.
A. Neyestani, F. Picariello, A. Basiri, P. Daponte, and L. D. Vito, "Survey and Research Challenges in Monocular Visual Odometry," in International Workshop on Metrology for Living Environment (MetroLivEnv), Milano, Italy, Dec. 2023, pp. 107–112.
L. R. Agostinho, N. M. Ricardo, M. I. Pereira, A. Hiolle, and A. M. Pinto, "A Practical Survey on Visual Odometry for Autonomous Driving in Challenging Scenarios and Conditions," IEEE Access, vol. 10, pp. 72182–72205, Jan. 2022.
M. A. Haq, A. K. Jilani, and P. Prabu, "Deep learning based modeling of groundwater storage change," Computers, Materials and Continua, vol. 70, no. 3, pp. 4599–4617, 2022.
M. A. Haq, "CDLSTM: A novel model for climate change forecasting," Computers, Materials and Continua, vol. 71, no. 2, pp. 2363–2381, 2022.
M. A. Haq, "Smotednn: A novel model for air pollution forecasting and aqi classification," Computers, Materials and Continua, vol. 71, no. 1, pp. 1403–1425, 2022.
M. A. Haq, "Planetscope Nanosatellites Image Classification Using Machine Learning," Computer Systems Science and Engineering, vol. 42, no. 3, pp. 1031–1046, Feb. 2022.
M. A. Haq, "CNN Based Automated Weed Detection System Using UAV Imagery," Computer Systems Science and Engineering, vol. 42, no. 2, pp. 837–849, Jan. 2022.
M. A. Haq, G. Rahaman, P. Baral, and A. Ghosh, "Deep Learning Based Supervised Image Classification Using UAV Images for Forest Areas Classification," Journal of the Indian Society of Remote Sensing, vol. 49, no. 3, pp. 601–606, Mar. 2021.
D. G. Lowe, "Distinctive Image Features from Scale-Invariant Keypoints," International Journal of Computer Vision, vol. 60, no. 2, pp. 91–110, Nov. 2004.
H. Bay, T. Tuytelaars, and L. Van Gool, "SURF: Speeded Up Robust Features," in European Conference on Computer Vision, Graz, Austria, May 2006, pp. 404–417.
E. Rublee, V. Rabaud, K. Konolige, and G. Bradski, "ORB: An efficient alternative to SIFT or SURF," in International Conference on Computer Vision, Barcelona, Spain, Nov. 2011, pp. 2564–2571.
S. Leutenegger, M. Chli, and R. Y. Siegwart, "BRISK: Binary Robust invariant scalable keypoints," in International Conference on Computer Vision, Barcelona, Spain, Nov. 2011, pp. 2548–2555.
B. D. Lucas and T. Kanade, "An Iterative Image Registration Technique with an Application to Stereo Vision," in 7th International Joint Conference on Artificial Intelligence, Vancouver, BC, Canada, Aug. 1981, vol. 2, pp. 674–679.
A. Geiger, P. Lenz, and R. Urtasun, "Are we ready for autonomous driving? The KITTI vision benchmark suite," in IEEE Conference on Computer Vision and Pattern Recognition, Providence, RI, USA, Jun. 2012, pp. 3354–3361.
A. Geiger, P. Lenz, C. Stiller, and R. Urtasun, "Vision meets robotics: The KITTI dataset," The International Journal of Robotics Research, vol. 32, no. 11, pp. 1231–1237, Sep. 2013.
M. Menze and A. Geiger, "Object Scene Flow for Autonomous Vehicles," in IEEE Conference on Computer Vision and Pattern Recognition, Boston, MA, USA, Jun. 2015, pp. 3061–3070.
J. Sturm, N. Engelhard, F. Endres, W. Burgard, and D. Cremers, "A benchmark for the evaluation of RGB-D SLAM systems," in IEEE/RSJ International Conference on Intelligent Robots and Systems, Vilamoura-Algarve, Portugal, Oct. 2012, pp. 573–580.
N. Silberman, D. Hoiem, P. Kohli, and R. Fergus, "Indoor Segmentation and Support Inference from RGBD Images," in European Conference on Computer Vision, Florence, Italy, Oct. 2012, pp. 746–760.
A. Handa, T. Whelan, J. McDonald, and A. J. Davison, "A benchmark for RGB-D visual odometry, 3D reconstruction and SLAM," in IEEE International Conference on Robotics and Automation, Hong Kong, China, Jun. 2014, pp. 1524–1531.
L. M. Hodne, E. Leikvoll, M. Yip, A. L. Teigen, A. Stahl, and R. Mester, "Detecting and Suppressing Marine Snow for Underwater Visual SLAM," in IEEE/CVF Conference on Computer Vision and Pattern Recognition, New Orleans, LA, USA, Jun. 2022, pp. 5101–5109.
R. Mur-Artal, J. M. M. Montiel, and J. D. Tardos, "ORB-SLAM: A Versatile and Accurate Monocular SLAM System," IEEE Transactions on Robotics, vol. 31, no. 5, pp. 1147–1163, Oct. 2015.
R. Mur-Artal and J. D. Tardos, "ORB-SLAM2: An Open-Source SLAM System for Monocular, Stereo, and RGB-D Cameras," IEEE Transactions on Robotics, vol. 33, no. 5, pp. 1255–1262, Oct. 2017.
P. F. Alcantarilla, A. Bartoli, and A. J. Davison, "KAZE Features," in European Conference on Computer Vision, Florence, Italy, Oct. 2012, pp. 214–227.
K. Simonyan and A. Zisserman, "Very Deep Convolutional Networks for Large-Scale Image Recognition." arXiv, Apr. 10, 2015.
V. H. Le, "DATA_TQU_SLAM." [Online]. Available: https://drive.google.com/drive/folders/16Dx_nORUvUHFg2BU9mm8aBYMvtAzE9m7.
L. Freda, "luigifreda/pyslam." Jun. 03, 2024, [Online]. Available: https://github.com/luigifreda/pyslam.
"Lesson 3: Linear Regression," Machine Learning cơ bản. [Online]. Available: https://machinelearningcoban.com/2016/12/28/linearregression/.
M. Dusmanu et al., "D2-Net: A Trainable CNN for Joint Detection and Description of Local Features," in IEEE/CVF Conference on Computer Vision and Pattern Recognition, Long Beach, CA, USA, Jun. 2019.
F. Fraundorfer and D. Scaramuzza, "Visual Odometry : Part II: Matching, Robustness, Optimization, and Applications," IEEE Robotics & Automation Magazine, vol. 19, no. 2, pp. 78–90, Jun. 2012.
V.-H. Le, H. Vu, T. T. Nguyen, T.-L. Le, and T.-H. Tran, "Acquiring qualified samples for RANSAC using geometrical constraints," Pattern Recognition Letters, vol. 102, pp. 58–66, Jan. 2018.
Z. Teed, L. Lipson, and J. Deng, "Deep Patch Visual Odometry," Advances in Neural Information Processing Systems, vol. 36, pp. 39033–39051, Dec. 2023.
A. Alayed, R. Alidrisi, E. Feras, S. Aboukozzana, and A. Alomayri, "Real-Time Inspection of Fire Safety Equipment using Computer Vision and Deep Learning," Engineering, Technology & Applied Science Research, vol. 14, no. 2, pp. 13290–13298, Apr. 2024.
A. N. Sazaly, M. F. M. Ariff, and A. F. Razali, "3D Indoor Crime Scene Reconstruction from Micro UAV Photogrammetry Technique," Engineering, Technology & Applied Science Research, vol. 13, no. 6, pp. 12020–12025, Dec. 2023.
M. Ramzan, M. S. Farooq, A. Zamir, W. Akhtar, M. Ilyas, and H. U. Khan, "An Analysis of Issues for Adoption of Cloud Computing in Telecom Industries," Engineering, Technology & Applied Science Research, vol. 8, no. 4, pp. 3157–3161, Aug. 2018.
License
Copyright (c) 2024 Van-Hung Le, Huu-Son Do, Van-Nam Phan, Trung-Hieu Te
This work is licensed under a Creative Commons Attribution 4.0 International License.
Authors who publish with this journal agree to the following terms:
- Authors retain the copyright and grant the journal the right of first publication with the work simultaneously licensed under a Creative Commons Attribution License that allows others to share the work with an acknowledgement of the work's authorship and initial publication in this journal.
- Authors are able to enter into separate, additional contractual arrangements for the non-exclusive distribution of the journal's published version of the work (e.g., post it to an institutional repository or publish it in a book), with an acknowledgement of its initial publication in this journal.
- Authors are permitted and encouraged to post their work online (e.g., in institutional repositories or on their website) after its publication in ETASR with an acknowledgement of its initial publication in this journal.