Overview of SLAM Research Based on Heterogeneous Data Fusion
(基于异构数据融合的SLAM研究综述)

Affiliation: School of Automation, Nanjing University of Information Science & Technology

    摘要:

    激光与视觉SLAM技术经过几十年的发展,目前都已经较为成熟,并被广泛应用于军事和民用领域。该技术使得机器人能够在没有GPS的条件下,在室内外场景中移动。然而仅依靠单一传感器的SLAM技术都有着各自的局限性,如激光SLAM不适用于周围存在大量动态物体的场景,视觉SLAM在低纹理环境中鲁棒性差,但两者融合使用具有巨大的取长补短的潜力。因此,本文预测激光与视觉甚至是更多传感器融合的SLAM技术将会是未来的主流方向。本文回顾了SLAM技术的发展历程,分析了激光雷达与视觉的硬件信息,给出了一些经典的开源算法与数据集。根据融合传感器所使用的算法,从传统基于不确定度、基于特征以及新颖的基于深度学习的角度详细介绍了多传感器融合方案,概述了多传感器融合方案在复杂场景中的优异性能,并对未来发展做出展望。

    Abstract:

    After decades of development, laser and visual SLAM are both relatively mature and are widely used in military and civil applications. These technologies allow robots to move through indoor and outdoor scenes where GPS signals are unavailable. However, SLAM that relies on a single sensor has inherent limitations: laser SLAM degrades in scenes with many dynamic objects, while visual SLAM is not robust in low-texture environments. Fusing the two has great potential, as each compensates for the other's weaknesses. This paper therefore predicts that SLAM combining laser, vision, and possibly further sensors will be the mainstream direction in the future. It reviews the development of SLAM, analyzes the hardware characteristics of LiDAR and cameras, and presents classical open-source algorithms and datasets. According to the algorithms used for fusion, multi-sensor fusion schemes are surveyed in detail from three perspectives: traditional uncertainty-based methods, feature-based methods, and novel deep-learning-based methods. The strong performance of multi-sensor fusion in complex scenes is summarized, and directions for future development are discussed.
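The "uncertainty-based" fusion the abstract refers to ultimately reduces to weighting each sensor's measurement by its noise variance, as in a Kalman filter update step. The one-dimensional sketch below is illustrative only (the function name and the numbers are invented for this example, not taken from any surveyed system); it fuses a low-noise LiDAR range with a noisier stereo-vision depth estimate:

```python
def fuse_measurements(z1, var1, z2, var2):
    """Fuse two noisy estimates of the same quantity by inverse-variance
    weighting -- the core operation of uncertainty-based (Kalman-style)
    sensor fusion. The lower-variance sensor dominates the result."""
    w1 = var2 / (var1 + var2)            # weight on sensor 1
    w2 = var1 / (var1 + var2)            # weight on sensor 2
    z = w1 * z1 + w2 * z2                # fused mean
    var = (var1 * var2) / (var1 + var2)  # fused variance, smaller than either input
    return z, var

# Hypothetical example: LiDAR range (low noise) fused with stereo depth (higher noise).
lidar_z, lidar_var = 10.02, 0.01    # metres, metres^2
vision_z, vision_var = 10.30, 0.25
z, var = fuse_measurements(lidar_z, lidar_var, vision_z, vision_var)
print(round(z, 3), round(var, 4))   # → 10.031 0.0096
```

Note that the fused variance is below that of either sensor alone, which is why fusion improves robustness even when one modality is weak in a given scene.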

Cite this article:

周铖君, 陈炜峰, 尚光涛, 王曦杨, 徐崇辉, 李振雄. Overview of SLAM Research Based on Heterogeneous Data Fusion [J]. Journal of Nanjing University of Information Science & Technology.
History
  • Received: 2022-10-02
  • Revised: 2023-11-19
  • Accepted: 2023-11-20

Address: No. 219 Ningliu Road, Nanjing, Jiangsu, China    Postcode: 210044

Tel: 025-58731025    E-mail: nxdxb@nuist.edu.cn

Journal of Nanjing University of Information Science & Technology ® 2025 All rights reserved.  Technical support: Beijing Qinyun Technology Development Co., Ltd.