After decades of development, laser and visual SLAM technologies have matured considerably and are widely used in both military and civil applications, enabling robots to operate in indoor and outdoor environments where GPS signals are scarce. However, SLAM relying on a single sensor has inherent limitations: for example, laser SLAM performs poorly in scenes containing many dynamic objects, while visual SLAM lacks robustness in low-texture environments. Because the two modalities are complementary, their fusion holds great potential. This paper therefore predicts that SLAM combining laser and vision, or even additional sensors, will be the mainstream direction in the future. This paper reviews the development of SLAM technology, analyzes the hardware characteristics of LiDAR and cameras, and presents several classical open-source algorithms and datasets. According to the algorithms used for sensor fusion, multi-sensor fusion schemes are introduced in detail from three perspectives: traditional uncertainty-based, feature-based, and novel deep-learning-based approaches. The strong performance of multi-sensor fusion schemes in complex scenes is summarized, and the future development of such schemes is discussed.