Abstract: To navigate autonomously in unstructured environments, unmanned vehicles must analyze the traversability of the terrain. Existing methods rely on LiDAR or vision, but LiDAR systems suffer from sparse point clouds and high cost, while traditional vision approaches cannot effectively capture and represent the three-dimensional spatial structure of the scene. To address these challenges, this paper presents WildOcc, the first method for traversability analysis in unstructured environments based on occupancy prediction. WildOcc extracts multi-scale features from monocular RGB images, projects 3D occupancy labels onto the images, and introduces a road attention mechanism that queries these points and fuses the corresponding image information into 3D features, which a decoder and semantic segmentation head then convert into traversable-area predictions. To estimate the three-dimensional traversability of the environment accurately, WildOcc is supervised with 3D occupancy labels; because point cloud data in unstructured environments are sparse, this paper designs a data enhancement module, Dense Label Generate (DLG), that produces dense occupancy labels and improves the accuracy of the supervision. Building on the DLG module, this paper constructs the first dataset suitable for occupancy prediction in unstructured environments. Comprehensive experiments on this dataset show that, compared with occupancy prediction methods designed for structured environments, the DLG module improves mIoU by 0.7%, and combined with WildOcc it improves mIoU by 1%, effectively increasing prediction accuracy and robustness.
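
To make the pipeline summarized above concrete, the following is a minimal PyTorch sketch of a monocular occupancy-prediction flow of this kind: multi-scale image features, projection of 3D voxel points onto the image, attention-weighted fusion, and a 3D decoder with a semantic segmentation head. All class, function, and parameter names (e.g. MonocularOccupancySketch, scale_attn) are illustrative assumptions; the toy encoder and the simplified per-point attention stand in for WildOcc's actual backbone and road attention mechanism, which are not specified here. This is a sketch under stated assumptions, not the authors' implementation.

```python
# Minimal, illustrative sketch only; shapes, names, and the simplified attention
# are assumptions and do not reproduce the WildOcc architecture.
import torch
import torch.nn as nn
import torch.nn.functional as F


class MonocularOccupancySketch(nn.Module):
    """Monocular image -> multi-scale features -> per-voxel 3D features -> semantic occupancy."""

    def __init__(self, num_classes=2, feat_dim=64, grid_size=(32, 32, 8)):
        super().__init__()
        self.grid_size = grid_size
        # Toy multi-scale image encoder (stands in for a real backbone + FPN).
        self.stage1 = nn.Sequential(nn.Conv2d(3, feat_dim, 3, stride=2, padding=1), nn.ReLU())
        self.stage2 = nn.Sequential(nn.Conv2d(feat_dim, feat_dim, 3, stride=2, padding=1), nn.ReLU())
        # Simplified per-point attention over the two scales (stand-in for road attention).
        self.scale_attn = nn.Sequential(nn.Linear(2 * feat_dim, 2), nn.Softmax(dim=-1))
        # 3D decoder and semantic segmentation head over the voxel grid.
        self.decoder = nn.Sequential(nn.Conv3d(feat_dim, feat_dim, 3, padding=1), nn.ReLU())
        self.seg_head = nn.Conv3d(feat_dim, num_classes, 1)

    def project_points(self, points_cam, intrinsics, image_hw):
        """Project 3D points (camera frame) to normalized image coordinates in [-1, 1]."""
        uvw = points_cam @ intrinsics.T                        # (N, 3)
        uv = uvw[:, :2] / uvw[:, 2:3].clamp(min=1e-5)          # pixel coordinates
        h, w = image_hw
        return torch.stack([uv[:, 0] / w, uv[:, 1] / h], dim=-1) * 2 - 1

    def sample_feats(self, feat_map, uv_norm):
        """Bilinearly sample image features at the projected point locations."""
        grid = uv_norm.view(1, 1, -1, 2)                       # (1, 1, N, 2)
        sampled = F.grid_sample(feat_map, grid, align_corners=False)
        return sampled.squeeze(2).squeeze(0).T                 # (N, C)

    def forward(self, image, voxel_centers_cam, intrinsics):
        # 1) Multi-scale features from the monocular RGB image.
        f1 = self.stage1(image)                                # 1/2 resolution
        f2 = self.stage2(f1)                                   # 1/4 resolution
        # 2) Project voxel query points onto the image and sample features per scale.
        uv_norm = self.project_points(voxel_centers_cam, intrinsics, image.shape[-2:])
        s1 = self.sample_feats(f1, uv_norm)                    # (N, C)
        s2 = self.sample_feats(f2, uv_norm)                    # (N, C)
        # 3) Attention-weighted fusion of the multi-scale samples into per-voxel features.
        w = self.scale_attn(torch.cat([s1, s2], dim=-1))       # (N, 2)
        fused = w[:, :1] * s1 + w[:, 1:] * s2                  # (N, C)
        # 4) Reshape into a dense voxel grid, decode, and predict semantic occupancy.
        X, Y, Z = self.grid_size
        vox = fused.T.reshape(1, -1, X, Y, Z)                  # (1, C, X, Y, Z); N must equal X*Y*Z
        return self.seg_head(self.decoder(vox))                # (1, num_classes, X, Y, Z)


# Usage (shapes only): the voxel centers must follow the grid's memory order.
model = MonocularOccupancySketch()
img = torch.rand(1, 3, 256, 512)
K = torch.tensor([[400.0, 0, 256], [0, 400, 128], [0, 0, 1]])  # hypothetical intrinsics
pts = torch.rand(32 * 32 * 8, 3) + torch.tensor([0.0, 0.0, 2.0])  # keep depth positive
logits = model(img, pts, K)                                    # (1, 2, 32, 32, 8)
```

The dense occupancy labels produced by the DLG module would supervise the output of such a head with a voxel-wise classification loss; the label-generation step itself is not sketched here.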