LESO-Net: A Lightweight and Efficient Segmentation Network for Small Objects
Author:
Affiliation: Nanjing University of Information Science and Technology

Fund Project: The National Natural Science Foundation of China (General Program, Key Program, Major Research Plan)

    Abstract:

    Small objects in images often pose significant challenges for segmentation due to their irregular shapes and blurred boundaries. These challenges primarily involve difficult feature extraction, loss of edge detail, and strong noise interference. To address them, we propose LESO-Net, a lightweight and efficient segmentation network for small objects based on You Only Look Once (YOLO) v8n-seg. First, we integrate a Large Separable Kernel Attention (LSKA) module into the Neck network, which improves segmentation accuracy while reducing computational complexity and memory usage. Second, to cope with the unstable shapes of small objects, we replace the C2f module in the backbone with a module built on an improved version of Deformable Convolutional Networks v2 (DCNv2), which significantly strengthens feature extraction and adaptive generalization for small objects of varying shapes. Third, we improve the loss function to mitigate class imbalance and insufficient bounding-box accuracy. We validated LESO-Net on a self-constructed bubble dataset and on the public High-Resolution SAR Images Dataset (HRSID). Compared with the original YOLOv8n-seg, LESO-Net improved precision by 1.2 and 2.5 percentage points and average precision by 0.2 and 1.2 percentage points on the two datasets, respectively, while reducing the number of parameters by nearly 10%. These results demonstrate the superior performance of LESO-Net and show that it meets the accuracy requirements for segmenting small objects in practical remote sensing scenarios.
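The efficiency gain claimed for the LSKA module comes from kernel separability: a large k×k kernel that factors as an outer product can be applied as a 1×k convolution followed by a k×1 convolution, shrinking per-channel weights from k² to 2k. The sketch below (illustrative only, not the paper's code; `conv2d_valid` and all variable names are our own) verifies this equivalence numerically for a rank-1 7×7 kernel:

```python
import numpy as np

def conv2d_valid(img, kernel):
    """Plain 2D cross-correlation with 'valid' padding (no border)."""
    kh, kw = kernel.shape
    h = img.shape[0] - kh + 1
    w = img.shape[1] - kw + 1
    out = np.empty((h, w))
    for i in range(h):
        for j in range(w):
            out[i, j] = np.sum(img[i:i + kh, j:j + kw] * kernel)
    return out

k = 7
rng = np.random.default_rng(0)
u = rng.standard_normal(k)      # k x 1 column factor
v = rng.standard_normal(k)      # 1 x k row factor
K = np.outer(u, v)              # full separable k x k kernel

img = rng.standard_normal((16, 16))

# One pass with the full kernel: k*k = 49 weights per channel.
full = conv2d_valid(img, K)

# Two cascaded 1D passes: 1 x k then k x 1, only 2k = 14 weights.
sep = conv2d_valid(conv2d_valid(img, v[None, :]), u[:, None])

print(np.allclose(full, sep))   # identical outputs
print(k * k, 2 * k)             # parameter count: 49 vs 14 per channel
```

The same arithmetic underlies depthwise-separable large-kernel designs generally: the parameter and FLOP saving grows linearly with k, which is why large receptive fields become affordable in a lightweight Neck.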

History
  • Received: December 28, 2024
  • Revised: February 11, 2025
  • Accepted: February 13, 2025

Address: No. 219, Ningliu Road, Nanjing, Jiangsu Province

Postcode: 210044

Phone: 025-58731025