基于多权自适应交互的运动模糊图像复原
作者:朱立忠,曹旭琪,李军
作者单位:

1.沈阳理工大学;2.沈阳中科博微科技股份有限公司

基金项目:

国家重点研发计划(2017YFC0821001-2)


Motion Blurred Image Restoration Based on Multi-Weight Adaptive Interaction
Author: ZHU Lizhong, CAO Xuqi, LI Jun
Affiliation:

1.Shenyang Ligong University; 2.Shenyang Zhongke Bowei Technology Co., Ltd.

Fund Project:

National Key Research and Development Program of China (2017YFC0821001-2)

    摘要:

    针对运动场景拍摄的图像出现不均匀模糊现象,导致工业环境下的机器视觉任务处理效率低下的问题,提出一种基于多权自适应交互的运动模糊图像复原算法。首先,采用多策略特征提取模块,从模糊图像中提取出浅层和关键的纹理信息并平滑噪声,同时构建残差语义块,深入挖掘图像的深层语义信息。然后,提出双通道自适应权重提取模块,从退化图像中捕获空间及像素的权重信息,并逐步将这些信息补偿到网络中。最后,设计一种权重特征融合模块,融合网络所提取的多空间权重特征,并结合多项损失函数,进一步改善图像质量。主客观对比及消融实验结果表明,所提算法在标准数据集上的SSIM和PSNR分别达到0.93和31.89 dB,各模块协调良好,在复原运动场景下的非均匀模糊图像方面具有显著优势。

    Abstract:

    To address the problem that images captured in motion scenes suffer from non-uniform blur, which lowers the efficiency of machine vision tasks in industrial environments, a motion-blurred image restoration algorithm based on multi-weight adaptive interaction is proposed. First, a multi-strategy feature extraction module is employed to extract shallow and critical texture information from the blurred image and smooth noise, while a residual semantic block is constructed to mine the deep semantic information of the image. Then, a dual-channel adaptive weight extraction module is proposed to capture spatial and pixel-wise weight information from the degraded image and progressively compensate this information into the network. Finally, a weight feature fusion module is designed to fuse the multi-spatial weight features extracted by the network, and multiple loss functions are combined to further improve image quality. Subjective, objective, and ablation experiments on standard datasets show that the proposed algorithm achieves an SSIM of 0.93 and a PSNR of 31.89 dB, that the modules coordinate well with each other, and that the algorithm has a significant advantage in restoring non-uniformly blurred images of motion scenes.
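As a rough illustration of the mechanism described in the abstract, the following is a minimal sketch assuming a PyTorch implementation; the module structure, names, and hyperparameters are hypothetical and are not the authors' code. It shows one way a dual-channel block could derive a spatial weight map and a per-pixel (channel-wise) weight map from degraded features and compensate the re-weighted features back into the network through a residual path, with a simple concatenation-plus-1×1-convolution stage standing in for the weight feature fusion module.

```python
# Minimal, hypothetical sketch of the ideas in the abstract (not the authors' code).
import torch
import torch.nn as nn


class DualChannelWeightBlock(nn.Module):
    """Derives a spatial weight map and a pixel/channel weight map from the input
    features and uses both to re-weight the features (illustrative only)."""

    def __init__(self, channels: int):
        super().__init__()
        # Spatial branch: predict one weight per spatial location.
        self.spatial = nn.Sequential(
            nn.Conv2d(channels, channels // 2, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels // 2, 1, kernel_size=3, padding=1),
            nn.Sigmoid(),
        )
        # Pixel branch: predict one weight per channel at every pixel (1x1 convolutions).
        self.pixel = nn.Sequential(
            nn.Conv2d(channels, channels // 2, kernel_size=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels // 2, channels, kernel_size=1),
            nn.Sigmoid(),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        w_spatial = self.spatial(x)   # (B, 1, H, W)
        w_pixel = self.pixel(x)       # (B, C, H, W)
        # Residual compensation: feed the re-weighted features back into the stream.
        return x + x * w_spatial + x * w_pixel


class WeightFeatureFusion(nn.Module):
    """Fuses several weighted feature maps by concatenation and a 1x1 convolution
    (a stand-in for the weight feature fusion module described above)."""

    def __init__(self, channels: int, num_branches: int):
        super().__init__()
        self.fuse = nn.Conv2d(channels * num_branches, channels, kernel_size=1)

    def forward(self, feats):
        return self.fuse(torch.cat(feats, dim=1))


if __name__ == "__main__":
    x = torch.randn(1, 32, 64, 64)                    # dummy feature map
    block = DualChannelWeightBlock(32)
    fusion = WeightFeatureFusion(32, num_branches=2)
    print(fusion([block(x), x]).shape)                # torch.Size([1, 32, 64, 64])
```

The SSIM and PSNR figures quoted in the abstract (0.93 and 31.89 dB) are standard full-reference quality metrics; one common way to compute them, shown here with scikit-image on placeholder images (the paper's exact evaluation protocol may differ), is:

```python
# Hedged example: computing PSNR/SSIM with scikit-image on placeholder images.
import numpy as np
from skimage.metrics import peak_signal_noise_ratio, structural_similarity

gt = np.random.randint(0, 256, (256, 256, 3), dtype=np.uint8)   # placeholder ground truth
restored = np.clip(gt.astype(int) + np.random.randint(-5, 6, gt.shape), 0, 255).astype(np.uint8)

print(peak_signal_noise_ratio(gt, restored, data_range=255))
print(structural_similarity(gt, restored, channel_axis=-1, data_range=255))
```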

引用本文

朱立忠,曹旭琪,李军.基于多权自适应交互的运动模糊图像复原[J].南京信息工程大学学报,,():

历史
  • 收稿日期:2024-04-26
  • 最后修改日期:2024-06-02
  • 录用日期:2024-06-03
