Micro-expression recognition based on dual attention CrossViT

Author: RAN Ruisheng, SHI Kai, JIANG Xiaopeng, WANG Ning

Affiliation:

Author biography:

Corresponding author:

CLC number: TP391.4

Fund project: General Project of the Chongqing Special Program for Technology Innovation and Application Development (cstc2020jscx-msxmX0190); Key Project of the Science and Technology Research Program of Chongqing Municipal Education Commission (KJZD-K202100505)



Abstract:

Micro-expressions are facial expressions that people involuntarily reveal when trying to hide their true emotions, and they have become a hot research topic in affective computing in recent years. A micro-expression is a subtle facial movement, so the features of its fine-grained changes are difficult to capture. Considering the excellent performance of the cross-attention multi-scale Vision Transformer (CrossViT) in image classification and its ability to capture subtle feature information, this paper adopts CrossViT as the backbone network and improves its cross-attention mechanism by proposing a Dual Attention (DA) module, which extends the traditional cross-attention mechanism to determine the correlation between attention results and thereby improves micro-expression recognition accuracy. The proposed network learns from three optical flow features (optical strain, horizontal optical flow field, and vertical optical flow field) computed from the onset frame and apex frame of each micro-expression sequence, and finally classifies the micro-expression with Softmax. On the composite micro-expression dataset, the network achieves a UF1 of 0.7275 and a UAR of 0.7272, outperforming mainstream micro-expression recognition algorithms and verifying the effectiveness of the proposed network.
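The abstract describes the DA module only at a high level: cross-attention between the two CrossViT branches is extended so that the correlation between the attention results themselves is modeled. The PyTorch sketch below illustrates one plausible reading of that idea; the class names, the shared embedding dimension across branches, and the fusion-by-a-second-attention step are assumptions made for illustration, not the authors' implementation.

```python
# Hypothetical sketch of a "dual attention" extension of CrossViT-style
# cross-attention. For simplicity both branches are assumed to share one
# embedding dimension (CrossViT itself projects between branch dimensions).
import torch
import torch.nn as nn


class CrossAttention(nn.Module):
    """Single-query cross-attention: the CLS token of one branch attends to
    the patch tokens of the other branch, as in CrossViT."""

    def __init__(self, dim, num_heads=8):
        super().__init__()
        self.attn = nn.MultiheadAttention(dim, num_heads, batch_first=True)

    def forward(self, cls_token, patch_tokens):
        # cls_token: (B, 1, dim); patch_tokens: (B, N, dim)
        out, _ = self.attn(cls_token, patch_tokens, patch_tokens)
        return out  # (B, 1, dim)


class DualAttention(nn.Module):
    """Hypothetical DA module: run cross-attention in both directions, then
    apply a second attention over the two results so each attended CLS token
    can incorporate its correlation with the other branch's result."""

    def __init__(self, dim, num_heads=8):
        super().__init__()
        self.cross_s2l = CrossAttention(dim, num_heads)  # small-branch CLS -> large-branch patches
        self.cross_l2s = CrossAttention(dim, num_heads)  # large-branch CLS -> small-branch patches
        self.fuse = nn.MultiheadAttention(dim, num_heads, batch_first=True)

    def forward(self, cls_small, patches_large, cls_large, patches_small):
        a = self.cross_s2l(cls_small, patches_large)   # (B, 1, dim)
        b = self.cross_l2s(cls_large, patches_small)   # (B, 1, dim)
        pair = torch.cat([a, b], dim=1)                # (B, 2, dim)
        fused, _ = self.fuse(pair, pair, pair)         # correlate the two attention results
        return fused[:, :1] + a, fused[:, 1:] + b      # residual connection per branch
```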

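The network's input is the three-channel optical flow representation named in the abstract: horizontal flow, vertical flow, and optical strain, all computed between the onset frame and the apex frame of each clip. The sketch below shows one way to build that input; the abstract does not name the flow estimator, so OpenCV's Farneback method is used here only as a stand-in, and the strain formula is the one commonly used in the micro-expression literature.

```python
# Build an (H, W, 3) feature map [u, v, optical strain] from onset and apex frames.
import cv2
import numpy as np


def flow_features(onset_gray: np.ndarray, apex_gray: np.ndarray) -> np.ndarray:
    # Dense optical flow between onset and apex (Farneback as an illustrative choice).
    flow = cv2.calcOpticalFlowFarneback(
        onset_gray, apex_gray, None,
        pyr_scale=0.5, levels=3, winsize=15,
        iterations=3, poly_n=5, poly_sigma=1.2, flags=0)
    u, v = flow[..., 0], flow[..., 1]  # horizontal and vertical flow fields

    # Spatial derivatives of the flow field (np.gradient returns d/dy, d/dx).
    du_dy, du_dx = np.gradient(u)
    dv_dy, dv_dx = np.gradient(v)

    # Optical strain magnitude: sqrt(e_xx^2 + e_yy^2 + 2 * e_xy^2),
    # with e_xy = 0.5 * (du/dy + dv/dx).
    e_xx, e_yy = du_dx, dv_dy
    e_xy = 0.5 * (du_dy + dv_dx)
    strain = np.sqrt(e_xx ** 2 + e_yy ** 2 + 2.0 * e_xy ** 2)

    return np.stack([u, v, strain], axis=-1)
```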

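The reported UF1 (unweighted F1) and UAR (unweighted average recall) are macro averages of the per-class F1 score and per-class recall, which makes them robust to the class imbalance of the composite micro-expression dataset. A minimal scikit-learn sketch; the label arrays are purely illustrative.

```python
from sklearn.metrics import f1_score, recall_score

y_true = [0, 0, 1, 1, 2, 2, 2]   # ground-truth emotion classes (illustrative)
y_pred = [0, 1, 1, 1, 2, 0, 2]   # predicted classes (illustrative)

uf1 = f1_score(y_true, y_pred, average="macro")      # mean of per-class F1
uar = recall_score(y_true, y_pred, average="macro")  # mean of per-class recall
print(f"UF1 = {uf1:.4f}, UAR = {uar:.4f}")
```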
Cite this article:

RAN Ruisheng, SHI Kai, JIANG Xiaopeng, WANG Ning. Micro-expression recognition based on dual attention CrossViT[J]. Journal of Nanjing University of Information Science & Technology (Natural Science Edition), 2023, 15(5): 541-550.

History
  • Received: 2022-11-18
  • Last revised:
  • Accepted:
  • Published online: 2023-10-24
  • Publication date:
