Micro-expression Recognition Method Based on Dual Attention CrossViT
DOI:
Author:
Affiliation:

Chongqing Normal University

Author Biography:

Corresponding Author:

CLC Number:

Fund Project:

Chongqing Technology Innovation and Application Development Special Project: Research, Development and Application of an Online Learning Evaluation System Based on Learner Facial Emotion Analysis (cstc2020jscx-msxmX0190); Key Project of Science and Technology Research of Chongqing Education Commission: Facial Behavior Analysis and Its Application in Learner Emotion Assessment (KJZD-K202100505)



Abstract:

Micro-expressions are facial expressions that people leak involuntarily when trying to hide their true emotions, and they have become a hot topic in affective computing in recent years. However, their short duration and low intensity make micro-expression recognition a challenging task. Building on the excellent performance of CrossViT in image classification, this paper adopts CrossViT as the backbone network and improves its cross-attention mechanism: a DA (Dual Attention) module is proposed to extend the conventional cross-attention mechanism and capture the correlation between attention results, thereby improving micro-expression recognition accuracy. The network learns from three optical-flow features (optical strain and the horizontal and vertical optical flow fields), computed from the onset frame and the apex frame of each micro-expression sequence, and finally classifies micro-expressions through Softmax. On the composite micro-expression dataset, UF1 and UAR reach 0.7275 and 0.7272 respectively, surpassing mainstream micro-expression recognition algorithms and demonstrating the effectiveness of the proposed network.
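The three input features named in the abstract can be sketched as follows. This is a minimal illustration, not the paper's implementation: the horizontal and vertical flow fields `u` and `v` are assumed to have already been computed between the onset and apex frames by an optical-flow solver (e.g. TV-L1), and the optical strain magnitude is then derived from their spatial gradients; `optical_strain` is an illustrative name.

```python
import numpy as np

def optical_strain(u, v):
    """Optical strain magnitude from horizontal (u) and vertical (v)
    flow fields, via finite-difference spatial gradients."""
    ux, uy = np.gradient(u, axis=1), np.gradient(u, axis=0)
    vx, vy = np.gradient(v, axis=1), np.gradient(v, axis=0)
    exx, eyy = ux, vy              # normal strain components
    exy = 0.5 * (uy + vx)          # shear strain component
    return np.sqrt(exx**2 + eyy**2 + 2.0 * exy**2)

# Stack the three channels fed to the network: [strain, u, v]
u = np.random.randn(28, 28).astype(np.float32)
v = np.random.randn(28, 28).astype(np.float32)
features = np.stack([optical_strain(u, v), u, v])  # shape (3, 28, 28)
```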

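For readers unfamiliar with CrossViT, the cross-attention it builds on can be sketched in a few lines: the class token of one branch serves as the query over the patch tokens of the other branch. The DA module proposed in the paper extends this by modelling the correlation between attention results; since its exact form is not given on this page, only standard single-head, unbatched cross-attention is shown, with illustrative names.

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def cross_attention(cls_token, other_tokens, Wq, Wk, Wv):
    """CrossViT-style cross-attention: the CLS token of one branch
    (query) attends to the patch tokens of the other branch."""
    q = cls_token @ Wq                                # (1, d)
    k = other_tokens @ Wk                             # (n, d)
    v = other_tokens @ Wv                             # (n, d)
    attn = softmax(q @ k.T / np.sqrt(q.shape[-1]))    # (1, n), rows sum to 1
    return attn @ v                                   # (1, d) fused CLS token

rng = np.random.default_rng(0)
d = 16
cls_tok = rng.standard_normal((1, d))
patches = rng.standard_normal((49, d))                # 7x7 patch grid
Wq, Wk, Wv = (rng.standard_normal((d, d)) * 0.1 for _ in range(3))
fused = cross_attention(cls_tok, patches, Wq, Wk, Wv)
```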
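The reported metrics, UF1 (unweighted F1) and UAR (unweighted average recall), average per-class scores with equal weight so that frequent expression classes do not dominate. A minimal sketch of how these metrics are commonly computed (the function name is illustrative):

```python
import numpy as np

def uf1_uar(y_true, y_pred, n_classes):
    """Unweighted F1 and Unweighted Average Recall: per-class F1 and
    recall, averaged with equal weight across classes."""
    f1s, recalls = [], []
    for c in range(n_classes):
        tp = np.sum((y_pred == c) & (y_true == c))
        fp = np.sum((y_pred == c) & (y_true != c))
        fn = np.sum((y_pred != c) & (y_true == c))
        f1s.append(2 * tp / (2 * tp + fp + fn))
        recalls.append(tp / (tp + fn))
    return float(np.mean(f1s)), float(np.mean(recalls))

y_true = np.array([0, 0, 1, 1, 2, 2])
y_pred = np.array([0, 1, 1, 1, 2, 0])
uf1, uar = uf1_uar(y_true, y_pred, 3)
```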

Cite this article:

Ran Ruisheng, Shi Kai, Jiang Xiaopeng, Wang Ning. Micro-expression Recognition Method Based on Dual Attention CrossViT [J]. Journal of Nanjing University of Information Science & Technology, , ():

History
  • Received: 2022-11-18
  • Revised: 2023-02-11
  • Accepted: 2023-02-12
  • Published online:
  • Publication date:

Address: 219 Ningliu Road, Nanjing, Jiangsu Province    Postal code: 210044

Tel: 025-58731025    E-mail: nxdxb@nuist.edu.cn

Journal of Nanjing University of Information Science & Technology ® 2024. All rights reserved.  Technical support: Beijing Qinyun Technology Development Co., Ltd.