2025, Vol. 51, No. 5, pp. 443-448
Design of a Learning Effect Evaluation System Based on Recursive Cascaded Multimodal Emotion Analysis
Foundation: 2024 First Batch of College-Level Research Projects of Ningbo City College of Vocational Technology (ZWX24047); First Batch of "14th Five-Year Plan" Teaching Reform Projects for Higher Vocational Education of Zhejiang Province (JG20230114); Ningbo Key Educational Science Planning Project (2022YZD025)
DOI: 10.20149/j.cnki.issn1008-1739.2025.05.004

Abstract:

Aiming at the application of multimodal emotion analysis to learning state detection, a recursive cascaded framework model named the Recursive Multimodal Emotion Framework (RMEF) is designed. The framework integrates various features of the learning state for classification, including discussion occasions, interaction attributes, and temporal coherence. For multimodal feature extraction, facial landmark positions, facial action units, gaze direction, and head pose are obtained from multi-person video data using OpenFace 2.0. Additionally, an approach combining visual and auditory cues is developed for practical training dialogue scenarios: it fuses facial information, especially mouth-related features, with concurrent auditory information to identify the current speaker. Experimental results demonstrate that RMEF not only improves recognition accuracy but also significantly enhances model adaptability, showing considerable value as enterprises move from educational software development toward intelligent systems. The research provides new technical pathways and solutions for applying multimodal emotion analysis to learning state detection.
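The audio-visual fusion idea described above can be illustrated with a minimal sketch: correlate each face's mouth-opening signal (which OpenFace 2.0 landmarks can provide, e.g. the distance between inner-lip points) with the audio energy envelope, and take the best-matching face as the current speaker. This is an illustrative toy under invented assumptions (frame length, synthetic signals, face IDs), not the paper's implementation.

```python
import numpy as np

def rms_envelope(audio, frame_len):
    """Per-frame RMS energy of a mono audio signal."""
    n = len(audio) // frame_len
    frames = audio[: n * frame_len].reshape(n, frame_len)
    return np.sqrt((frames ** 2).mean(axis=1))

def current_speaker(mouth_openings, audio_energy):
    """mouth_openings: dict face_id -> per-frame mouth aperture series.
    Scores each face by Pearson correlation between its mouth motion and
    the audio energy; returns the best-matching face and all scores."""
    scores = {}
    for face_id, opening in mouth_openings.items():
        m = min(len(opening), len(audio_energy))
        scores[face_id] = np.corrcoef(opening[:m], audio_energy[:m])[0, 1]
    return max(scores, key=scores.get), scores

# Toy data: 2 s of "speech" (an amplitude-modulated tone) at 16 kHz,
# analysed in 10 ms frames; face "A" moves its mouth in sync with the
# audio envelope, face "B" sits idle.
sr, frame_len = 16000, 160
t = np.arange(sr * 2) / sr
audio = np.sin(2 * np.pi * 220 * t) * np.abs(np.sin(2 * np.pi * 1.5 * t))
energy = rms_envelope(audio, frame_len)          # 200 frames

rng = np.random.default_rng(0)
mouths = {
    "A": energy + 0.02 * rng.standard_normal(len(energy)),  # in sync
    "B": 0.02 * rng.standard_normal(len(energy)) + 0.3,     # idle face
}
speaker, scores = current_speaker(mouths, energy)
print(speaker)  # the face whose mouth motion tracks the audio
```

A real system would replace the synthetic mouth series with per-face apertures computed from OpenFace 2.0 landmark output and feed the detected speaker's features into the downstream emotion classifier.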


Basic Information:

DOI:10.20149/j.cnki.issn1008-1739.2025.05.004

CLC Number: TP391.41

Citation:

[1] WU Jiayu, LI Yafeng. Design of a Learning Effect Evaluation System Based on Recursive Cascaded Multimodal Emotion Analysis[J]. Computer & Network, 2025, 51(05): 443-448. DOI: 10.20149/j.cnki.issn1008-1739.2025.05.004.

