WO2023060720A1 - Emotional state display method, device and system - Google Patents

Emotional state display method, device and system

Info

Publication number
WO2023060720A1
Authority
WO
WIPO (PCT)
Prior art keywords
image
emotional
facial image
emotional state
expression
Prior art date
Application number
PCT/CN2021/133513
Other languages
English (en)
French (fr)
Inventor
栗觅
胡斌
吕胜富
Original Assignee
北京工业大学
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 北京工业大学
Publication of WO2023060720A1

Classifications

    • A: HUMAN NECESSITIES
    • A61: MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B: DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B 5/00: Measuring for diagnostic purposes; Identification of persons
    • A61B 5/16: Devices for psychotechnics; Testing reaction times; Devices for evaluating the psychological state
    • A61B 5/165: Evaluating the state of mind, e.g. depression, anxiety

Definitions

  • the present disclosure relates to the technical field of artificial intelligence, and in particular to a method, device and system for displaying emotional states.
  • the main emotional manifestations of stress are psychological tension, palpitation, irritability, emotional instability, etc.
  • the main emotional manifestations of anxiety are dread, fear, restlessness, apprehension, etc.
  • the main emotional manifestations of depression are low mood, distress and sadness, anhedonia, and loss of interest.
  • If abnormal emotional states such as stress, anxiety, and depression are not examined and their severity not evaluated accurately and promptly, and timely psychological intervention is not provided, these abnormal emotional states may develop into anxiety disorder or depressive disorder.
  • at present, the evaluation and discrimination of mental states mainly relies on various psychological self-rating scales (such as the GAD-7 self-rating anxiety scale and the PHQ-9 self-rating depression scale).
  • because these self-rating scales lack affective indicators directly related to mood, their evaluation accuracy is low.
  • the evaluation results of such self-rating scales are usually presented as numerical values.
  • interpreting these values usually requires considerable professional knowledge; the analysis process is strongly affected by subjective factors, and the person being evaluated cannot intuitively understand his or her own mental state.
  • Embodiments of the present disclosure provide a method, device and system for displaying emotional states, which can improve the accuracy of displaying emotional states.
  • an embodiment of the present disclosure provides a method for displaying an emotional state, including:
  • enhancing the region of interest in the expression feature image according to the emotion index to obtain the target feature image includes:
  • the emotional state display method provided in this embodiment can further improve the correlation between the expression pattern image and the emotional state by calculating the weight coefficient according to the emotional index.
  • calculating the weight coefficient of the expression feature image according to the emotional index includes:
  • An emotional state display method provided in this embodiment calculates a weight coefficient through the first gradient map and the second gradient map, so that the weight coefficient can reflect the importance of the corresponding expression feature image.
  • performing feature extraction on the facial image to obtain an expression feature image includes:
  • the facial image is input into the fourth network model comprising a plurality of convolution kernels to obtain sub-feature maps in one-to-one correspondence with the convolution kernels;
  • the emotional state display method provided in this embodiment helps to strengthen each sub-feature map according to its correlation with the emotional state through weighted fusion of the sub-feature maps.
  • before the facial image is input into the first network model to obtain the emotion index, the method further includes: down-sampling the facial image;
  • performing noise reduction on the down-sampled facial image.
  • the face image is down-sampled before the noise reduction processing, which helps to improve the noise reduction effect.
  • a plurality of target feature images corresponding to the facial image are superimposed on the facial image to obtain an expression pattern image for displaying an emotional state.
  • the emotional state display method provided in this embodiment helps to strengthen the data related to the emotional state in the expression pattern image by superimposing the target feature images generated from multiple facial images on the facial image.
  • acquiring the subject's facial image based on the emotional stimulation signal includes: acquiring a plurality of emotions corresponding to the emotional state;
  • facial images of the subject are acquired based on emotional stimulation signals in one-to-one correspondence with the emotions.
  • the embodiment of the present disclosure provides a mental state display device, including:
  • An acquisition module configured to acquire the subject's facial image based on the emotional stimulation signal
  • the first data processing module is used to input the facial image into the first network model to obtain the emotional index
  • a feature extraction module used to perform feature extraction on the facial image to obtain an expression feature image
  • the second data processing module is used to adjust the intensity of the expression feature image according to the emotional index to obtain the target feature image
  • a third data processing module configured to superimpose the target feature image on the facial image to obtain an expression pattern image for displaying an emotional state.
  • an embodiment of the present disclosure provides a mental state presentation system, including: .
  • the emotional stimulation module is used to provide video or audio of a set emotion for the subject to watch;
  • the facial image collection module is used to collect facial images of the subject while watching the video or listening to the audio;
  • the data processing module is used to process the facial image to obtain an expression pattern image for displaying the emotional state and send it to the feedback module;
  • the feedback module is used to display the expression pattern image to the subject.
  • the feedback module includes an emotional abnormality discrimination module and a display module;
  • the data processing module is also used to obtain an emotional index according to the facial image and send it to the abnormal emotion discrimination module;
  • the emotional abnormality discrimination module is used to generate an emotional abnormality risk level according to the emotion index, and to send the risk level to the display module;
  • the display module is used for displaying the expression pattern image, and displaying a warning image according to the risk level of abnormal emotion.
  • the emotional state display method provided by the embodiments of the present disclosure can preliminarily confirm the risk that the subject's emotional state is abnormal through the emotion index obtained by the first network model, can further obtain image information related to the emotional state by performing feature extraction and partial enhancement on the facial image, and can intuitively show the subject's emotional state through the expression pattern image obtained by superimposing the target feature image on the facial image.
  • Fig. 1 is a flowchart of an emotional state display method according to an embodiment of the present disclosure
  • Fig. 2 is a structural block diagram of an emotional state display device according to an embodiment of the present disclosure
  • Fig. 3 is a schematic diagram of an emotional state display system according to an embodiment of the present disclosure.
  • 21: acquisition module; 22: first data processing module; 23: feature extraction module; 24: second data processing module; 25: third data processing module;
  • 31: emotional stimulation module; 32: facial image collection module; 33: data processing module; 34: feedback module.
  • Fig. 1 is a flow chart of a method for presenting an emotional state according to an embodiment of the present disclosure. As shown in Figure 1, the embodiment of the present disclosure provides a method for displaying an emotional state, including the following steps:
  • the emotional stimulation signals include emotional stimulation signals corresponding to positive emotions and emotional stimulation signals corresponding to negative emotions. The positive emotion can be happiness, and the negative emotion can be sadness, fear or tension. The emotional state can be anxiety, depression or stress. When the emotional state is anxiety, facial images can be acquired through emotional stimulation signals corresponding to happiness and to fear, respectively. When the emotional state is depression, facial images can be acquired through emotional stimulation signals corresponding to happiness and to sadness. When the emotional state is stress, facial images can optionally be acquired through emotional stimulation signals corresponding to happiness and to tension. The emotional stimulation signal can be a video, a virtual reality scene or audio. When the emotional stimulation signal is audio, the subject's expressions can optionally be collected while the subject listens to the audio with eyes closed, so as to minimize interference from environmental factors.
  • before the facial image is input into the first network model, it can first be down-sampled, the down-sampled image can then be de-noised, and the de-noised image can finally be cropped, reducing the impact of noise on subsequent data processing.
  • the higher the emotion index, the greater the risk of an abnormal emotional state.
  • the training process of the first network model includes acquiring a facial image as a training sample, acquiring an emotional index corresponding to the facial image as a label, and training the initial network model to obtain the first network model.
  • the first network model may be a convolutional neural network model.
  • S103: Perform feature extraction on the facial image to obtain an expression feature image.
  • the expression feature image includes features related to expression in the face image.
  • S104: Enhance the region of interest in the expression feature image according to the emotion index to obtain the target feature image.
  • the region of interest can be selected as the region that has an influence on the emotional state.
  • S105: Superimpose the target feature image on the facial image to obtain an expression pattern image for displaying an emotional state.
  • Expression pattern images are able to highlight facial features associated with emotional states.
  • the emotional state display method provided by the embodiments of the present disclosure can preliminarily confirm the risk that the subject's emotional state is abnormal through the emotion index obtained by the first network model, can further obtain image information related to the emotional state by performing feature extraction and partial enhancement on the facial image, and can intuitively show the subject's emotional state through the expression pattern image obtained by superimposing the target feature image on the facial image.
  • it is determined whether the emotion index is greater than a set value; if so, the facial image is input into the second network model to obtain a first prediction vector, a first gradient map of the expression feature image is obtained according to the first prediction vector, and the average gradient value of the first gradient map is calculated as the weight coefficient of the expression feature image; if not, the facial image is input into the third network model to obtain a second prediction vector, a second gradient map of the expression feature image is obtained according to the second prediction vector, and the average gradient value of the second gradient map is calculated as the weight coefficient of the expression feature image.
  • the second network model and the third network model are deep learning network models.
  • the facial image is input into the fourth network model comprising a plurality of convolution kernels to obtain sub-feature maps in one-to-one correspondence with the convolution kernels; weight vectors in one-to-one correspondence with the convolution kernels are generated according to the distribution of expressions in the facial image; and the sub-feature maps are weighted and fused according to their corresponding weight vectors to obtain the expression feature image.
  • acquiring the facial image of the subject based on the emotional stimulation signal includes: acquiring a plurality of emotions corresponding to the emotional state.
  • facial images of the subject are acquired based on emotional stimulation signals in one-to-one correspondence with the emotions.
  • a plurality of target feature images corresponding to the facial image are superimposed on the facial image to obtain an expression pattern image for displaying an emotional state.
  • Fig. 2 is a structural block diagram of an emotional state display device according to an embodiment of the present disclosure. As shown in Figure 2, an embodiment of the present disclosure provides an emotional state display device, including:
  • Acquisition module 21 for obtaining the facial image of the subject based on the emotional stimulation signal
  • the first data processing module 22 is used to input the facial image into the first network model to obtain the emotion index;
  • the feature extraction module 23 is used to perform feature extraction on the facial image to obtain an expression feature image;
  • the second data processing module 24 is used to adjust the intensity of the expression feature image according to the emotion index to obtain the target feature image;
  • the third data processing module 25 is configured to superimpose the target feature image on the facial image to obtain an expression pattern image for displaying an emotional state.
  • the emotional state display device provided by the embodiments of the present disclosure can preliminarily confirm the risk that the subject's emotional state is abnormal through the emotion index obtained by the first network model, can further obtain image information related to the emotional state by performing feature extraction and partial enhancement on the facial image, and can intuitively show the subject's emotional state through the expression pattern image obtained by superimposing the target feature image on the facial image.
  • Fig. 3 shows an emotional state display system according to an embodiment of the present disclosure. As shown in Fig. 3, an embodiment of the present disclosure provides an emotional state display system, including:
  • the emotional stimulation module 31 is used to provide video or audio of a set emotion for the subject to watch.
  • the emotional stimulation module can optionally use the display or earphones of the local machine (including but not limited to mobile terminals, tablet computers, desktop computers and notebook computers) to present videos of positive and negative emotions to the subject.
  • Emotional stimulus signals can be selected as video or audio.
  • the facial image collection module 32 is used to collect facial images of the subject while watching the video.
  • the facial image collection module can optionally use the camera of the local machine (including but not limited to mobile phones, tablet computers, desktop computers and notebook computers; external cameras are also included) to synchronously capture the subject's facial images, and then save the facial images to the local machine or send them to a server (including but not limited to local servers and cloud servers).
  • the data processing module 33 is used to process the facial image to obtain an expression pattern image for showing the emotional state and send it to the feedback module.
  • the data processing module can optionally input the facial image into the first network model to obtain the emotion index, perform feature extraction on the facial image to obtain an expression feature image, enhance the region of interest in the expression feature image according to the emotion index to obtain a target feature image, and superimpose the target feature image on the facial image to obtain the expression pattern image for displaying the emotional state.
  • the data processing module can optionally process the facial images on the local machine (including but not limited to mobile terminals, tablet computers, desktop computers and notebook computers) or on a server (including but not limited to local servers and cloud servers).
  • the feedback module 34 is used to display the expression pattern image to the subject.
  • the feedback module includes an emotional abnormality discrimination module and a display module.
  • the display module can optionally include a mobile terminal display, a notebook computer display or a desktop computer display.
  • the display module can optionally include a local display (including but not limited to a mobile terminal, a tablet computer, a desktop computer and a notebook computer).
  • the data processing module is also used to obtain the emotional index according to the facial image and send it to the emotional abnormality discrimination module.
  • the emotional abnormality discrimination module is used to generate emotional abnormality risk levels according to the emotional index, and send the risk levels to the display module.
  • the display module is used to display the expression pattern image, and to display a warning image according to the emotional abnormality risk level.
  • the emotional abnormality risk level can optionally be divided into level 1, level 2, level 3 and level 4 according to threshold ranges of the emotion index, from low to high.
  • at level 1 the emotional state is normal and the corresponding warning image can be a white bar; at level 2 the emotional state is mildly abnormal and the warning image can be a green bar; at level 3 the emotional state is moderately abnormal and the warning image can be a blue bar; at level 4 the emotional state is severely abnormal and the warning image can be a red bar.
  • the system provided by the embodiments of the present disclosure can determine whether the emotional state is abnormal and how severe the abnormality is, and display this intuitively to the subject; the image form of display is easier to understand.
  • the present invention can measure and evaluate the subject's emotional state more objectively, and is therefore of great value for realizing self-health management and improving quality of life.

Landscapes

  • Health & Medical Sciences (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Psychiatry (AREA)
  • Engineering & Computer Science (AREA)
  • Pathology (AREA)
  • Medical Informatics (AREA)
  • Hospice & Palliative Care (AREA)
  • Psychology (AREA)
  • Social Psychology (AREA)
  • Physics & Mathematics (AREA)
  • Developmental Disabilities (AREA)
  • Biophysics (AREA)
  • Child & Adolescent Psychology (AREA)
  • Biomedical Technology (AREA)
  • Heart & Thoracic Surgery (AREA)
  • Educational Technology (AREA)
  • Molecular Biology (AREA)
  • Surgery (AREA)
  • Animal Behavior & Ethology (AREA)
  • General Health & Medical Sciences (AREA)
  • Public Health (AREA)
  • Veterinary Medicine (AREA)
  • Image Processing (AREA)
  • Processing Or Creating Images (AREA)
  • Image Analysis (AREA)

Abstract

The present disclosure provides an emotional state display method, device and system. The method includes: acquiring a facial image of a subject based on an emotional stimulation signal; inputting the facial image into a first network model to obtain an emotion index; performing feature extraction on the facial image to obtain an expression feature image; enhancing a region of interest in the expression feature image according to the emotion index to obtain a target feature image; and superimposing the target feature image on the facial image to obtain an expression pattern image for displaying the emotional state. The emotion index obtained through the first network model can preliminarily confirm the risk that the subject's emotional state is abnormal; feature extraction and partial enhancement of the facial image can further acquire image information related to the emotional state; and the expression pattern image obtained by superimposing the target feature image on the facial image can intuitively display the subject's emotional state.

Description

Emotional State Display Method, Device and System
CROSS-REFERENCE
This application is based on and claims priority to Chinese patent application No. 202111178893.5, filed on October 11, 2021, the entire contents of which are incorporated herein by reference.
TECHNICAL FIELD
The present disclosure relates to the technical field of artificial intelligence, and in particular to a method, device and system for displaying emotional states.
BACKGROUND
With economic and social development, people's demands on the quality and efficiency of study, work and life have risen, increasing psychological stress. If long-term psychological stress is not relieved, psychological abnormalities arise and may develop into anxiety and depression. According to a Chinanews.com report of December 29, 2020, after patients leave the intensive care unit, about 40% develop anxiety symptoms and about 30% develop depressive symptoms.
Psychological stress, anxiety and depression all manifest as affective disorders: the main emotional manifestations of stress are psychological tension, palpitations, restlessness and emotional instability; the main emotional manifestations of anxiety are dread, fear, restlessness and apprehension; the main emotional manifestations of depression are low mood, distress and sadness, anhedonia and reduced interest.
If abnormal emotional states such as stress, anxiety and depression cannot be examined and their severity evaluated accurately and promptly, and timely psychological intervention is not provided, these abnormal emotional states may develop into anxiety disorder or depressive disorder.
At present, the evaluation and discrimination of mental states mainly relies on various psychological self-rating scales (such as the GAD-7 self-rating anxiety scale and the PHQ-9 self-rating depression scale). Because these self-rating scales lack affective indicators directly related to mood, their evaluation accuracy is low. Their results are usually presented as numerical values; interpreting such values generally requires considerable professional knowledge, the analysis process is strongly influenced by subjective factors, and the person being evaluated cannot intuitively understand his or her own mental state.
SUMMARY
Embodiments of the present disclosure provide an emotional state display method, device and system, which can improve the accuracy of emotional state display.
To this end, the embodiments of the present disclosure provide the following technical solutions:
In a first aspect, an embodiment of the present disclosure provides an emotional state display method, including:
acquiring a facial image of a subject based on an emotional stimulation signal;
inputting the facial image into a first network model to obtain an emotion index;
performing feature extraction on the facial image to obtain an expression feature image;
enhancing a region of interest in the expression feature image according to the emotion index to obtain a target feature image;
superimposing the target feature image on the facial image to obtain an expression pattern image for displaying an emotional state.
Optionally, enhancing the region of interest in the expression feature image according to the emotion index to obtain the target feature image includes:
calculating a weight coefficient of the expression feature image according to the emotion index;
multiplying each pixel of the expression feature image by the weight coefficient to obtain the target feature image;
wherein the higher the emotion index, the larger the weight coefficient.
The emotional state display method provided in this embodiment can further improve the correlation between the expression pattern image and the emotional state by calculating the weight coefficient according to the emotion index.
Optionally, calculating the weight coefficient of the expression feature image according to the emotion index includes:
determining whether the emotion index is greater than a set value;
if so, inputting the facial image into a second network model to obtain a first prediction vector;
obtaining a first gradient map of the expression feature image according to the first prediction vector;
calculating an average gradient value of the first gradient map as the weight coefficient of the expression feature image;
if not, inputting the facial image into a third network model to obtain a second prediction vector;
obtaining a second gradient map of the expression feature image according to the second prediction vector;
calculating an average gradient value of the second gradient map as the weight coefficient of the expression feature image.
The emotional state display method provided in this embodiment calculates the weight coefficient from the first gradient map and the second gradient map, so that the weight coefficient reflects the importance of the corresponding expression feature image.
Optionally, performing feature extraction on the facial image to obtain the expression feature image includes:
inputting the facial image into a fourth network model comprising a plurality of convolution kernels to obtain sub-feature maps in one-to-one correspondence with the convolution kernels;
generating weight vectors in one-to-one correspondence with the convolution kernels according to the distribution of expressions in the facial image;
performing weighted fusion on the sub-feature maps according to the weight vectors corresponding to the sub-feature maps to obtain the expression feature image.
In the emotional state display method provided in this embodiment, weighted fusion of the sub-feature maps helps to strengthen each sub-feature map separately according to its correlation with the emotional state.
Optionally, before inputting the facial image into the first network model to obtain the emotion index, the method further includes:
down-sampling the facial image;
performing noise reduction on the down-sampled facial image.
In the emotional state display method provided in this embodiment, down-sampling the facial image before noise reduction helps to improve the noise reduction effect.
Optionally, there are a plurality of facial images;
a plurality of target feature images corresponding to the facial images are superimposed on the facial image to obtain the expression pattern image for displaying the emotional state.
In the emotional state display method provided in this embodiment, superimposing target feature images generated from multiple facial images onto the facial image helps to strengthen the data related to the emotional state in the expression pattern image.
Optionally, acquiring the facial image of the subject based on the emotional stimulation signal includes:
acquiring a plurality of emotions corresponding to the emotional state;
acquiring facial images of the subject based on emotional stimulation signals in one-to-one correspondence with the emotions.
In a second aspect, an embodiment of the present disclosure provides a mental state display device, including:
an acquisition module, configured to acquire a facial image of a subject based on an emotional stimulation signal;
a first data processing module, configured to input the facial image into a first network model to obtain an emotion index;
a feature extraction module, configured to perform feature extraction on the facial image to obtain an expression feature image;
a second data processing module, configured to adjust the intensity of the expression feature image according to the emotion index to obtain a target feature image;
a third data processing module, configured to superimpose the target feature image on the facial image to obtain an expression pattern image for displaying an emotional state.
In a third aspect, an embodiment of the present disclosure provides a mental state display system, including:
an emotional stimulation module, configured to provide video or audio of a set emotion for the subject to watch;
a facial image collection module, configured to collect facial images of the subject while watching the video or listening to the audio;
a data processing module, configured to process the facial images to obtain an expression pattern image for displaying an emotional state and send it to a feedback module;
the feedback module, configured to display the expression pattern image to the subject.
Optionally, the feedback module includes an emotional abnormality discrimination module and a display module;
the data processing module is further configured to obtain an emotion index according to the facial image and send it to the emotional abnormality discrimination module;
the emotional abnormality discrimination module is configured to generate an emotional abnormality risk level according to the emotion index, and to send the risk level to the display module;
the display module is configured to display the expression pattern image, and to display a warning image according to the emotional abnormality risk level.
One or more technical solutions provided in the embodiments of the present disclosure have the following advantages:
In the emotional state display method provided by the embodiments of the present disclosure, the emotion index obtained through the first network model can preliminarily confirm the risk that the subject's emotional state is abnormal; feature extraction and partial enhancement of the facial image can further acquire image information related to the emotional state; and the expression pattern image obtained by superimposing the target feature image on the facial image can intuitively display the subject's emotional state.
BRIEF DESCRIPTION OF THE DRAWINGS
Fig. 1 is a flowchart of an emotional state display method according to an embodiment of the present disclosure;
Fig. 2 is a structural block diagram of an emotional state display device according to an embodiment of the present disclosure;
Fig. 3 is a schematic diagram of an emotional state display system according to an embodiment of the present disclosure.
Reference numerals:
21: acquisition module; 22: first data processing module; 23: feature extraction module; 24: second data processing module; 25: third data processing module;
31: emotional stimulation module; 32: facial image collection module; 33: data processing module; 34: feedback module.
DETAILED DESCRIPTION
To make the objectives, technical solutions and advantages of the present disclosure clearer, the present disclosure is described in further detail below with reference to specific embodiments and the accompanying drawings. It should be understood that these descriptions are merely exemplary and are not intended to limit the scope of the present disclosure. In addition, descriptions of well-known structures and technologies are omitted below to avoid unnecessarily obscuring the concepts of the present disclosure.
The embodiments described herein are some, rather than all, of the embodiments of the present disclosure. All other embodiments obtained by a person of ordinary skill in the art based on the embodiments of the present disclosure without creative effort fall within the protection scope of the present disclosure.
In the description of the present disclosure, it should be noted that the terms "first", "second" and "third" are used for descriptive purposes only and should not be understood as indicating or implying relative importance.
In addition, the technical features involved in the different embodiments of the present disclosure described below may be combined with each other as long as they do not conflict with each other.
Fig. 1 is a flowchart of an emotional state display method according to an embodiment of the present disclosure. As shown in Fig. 1, an embodiment of the present disclosure provides an emotional state display method, including the following steps:
S101: Acquire a facial image of a subject based on an emotional stimulation signal. The emotional stimulation signals include signals corresponding to positive emotions and signals corresponding to negative emotions. The positive emotion may be happiness; the negative emotion may be sadness, fear or tension. The emotional state may be anxiety, depression or stress. When the emotional state is anxiety, facial images may be acquired using stimulation signals corresponding to happiness and to fear, respectively. When the emotional state is depression, facial images may be acquired using stimulation signals corresponding to happiness and to sadness. When the emotional state is stress, facial images may be acquired using stimulation signals corresponding to happiness and to tension. The emotional stimulation signal may be a video, a virtual reality scene or audio. When the signal is audio, the subject's expressions may be captured while the subject listens with eyes closed, so as to minimize interference from environmental factors.
S102: Input the facial image into a first network model to obtain an emotion index. Before the facial image is input into the first network model, it may first be down-sampled, the down-sampled image may then be de-noised, and the de-noised image may finally be cropped, reducing the impact of noise on subsequent data processing. In some embodiments, the higher the emotion index, the greater the risk that the emotional state is abnormal. In some embodiments, the training process of the first network model includes acquiring facial images as training samples, acquiring the emotion indices corresponding to the facial images as labels, and training an initial network model to obtain the first network model. The first network model may be a convolutional neural network model.
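The preprocessing order of S102 (down-sample, then de-noise, then crop) could be sketched with OpenCV as below. The OpenCV functions are real, but the parameter choices and crop box are assumptions, since the disclosure does not specify them.

```python
import cv2

def preprocess(face_bgr, crop_box=(50, 50, 350, 350)):
    """S102 preprocessing sketch: down-sample, de-noise, then crop a facial image."""
    small = cv2.pyrDown(face_bgr)                      # 2x down-sampling before denoising
    denoised = cv2.fastNlMeansDenoisingColored(small)  # non-local-means noise reduction
    x0, y0, x1, y1 = crop_box                          # assumed face region, not from the patent
    return denoised[y0:y1, x0:x1]
```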
S103: Perform feature extraction on the facial image to obtain an expression feature image. The expression feature image contains the features of the facial image that are related to expression.
S104: Enhance a region of interest in the expression feature image according to the emotion index to obtain a target feature image. The region of interest may be a region that influences the emotional state.
S105: Superimpose the target feature image on the facial image to obtain an expression pattern image for displaying the emotional state. The expression pattern image can highlight the facial features associated with the emotional state.
In the emotional state display method provided by the embodiments of the present disclosure, the emotion index obtained through the first network model can preliminarily confirm the risk that the subject's emotional state is abnormal; feature extraction and partial enhancement of the facial image can further acquire image information related to the emotional state; and the expression pattern image obtained by superimposing the target feature image on the facial image can intuitively display the subject's emotional state.
In some embodiments, it is determined whether the emotion index is greater than a set value. If so, the facial image is input into the second network model to obtain a first prediction vector, a first gradient map of the expression feature image is obtained according to the first prediction vector, and the average gradient value of the first gradient map is calculated as the weight coefficient of the expression feature image. If not, the facial image is input into the third network model to obtain a second prediction vector, a second gradient map of the expression feature image is obtained according to the second prediction vector, and the average gradient value of the second gradient map is calculated as the weight coefficient of the expression feature image. The second network model and the third network model are deep learning network models.
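A minimal PyTorch sketch of this branching weight computation follows. It assumes each model's forward pass returns both its prediction vector and the expression feature image (kept in the autograd graph), and that the maximum component of the prediction vector is the scalar backpropagated to obtain the gradient map; both are illustrative assumptions, not details fixed by the disclosure.

```python
import torch

def weight_coefficient(face, emotion_index, set_value, second_model, third_model):
    """Average gradient of the expression feature image, used as its weight coefficient.

    Assumes model(face) -> (prediction_vector, expression_feature_image), with the
    feature image participating in the prediction so gradients can flow back to it.
    """
    model = second_model if emotion_index > set_value else third_model
    pred, feature_image = model(face)            # first or second prediction vector
    score = pred.max()                           # scalar to backpropagate (assumed choice)
    grad_map = torch.autograd.grad(score, feature_image)[0]  # first/second gradient map
    return grad_map.mean()                       # average gradient value = weight coefficient
```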
The facial image is input into the fourth network model comprising a plurality of convolution kernels to obtain sub-feature maps in one-to-one correspondence with the convolution kernels; weight vectors in one-to-one correspondence with the convolution kernels are generated according to the distribution of expressions in the facial image; and weighted fusion is performed on the sub-feature maps according to their corresponding weight vectors to obtain the expression feature image.
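The "fourth network model" step can be illustrated with a single convolutional layer. The kernel count and size below, and the way the weight vector is supplied, are assumptions for the sketch, since the disclosure does not fix them.

```python
import torch
import torch.nn as nn

conv = nn.Conv2d(in_channels=1, out_channels=8, kernel_size=3, padding=1)  # 8 kernels

def expression_feature_image(face_gray, weight_vector):
    """Fuse per-kernel sub-feature maps into one expression feature image."""
    sub_maps = conv(face_gray)              # (N, 8, H, W): one sub-feature map per kernel
    w = weight_vector.view(1, -1, 1, 1)     # one weight per convolution kernel
    return (sub_maps * w).sum(dim=1)        # weighted fusion across kernels

# Example: an 8-entry weight vector for a 64x64 grayscale face image.
# feature = expression_feature_image(torch.randn(1, 1, 64, 64), torch.rand(8))
```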
Each pixel of the expression feature image is multiplied by the weight coefficient to obtain the target feature image.
In some embodiments, acquiring the facial image of the subject based on the emotional stimulation signal includes acquiring a plurality of emotions corresponding to the emotional state, and acquiring facial images of the subject based on emotional stimulation signals in one-to-one correspondence with the emotions. There are a plurality of facial images. A plurality of target feature images corresponding to the facial images are superimposed on the facial image to obtain the expression pattern image for displaying the emotional state.
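Putting the last two steps together, pixel-wise enhancement by the weight coefficient followed by superposition of several target feature images, a NumPy sketch might look like the following; the final rescaling to 0-255 is an added assumption to keep the result displayable.

```python
import numpy as np

def expression_pattern_image(face, feature_images, weight_coeffs):
    """Enhance each expression feature image by its weight coefficient, then
    superimpose all resulting target feature images on the facial image."""
    targets = [w * f for w, f in zip(weight_coeffs, feature_images)]  # target feature images
    pattern = face.astype(np.float64) + np.sum(targets, axis=0)       # superposition
    pattern -= pattern.min()                                          # rescale for display
    return (255.0 * pattern / max(pattern.max(), 1e-9)).astype(np.uint8)
```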
Fig. 2 is a structural block diagram of an emotional state display device according to an embodiment of the present disclosure. As shown in Fig. 2, an embodiment of the present disclosure provides an emotional state display device, including:
an acquisition module 21, configured to acquire a facial image of a subject based on an emotional stimulation signal;
a first data processing module 22, configured to input the facial image into a first network model to obtain an emotion index;
a feature extraction module 23, configured to perform feature extraction on the facial image to obtain an expression feature image;
a second data processing module 24, configured to adjust the intensity of the expression feature image according to the emotion index to obtain a target feature image;
a third data processing module 25, configured to superimpose the target feature image on the facial image to obtain an expression pattern image for displaying an emotional state.
In the emotional state display device provided by the embodiments of the present disclosure, the emotion index obtained through the first network model can preliminarily confirm the risk that the subject's emotional state is abnormal; feature extraction and partial enhancement of the facial image can further acquire image information related to the emotional state; and the expression pattern image obtained by superimposing the target feature image on the facial image can intuitively display the subject's emotional state.
Fig. 3 shows an emotional state display system according to an embodiment of the present disclosure. As shown in Fig. 3, an embodiment of the present disclosure provides an emotional state display system, including:
The emotional stimulation module 31 is configured to provide video or audio of a set emotion for the subject to watch. The emotional stimulation module may use the display or earphones of the local machine (including but not limited to mobile terminals, tablet computers, desktop computers and notebook computers) to present videos of positive and negative emotions to the subject. The emotional stimulation signal may be video or audio.
The facial image collection module 32 is configured to collect facial images of the subject while watching the video. The facial image collection module may use the camera of the local machine (including but not limited to mobile phones, tablet computers, desktop computers and notebook computers; external cameras are also included) to synchronously capture the subject's facial images, and then save the facial images to the local machine or send them to a server (including but not limited to local servers and cloud servers).
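As an illustration of the collection module, the OpenCV sketch below grabs frames from a local camera; the device index, frame count and the decision to buffer frames in memory are assumptions.

```python
import cv2

def collect_frames(n_frames=30, device_index=0):
    """Synchronously capture facial images from the local camera (sketch)."""
    cap = cv2.VideoCapture(device_index)      # local or external camera (assumed index)
    frames = []
    try:
        while len(frames) < n_frames:
            ok, frame = cap.read()
            if not ok:
                break
            frames.append(frame)              # save locally or send to a server afterwards
    finally:
        cap.release()
    return frames
```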
The data processing module 33 is configured to process the facial images to obtain an expression pattern image for displaying the emotional state and send it to the feedback module. The data processing module may input the facial image into the first network model to obtain the emotion index, perform feature extraction on the facial image to obtain an expression feature image, enhance the region of interest in the expression feature image according to the emotion index to obtain a target feature image, and superimpose the target feature image on the facial image to obtain the expression pattern image for displaying the emotional state. The facial images may be processed on the local machine (including but not limited to mobile terminals, tablet computers, desktop computers and notebook computers) or on a server (including but not limited to local servers and cloud servers).
The feedback module 34 is configured to display the expression pattern image to the subject. The feedback module includes an emotional abnormality discrimination module and a display module. The display module may include a mobile terminal display, a notebook computer display or a desktop computer display; it may also be the display of the local machine (including but not limited to mobile terminals, tablet computers, desktop computers and notebook computers). The data processing module is further configured to obtain the emotion index according to the facial image and send it to the emotional abnormality discrimination module. The emotional abnormality discrimination module is configured to generate an emotional abnormality risk level according to the emotion index and send the risk level to the display module. The display module is configured to display the expression pattern image and to display a warning image according to the emotional abnormality risk level. The emotional abnormality risk level may be divided into level 1, level 2, level 3 and level 4 according to threshold ranges of the emotion index, from low to high. At level 1 the emotional state is normal and the corresponding warning image may be a white bar; at level 2 the emotional state is mildly abnormal and the warning image may be a green bar; at level 3 the emotional state is moderately abnormal and the warning image may be a blue bar; at level 4 the emotional state is severely abnormal and the warning image may be a red bar.
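The emotional abnormality discrimination module's threshold-based grading can be sketched as follows; the numeric thresholds are illustrative assumptions, while the level-to-colour mapping follows the embodiment above.

```python
LEVEL_COLOURS = {1: "white", 2: "green", 3: "blue", 4: "red"}  # warning-bar colours

def risk_level(emotion_index, thresholds=(0.25, 0.5, 0.75)):
    """Map an emotion index to an emotional abnormality risk level (1-4)."""
    level = 1 + sum(emotion_index > t for t in thresholds)  # higher index -> higher level
    return level, LEVEL_COLOURS[level]
```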
The system provided by the embodiments of the present disclosure can determine whether the emotional state is abnormal and how severe the abnormality is, and display this intuitively to the subject; the image form of display is easier to understand. The present invention can measure and evaluate the subject's emotional state more objectively, and is therefore of great value for realizing self-health management and improving quality of life.
It should be understood that the above specific embodiments of the present disclosure are merely intended to exemplify or explain the principles of the present disclosure and do not constitute a limitation on the present disclosure. Therefore, any modifications, equivalent substitutions, improvements and the like made without departing from the spirit and scope of the present disclosure shall fall within the protection scope of the present disclosure. In addition, the appended claims of the present disclosure are intended to cover all changes and modifications that fall within the scope and boundaries of the appended claims, or the equivalents of such scope and boundaries.

Claims (10)

  1. An emotional state display method, comprising:
    acquiring a facial image of a subject based on an emotional stimulation signal;
    inputting the facial image into a first network model to obtain an emotion index;
    performing feature extraction on the facial image to obtain an expression feature image;
    enhancing a region of interest in the expression feature image according to the emotion index to obtain a target feature image;
    superimposing the target feature image on the facial image to obtain an expression pattern image for displaying an emotional state.
  2. The emotional state display method according to claim 1, wherein enhancing the region of interest in the expression feature image according to the emotion index to obtain the target feature image comprises:
    calculating a weight coefficient of the expression feature image according to the emotion index;
    multiplying each pixel of the expression feature image by the weight coefficient to obtain the target feature image;
    wherein the higher the emotion index, the larger the weight coefficient.
  3. The emotional state display method according to claim 2, wherein calculating the weight coefficient of the expression feature image according to the emotion index comprises:
    determining whether the emotion index is greater than a set value;
    if so, inputting the facial image into a second network model to obtain a first prediction vector;
    obtaining a first gradient map of the expression feature image according to the first prediction vector;
    calculating an average gradient value of the first gradient map as the weight coefficient of the expression feature image;
    if not, inputting the facial image into a third network model to obtain a second prediction vector;
    obtaining a second gradient map of the expression feature image according to the second prediction vector;
    calculating an average gradient value of the second gradient map as the weight coefficient of the expression feature image.
  4. The emotional state display method according to claim 1, wherein performing feature extraction on the facial image to obtain the expression feature image comprises:
    inputting the facial image into a fourth network model comprising a plurality of convolution kernels to obtain sub-feature maps in one-to-one correspondence with the convolution kernels;
    generating weight vectors in one-to-one correspondence with the convolution kernels according to the distribution of expressions in the facial image;
    performing weighted fusion on the sub-feature maps according to the weight vectors corresponding to the sub-feature maps to obtain the expression feature image.
  5. The emotional state display method according to claim 1, before inputting the facial image into the first network model to obtain the emotion index, further comprising:
    down-sampling the facial image;
    performing noise reduction on the down-sampled facial image.
  6. The emotional state display method according to any one of claims 1-5, wherein there are a plurality of facial images;
    a plurality of target feature images corresponding to the facial images are superimposed on the facial image to obtain the expression pattern image for displaying the emotional state.
  7. The emotional state display method according to claim 6, wherein acquiring the facial image of the subject based on the emotional stimulation signal comprises:
    acquiring a plurality of emotions corresponding to the emotional state;
    acquiring facial images of the subject based on emotional stimulation signals in one-to-one correspondence with the emotions.
  8. An emotional state display device, comprising:
    an acquisition module, configured to acquire a facial image of a subject based on an emotional stimulation signal;
    a first data processing module, configured to input the facial image into a first network model to obtain an emotion index;
    a feature extraction module, configured to perform feature extraction on the facial image to obtain an expression feature image;
    a second data processing module, configured to adjust the intensity of the expression feature image according to the emotion index to obtain a target feature image;
    a third data processing module, configured to superimpose the target feature image on the facial image to obtain an expression pattern image for displaying an emotional state.
  9. An emotional state display system, comprising:
    an emotional stimulation module, configured to provide video or audio of a set emotion to a subject;
    a facial image collection module, configured to collect facial images of the subject while watching the video or listening to the audio;
    a data processing module, configured to process the facial images to obtain an expression pattern image for displaying an emotional state and send it to a feedback module;
    the feedback module, configured to display the expression pattern image to the subject.
  10. The emotional state display system according to claim 9, wherein the feedback module comprises an emotional abnormality discrimination module and a display module;
    the data processing module is further configured to obtain an emotion index according to the facial image and send it to the emotional abnormality discrimination module;
    the emotional abnormality discrimination module is configured to generate an emotional abnormality risk level according to the emotion index, and to send the risk level to the display module;
    the display module is configured to display the expression pattern image, and to display a warning image according to the emotional abnormality risk level.
PCT/CN2021/133513 2021-10-11 2021-11-26 Emotional state display method, device and system WO2023060720A1 (zh)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202111178893.5 2021-10-11
CN202111178893.5A CN113610067B (zh) 2021-10-11 2021-10-11 Emotional state display method, device and system

Publications (1)

Publication Number Publication Date
WO2023060720A1 true WO2023060720A1 (zh) 2023-04-20

Family

ID=78343487

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2021/133513 WO2023060720A1 (zh) 2021-10-11 2021-11-26 Emotional state display method, device and system

Country Status (2)

Country Link
CN (1) CN113610067B (zh)
WO (1) WO2023060720A1 (zh)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113610067B (zh) * 2021-10-11 2021-12-28 北京工业大学 Emotional state display method, device and system


Family Cites Families (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6186145B1 (en) * 1994-05-23 2001-02-13 Health Hero Network, Inc. Method for diagnosis and treatment of psychological and emotional conditions using a microprocessor-based virtual reality simulator
CN105559802B (zh) * 2015-07-29 2018-11-02 北京工业大学 Depression diagnosis system and data processing method based on fusion of attention and emotional information

Patent Citations (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20010056228A1 (en) * 2000-06-27 2001-12-27 Drdc Limited Diagnosis system, diagnosis data producing method, information processing device, terminal device and recording medium used in the diagnosis data producing method
CN105635574A (zh) * 2015-12-29 2016-06-01 小米科技有限责任公司 Image processing method and apparatus
CN106060572A (zh) * 2016-06-08 2016-10-26 乐视控股(北京)有限公司 Video playing method and apparatus
CN106341608A (zh) * 2016-10-28 2017-01-18 维沃移动通信有限公司 Emotion-based photographing method and mobile terminal
TWM573494U (zh) * 2018-10-02 2019-01-21 眾匯智能健康股份有限公司 System for providing corresponding services according to facial expressions
CN110147822A (zh) * 2019-04-16 2019-08-20 北京师范大学 Emotion index calculation method based on facial action unit detection
CN111598133A (zh) * 2020-04-22 2020-08-28 腾讯科技(深圳)有限公司 Artificial-intelligence-based image display method, apparatus, device and medium
CN112465909A (zh) * 2020-12-07 2021-03-09 南开大学 Class activation mapping target localization method and system based on a convolutional neural network
CN113225590A (zh) * 2021-05-06 2021-08-06 深圳思谋信息科技有限公司 Video super-resolution enhancement method and apparatus, computer device and storage medium
CN113610067A (zh) * 2021-10-11 2021-11-05 北京工业大学 Emotional state display method, device and system
CN113610853A (zh) * 2021-10-11 2021-11-05 北京工业大学 Emotional state display method, device and system based on resting-state brain functional images

Also Published As

Publication number Publication date
CN113610067A (zh) 2021-11-05
CN113610067B (zh) 2021-12-28

Similar Documents

Publication Publication Date Title
US10004463B2 (en) Systems, methods, and computer readable media for using descriptors to identify when a subject is likely to have a dysmorphic feature
Tarnowski et al. Eye‐tracking analysis for emotion recognition
JP5926210B2 (ja) 自閉症診断支援システム及び自閉症診断支援装置
KR20200004841A (ko) 셀피를 촬영하도록 사용자를 안내하기 위한 시스템 및 방법
WO2020121308A1 (en) Systems and methods for diagnosing a stroke condition
WO2021068781A1 (zh) 一种疲劳状态识别方法、装置和设备
US11006834B2 (en) Information processing device and information processing method
WO2023060721A1 (zh) 基于静息态脑功能图像的情绪状态展示方法、装置及系统
CN111887867A (zh) 基于表情识别与心理学测试生成性格的分析方法及系统
CN115334957A (zh) 用于对瞳孔心理感觉反应进行光学评估的系统和方法
WO2023060720A1 (zh) 情绪状态展示方法、装置及系统
Kroupi et al. Predicting subjective sensation of reality during multimedia consumption based on EEG and peripheral physiological signals
CN113647950A (zh) 心理情绪检测方法及系统
Migliorelli et al. A store-and-forward cloud-based telemonitoring system for automatic assessing dysarthria evolution in neurological diseases from video-recording analysis
Adibuzzaman et al. Assessment of pain using facial pictures taken with a smartphone
CN111048202A (zh) 一种智能化中医诊断系统及其方法
CN111341444A (zh) 智能绘画评分方法及系统
Liu et al. Multimodal behavioral dataset of depressive symptoms in chinese college students–preliminary study
Shimada et al. Real-time system for horizontal asymmetry analysis on facial expression and its visualization
Gu et al. AI-Driven Depression Detection Algorithms from Visual and Audio Cues
WO2023060719A1 (zh) 基于瞳孔波计算情绪指标的方法、装置及系统
Chen et al. A High-Quality Landmarked Infrared Eye Video Dataset (IREye4Task): Eye Behaviors, Insights and Benchmarks for Wearable Mental State Analysis
KR20160022578A (ko) 뇌파검사 장치
Ilyas et al. Inferring user facial affect in work-like settings
Yuan et al. Combining Informative Regions and Clips for Detecting Depression from Facial Expressions