CN111079465A - Emotional state comprehensive judgment method based on three-dimensional imaging analysis - Google Patents


Info

Publication number
CN111079465A
CN111079465A
Authority
CN
China
Prior art keywords
emotion
image
comprehensive judgment
dimensional
facial expression
Prior art date
Legal status
Pending
Application number
CN201811212972.1A
Other languages
Chinese (zh)
Inventor
王春雷
尉迟学彪
毛鹏轩
段志成
Current Assignee
Beijing Rostec Technology Co ltd
Original Assignee
Beijing Rostec Technology Co ltd
Priority date
Filing date
Publication date
Application filed by Beijing Rostec Technology Co ltd filed Critical Beijing Rostec Technology Co ltd
Priority to CN201811212972.1A priority Critical patent/CN111079465A/en
Publication of CN111079465A publication Critical patent/CN111079465A/en
Pending legal-status Critical Current

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00: Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10: Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16: Human faces, e.g. facial parts, sketches or expressions
    • G06V40/174: Facial expression recognition
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00: Pattern recognition
    • G06F18/20: Analysing
    • G06F18/24: Classification techniques
    • G06F18/241: Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
    • G06F18/2411: Classification techniques relating to the classification model based on the proximity to a decision surface, e.g. support vector machines
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00: Scenes; Scene-specific elements
    • G06V20/60: Type of objects
    • G06V20/64: Three-dimensional objects

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • Multimedia (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Evolutionary Biology (AREA)
  • Evolutionary Computation (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • General Engineering & Computer Science (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Artificial Intelligence (AREA)
  • Health & Medical Sciences (AREA)
  • General Health & Medical Sciences (AREA)
  • Oral & Maxillofacial Surgery (AREA)
  • Human Computer Interaction (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Measurement Of The Respiration, Hearing Ability, Form, And Blood Characteristics Of Living Organisms (AREA)

Abstract

The invention discloses a method for comprehensively judging emotional state based on three-dimensional imaging analysis. By analyzing a three-dimensional face image, the method extracts two modalities of data, facial expression and human physiological vibration, and thereby judges the user's emotional state comprehensively and with high accuracy. The method comprises four functional modules: 3D image pushing, facial expression recognition, physiological vibration emotion recognition, and comprehensive emotional state judgment. The 3D image pushing module receives and inputs three-dimensional video of the tested person; the facial expression recognition module recognizes emotion by analyzing the tested person's facial expression; the physiological vibration emotion recognition module recognizes emotion by collecting and analyzing the tested person's facial muscle vibration signal; and the comprehensive judgment module judges the tested person's emotional state from the recognition results of the two preceding modules.

Description

Emotional state comprehensive judgment method based on three-dimensional imaging analysis
Technical Field
The invention relates to the field of emotion recognition, in particular to a non-contact emotion state comprehensive judgment method based on three-dimensional imaging analysis.
Background
In recent years, with the rapid development of artificial intelligence, emotion recognition has been increasingly applied and deployed across industries. In particular, specialized sectors such as security, border inspection, customs, and the military have a strong business demand for emotion recognition technology.
Common emotion recognition methods fall into two main categories: recognition based on non-physiological signals and recognition based on physiological signals. Non-physiological approaches mainly recognize facial expressions and voice intonation. Facial expression recognition exploits the correspondence between expressions and emotions: in a given emotional state, people produce characteristic facial muscle movements and expression patterns. For example, when happy, the corners of the mouth turn up and ring-shaped wrinkles form around the eyes; when angry, the brows furrow and the eyes widen. Such patterns can be recognized by image analysis.
Emotion recognition based on physiological signals mainly comprises approaches based on the autonomic nervous system and approaches based on the central nervous system. Autonomic approaches identify emotional states by measuring signals such as muscle vibration, heart rate, skin impedance, and respiration; central nervous system approaches identify emotions by analyzing the different signals the brain emits in different emotional states. Among these physiological signals, only muscle vibration can be collected through image analysis; all other signals require the subject to wear dedicated acquisition equipment, which makes collection difficult and severely limits practical application scenarios.
At present, analyzing facial expressions or facial muscle vibration signals through image analysis is the emotion recognition technique with the widest range of application scenarios, and most image-based systems acquire and analyze two-dimensional images. However, a human face is inherently a three-dimensional object: emotion recognition from a two-dimensional face image is easily affected by pose, illumination, and viewing angle, and cannot exploit depth information, so its recognition accuracy has hit a bottleneck.
Disclosure of Invention
To overcome these shortcomings of the prior art, the invention provides a comprehensive emotional state judgment method based on three-dimensional imaging analysis. Compared with traditional emotion judgment based on two-dimensional plane images, the method performs emotion recognition on three-dimensional stereo images and can therefore effectively improve recognition accuracy.
The basic functional flow of the emotional state comprehensive judgment method based on three-dimensional imaging analysis provided by the invention is shown in Figure 1, wherein:
the 3D image pushing module receives and inputs video of the tested person captured and recorded with a 3D structured-light camera;
the facial expression recognition module is used for judging the emotion by analyzing and recognizing the facial expression of the tested person;
the physiological vibration emotion recognition module is used for judging emotion by collecting and analyzing facial muscle vibration signals of the tested person;
the emotion state comprehensive judgment module is used for judging the emotion of the tested person based on the emotion judgment results of the facial expression recognition module and the physiological vibration emotion recognition module.
Compared with the prior art, the invention has the following beneficial effect: by judging the emotional state comprehensively from two modalities of data, facial expression and physiological vibration, the method achieves higher emotion recognition accuracy.
Drawings
Fig. 1 is a flow chart showing the detailed function of the method.
Fig. 2 is a diagram showing a specific example of a facial expression recognition module in the method.
Detailed Description
The invention is further described below with reference to the accompanying drawings.
First, the 3D image pushing module receives and inputs three-dimensional video of the tested person's face and pushes the data to the facial expression recognition module and the physiological vibration emotion recognition module. The data are captured with a 3D structured-light camera at a frame rate of 25 frames per second and contain depth information about the face. The specific camera model is not limited here; those skilled in the art can choose one according to the practical application of this solution.
The facial expression recognition module analyzes the three-dimensional video pushed by the 3D image pushing module according to the correspondence between expressions and emotions (that is, a specific emotional state produces a specific expression pattern, such as upturned mouth corners and ring-shaped eye wrinkles when happy, or furrowed brows and widened eyes when angry). It recognizes six basic emotions: happiness, sadness, surprise, anger, disgust, and fear; scores the emotional states quantitatively according to the specific business scenario; and finally obtains an emotional state value M1 (0 < M1 < 1), which it pushes to the comprehensive judgment module. The implementation of this module is shown in Fig. 2. First, the 3D image input component receives the 3D image pushed by the 3D image pushing module and sends it to the face detection component. The face detection component detects the face region with a statistics-based method (for example, a face detection algorithm based on the binary wavelet transform) and sends the result to the image preprocessing component. The image preprocessing component performs geometric normalization and histogram equalization on the face-region image. The expression feature extraction component extracts expression features with a local texture algorithm (for example, LBP, the local binary pattern) and sends them to the SVM classifier. The SVM classifier judges the input features against a trained SVM model, whose construction and training can be completed on the 3D dynamic facial expression database BU-4DFE developed at Binghamton University, and outputs the result to the emotional state value calculation component. That component outputs the corresponding emotional state value according to a preset mapping between emotional states and state values; the invention does not specifically limit this mapping.
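The expression pipeline described above (LBP texture features fed to an SVM classifier) can be sketched as follows. This is a minimal illustrative sketch, not the patented implementation: the hand-rolled 8-neighbour LBP, the synthetic two-class training patches, and the RBF kernel choice are all assumptions, and a real system would train on a labelled 3D expression database such as BU-4DFE.

```python
import numpy as np
from sklearn.svm import SVC

def lbp_histogram(img):
    """Normalised histogram of basic 8-neighbour Local Binary Pattern codes.

    A simplified stand-in for the LBP features named in the text; production
    systems usually pool uniform-pattern LBP over face sub-regions rather
    than computing one global histogram.
    """
    img = np.asarray(img, dtype=np.float64)
    center = img[1:-1, 1:-1]
    h, w = img.shape
    # Neighbour offsets, clockwise from the top-left pixel.
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
               (1, 1), (1, 0), (1, -1), (0, -1)]
    codes = np.zeros(center.shape, dtype=np.uint8)
    for bit, (dy, dx) in enumerate(offsets):
        neighbour = img[1 + dy:h - 1 + dy, 1 + dx:w - 1 + dx]
        codes |= (neighbour >= center).astype(np.uint8) << bit
    hist = np.bincount(codes.ravel(), minlength=256).astype(np.float64)
    return hist / hist.sum()  # normalise so patch size does not matter

# Synthetic stand-ins for two expression classes: smooth-gradient patches
# versus noisy patches (purely illustrative training data).
rng = np.random.default_rng(0)
smooth = [np.tile(np.linspace(0, 1, 24), (24, 1)) +
          0.01 * rng.normal(size=(24, 24)) for _ in range(20)]
noisy = [rng.normal(size=(24, 24)) for _ in range(20)]
X = np.array([lbp_histogram(p) for p in smooth + noisy])
y = np.array([0] * 20 + [1] * 20)

clf = SVC(kernel="rbf", gamma="scale").fit(X, y)
print("training accuracy:", clf.score(X, y))
```

The 256-bin histogram plays the role of the expression feature vector; in the patent's flow its output would go to the emotional state value calculation component rather than being printed.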
The physiological vibration emotion recognition module collects and analyzes the tested person's facial muscle vibration signal from the three-dimensional video pushed by the 3D image pushing module, extracts physiological parameters such as muscle vibration amplitude and frequency, computes the emotional state, scores it quantitatively according to the specific business scenario, and finally obtains an emotional state value M2 (0 < M2 < 1), which it pushes to the comprehensive judgment module. The muscle vibration amplitude parameter is computed by the formula of Figure BDA0001832859930000041 and the frequency parameter by the formula of Figure BDA0001832859930000042, where x, y, and z denote the three-dimensional position of a point in the image, n is the total number of frames, V_{x,y,z,i} is the displacement amplitude of that point in frame i, and Δi is the inter-frame difference at the i-th point of the image.
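Since the amplitude and frequency formulas appear only as figure images, a hedged reconstruction of this step might look like the sketch below, where amplitude is taken as the mean absolute inter-frame depth change (the Δi differences) and frequency as the spectral peak of each point's depth signal over time. Both definitions are assumptions for illustration, not the patent's exact formulas.

```python
import numpy as np

def vibration_params(depth_frames, fps=25.0):
    """Estimate facial-vibration amplitude and dominant frequency from a
    depth-video cube of shape (n_frames, H, W).

    Illustrative reconstruction under stated assumptions:
    amplitude = mean absolute change between consecutive frames,
    frequency = peak of each pixel's temporal spectrum, averaged
    over the region.
    """
    d = np.asarray(depth_frames, dtype=np.float64)
    n = d.shape[0]
    # Delta_i analogue: difference between consecutive frames at each point.
    deltas = np.abs(np.diff(d, axis=0))
    amplitude = float(deltas.mean())
    # Dominant temporal frequency per pixel via the real FFT (skip the DC bin).
    spectrum = np.abs(np.fft.rfft(d - d.mean(axis=0), axis=0))
    freqs = np.fft.rfftfreq(n, d=1.0 / fps)
    dominant = freqs[spectrum[1:].argmax(axis=0) + 1]
    return amplitude, float(dominant.mean())

# Synthetic 8x8 face patch vibrating at 5 Hz, sampled at the stated 25 fps.
t = np.arange(50) / 25.0
frames = 0.5 * np.sin(2 * np.pi * 5.0 * t)[:, None, None] * np.ones((50, 8, 8))
amp, freq = vibration_params(frames)
print(amp, freq)
```

On this synthetic input the estimator recovers the 5 Hz vibration exactly, since the sine period aligns with an FFT bin; real depth video would give a noisier spectrum and call for windowing or band-limiting.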
The comprehensive emotional state judgment module receives the emotional state values M1 and M2 pushed by the facial expression recognition module and the physiological vibration emotion recognition module respectively, and computes the comprehensive judgment result for the tested person as M = w × M1 + (1 - w) × M2, where the weight w (0 < w < 1) can be set according to the specific business scenario.
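The weighted fusion M = w × M1 + (1 - w) × M2 is straightforward to implement; the sketch below assumes only the formula itself, with a hypothetical helper name and an arbitrary default weight of 0.5.

```python
def fuse_emotion_scores(m1, m2, w=0.5):
    """Comprehensive emotional state M = w*M1 + (1 - w)*M2.

    m1: emotional state value from facial expression recognition (0 < M1 < 1)
    m2: emotional state value from physiological vibration recognition (0 < M2 < 1)
    w:  business-scenario weight (0 < w < 1); the default 0.5, weighting the
        two modalities equally, is an illustrative choice only.
    """
    if not (0.0 < w < 1.0):
        raise ValueError("weight w must satisfy 0 < w < 1")
    return w * m1 + (1.0 - w) * m2

# Example: expression score 0.8, vibration score 0.4, equal weighting.
m = fuse_emotion_scores(0.8, 0.4)
print(m)  # 0.6 up to floating-point rounding
```

Raising w favours the expression channel, lowering it favours the vibration channel; the patent leaves this trade-off to the deployment scenario.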

Claims (7)

1. A method for comprehensively judging emotional state based on three-dimensional imaging analysis, characterized in that: two modalities of data, facial expression and human physiological vibration, are extracted by analyzing a three-dimensional face image, thereby realizing high-accuracy, comprehensive judgment of the user's emotional state; the method comprises four functional modules: 3D image pushing, facial expression recognition, physiological vibration emotion recognition, and comprehensive emotional state judgment.
2. The method of claim 1, wherein: the 3D image pushing module receives and inputs three-dimensional face video of the tested person; the video is captured with a 3D structured-light camera at a frame rate of 25 frames per second and contains depth information about the face.
3. The method of claim 1, wherein: the facial expression recognition module analyzes the tested person's three-dimensional face video, recognizes six basic emotions (happiness, sadness, surprise, anger, disgust, and fear) with an SVM emotion classifier, and quantifies an emotional state value M1 (0 < M1 < 1) according to the specific business scenario.
4. The method of claim 1, wherein: the physiological vibration emotion recognition module analyzes the tested person's three-dimensional face video to acquire two parameters of facial muscle vibration, amplitude and frequency, from which an emotional state value M2 (0 < M2 < 1) is computed.
5. The method of claim 1, wherein: the comprehensive judgment module computes the comprehensive emotional state of the tested person from the emotional state value of the facial expression recognition module and that of the physiological vibration emotion recognition module as M = w × M1 + (1 - w) × M2, where the weight w (0 < w < 1) can be set according to the specific business scenario.
6. The method of claim 3, wherein: the facial expression recognition module comprises six components: 3D image input, face detection, image preprocessing, expression feature extraction, an SVM classifier, and emotional state value calculation; the face detection component uses a statistics-based method, and the expression feature extraction component uses a local texture feature extraction algorithm.
7. The method of claim 4, wherein: the amplitude parameter is computed by the formula of Figure FDA0001832859920000021 and the frequency parameter by the formula of Figure FDA0001832859920000022, where V_{x,y,z,i} is the displacement amplitude of the point in frame i, x, y, and z denote the three-dimensional position of the point in the image, n is the total number of frames, and Δi is the inter-frame difference at the i-th point of the image.
CN201811212972.1A 2018-10-18 2018-10-18 Emotional state comprehensive judgment method based on three-dimensional imaging analysis Pending CN111079465A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201811212972.1A CN111079465A (en) 2018-10-18 2018-10-18 Emotional state comprehensive judgment method based on three-dimensional imaging analysis

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201811212972.1A CN111079465A (en) 2018-10-18 2018-10-18 Emotional state comprehensive judgment method based on three-dimensional imaging analysis

Publications (1)

Publication Number Publication Date
CN111079465A true CN111079465A (en) 2020-04-28

Family

ID=70308420

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201811212972.1A Pending CN111079465A (en) 2018-10-18 2018-10-18 Emotional state comprehensive judgment method based on three-dimensional imaging analysis

Country Status (1)

Country Link
CN (1) CN111079465A (en)

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106970743A (en) * 2017-03-27 2017-07-21 宇龙计算机通信科技(深圳)有限公司 A kind of icon sort method, device and mobile terminal
CN107220591A (en) * 2017-04-28 2017-09-29 哈尔滨工业大学深圳研究生院 Multi-modal intelligent mood sensing system
CN107972028A (en) * 2017-07-28 2018-05-01 北京物灵智能科技有限公司 Man-machine interaction method, device and electronic equipment
CN207367229U (en) * 2017-08-25 2018-05-15 太原康祺科技发展有限公司 Applied to the potential emotional intelligence analysis system device detected before particular job post

Cited By (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111563465A (en) * 2020-05-12 2020-08-21 淮北师范大学 Animal behavioristics automatic analysis system
CN111563465B (en) * 2020-05-12 2023-02-07 淮北师范大学 Animal behaviourology automatic analysis system
WO2022113380A1 (en) * 2020-11-26 2022-06-02 パナソニックIpマネジメント株式会社 Emotion assessment system, emotion assessment method, and program
CN112990008A (en) * 2021-03-13 2021-06-18 山东海量信息技术研究院 Emotion recognition method and system based on three-dimensional characteristic diagram and convolutional neural network
CN112990008B (en) * 2021-03-13 2022-06-17 山东海量信息技术研究院 Emotion recognition method and system based on three-dimensional characteristic diagram and convolutional neural network
CN115797966A (en) * 2022-10-27 2023-03-14 杭州智诺科技股份有限公司 Method, system, device and medium for collecting and identifying emotion data

Similar Documents

Publication Publication Date Title
KR102147052B1 (en) Emotional recognition system and method based on face images
Perveen et al. Spontaneous expression recognition using universal attribute model
Ball et al. Unsupervised clustering of people from'skeleton'data
Dubey et al. Automatic emotion recognition using facial expression: a review
Youssif et al. Automatic facial expression recognition system based on geometric and appearance features
CN111079465A (en) Emotional state comprehensive judgment method based on three-dimensional imaging analysis
Varghese et al. Overview on emotion recognition system
Cohn Advances in behavioral science using automated facial image analysis and synthesis [social sciences]
Murtaza et al. Analysis of face recognition under varying facial expression: a survey.
CN113158727A (en) Bimodal fusion emotion recognition method based on video and voice information
CN110472512B (en) Face state recognition method and device based on deep learning
CN110796101A (en) Face recognition method and system of embedded platform
Khatri et al. Facial expression recognition: A survey
Zhang et al. Emotion detection using Kinect 3D facial points
KR100988323B1 (en) Method and apparatus of recognizing detailed facial expression using facial expression information amplification
JP2008009728A (en) Expression recognition method and expression recognition device
Sobia et al. Facial expression recognition using PCA based interface for wheelchair
Jacintha et al. A review on facial emotion recognition techniques
Taskirar et al. Face recognition using dynamic features extracted from smile videos
Jazouli et al. A $ P recognizer for automatic facial emotion recognition using Kinect sensor
Gupta et al. A human emotion recognition system using supervised self-organising maps
CN112613430B (en) Gait recognition method based on deep migration learning
Kumar et al. Emotion recognition using anatomical information in facial expressions
Mizna et al. Blue eyes technology
Wei et al. 3D facial expression recognition based on Kinect

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
WD01 Invention patent application deemed withdrawn after publication (application publication date: 20200428)