CN107871113B - Emotion hybrid recognition detection method and device - Google Patents


Info

Publication number
CN107871113B
Authority
CN
China
Prior art keywords
emotion
significance
value
recognition
judgment value
Prior art date
Legal status
Active
Application number
CN201610863701.7A
Other languages
Chinese (zh)
Other versions
CN107871113A (en)
Inventor
刘国满
盛敬
黄志开
Current Assignee
Nanchang Institute of Technology
Original Assignee
Nanchang Institute of Technology
Priority date
Filing date
Publication date
Application filed by Nanchang Institute of Technology
Priority to CN201610863701.7A
Publication of CN107871113A
Application granted
Publication of CN107871113B
Legal status: Active

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00: Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10: Human or animal bodies, e.g. vehicle occupants or pedestrians; body parts, e.g. hands
    • G06V40/16: Human faces, e.g. facial parts, sketches or expressions
    • G06V40/174: Facial expression recognition
    • G06V40/20: Movements or behaviour, e.g. gesture recognition
    • G10: MUSICAL INSTRUMENTS; ACOUSTICS
    • G10L: SPEECH ANALYSIS OR SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING; SPEECH OR AUDIO CODING OR DECODING
    • G10L25/00: Speech or voice analysis techniques not restricted to a single one of groups G10L15/00 - G10L21/00
    • G10L25/48: ... specially adapted for particular use
    • G10L25/51: ... for comparison or discrimination
    • G10L25/63: ... for estimating an emotional state

Abstract

The invention discloses a method and apparatus for mixed emotion recognition and detection, which comprises the following steps: a first emotion unit performs image recognition on the external expressions and behaviour of the subject and computes a first judgment value and a first significance; a second emotion unit performs physiological recognition on physiological signals and computes a second judgment value and a second significance; a third emotion unit measures emotion-related parameters of the speech signal and computes a third judgment value and a third significance; a fourth emotion unit computes a fourth judgment value and a fourth significance from statistics of the number and frequency of emotion keywords in text; finally, a mixed emotion unit combines the judgment values and significances computed above into a mixed judgment value, from which the emotional state of the tested subject is judged and recognized.

Description

Emotion hybrid recognition detection method and device
Technical Field
The invention relates to the field of intelligent control, and in particular to a method and apparatus for mixed emotion recognition and detection.
Background
With the advent of emotionally intelligent robots and devices, these devices are expected to judge the emotional state of a detected subject by matching characteristic information, such as facial expressions, behaviour, physiological signals, speech signals and text, against feature libraries. Current emotion recognition methods, however, either rely on a single feature or mix several features with fixed, preset weights to compute an emotion value from which the current emotional state is judged. Such a method applies the same calculation to every subject and is therefore a static emotion recognition method. Different subjects differ in how strongly each feature changes: for a person with rich facial expressions, for example, changes of emotional state are easily judged by expression recognition, so the weight of external expression recognition should be increased for that person. Applying one fixed emotion calculation to all subjects degrades the accuracy of emotion judgment, so automatically forming a calculation method and emotion model suited to each subject is necessary for improving that accuracy.
Disclosure of Invention
In view of the above, the main objective of the present invention is to provide a novel mixed emotion recognition detection method and apparatus that overcome the loss of judgment accuracy caused by the fixed recognition detection methods currently in use.
To achieve this objective, the technical solution of the invention is realized as follows:
the invention provides a method for emotion mixed recognition detection, which comprises the following steps:
carrying out image emotion recognition on external expressions and behaviors of the recognized object to obtain a first judgment value and a first significance;
performing physiological emotion recognition on the physiological signal of the recognized object to obtain a second judgment value and a second significance;
carrying out voice emotion recognition on the voice signal of the recognized object to obtain a third judgment value and a third significance;
performing text emotion recognition on the text characters of the recognized object to obtain a fourth judgment value and a fourth significance;
and adding products obtained by multiplying the judgment values corresponding to the emotion recognition methods by the degrees of significance respectively to recognize the mixed judgment value of the object.
In the above scheme, performing image emotion recognition on the external expressions and behaviour of the recognized subject to obtain a first judgment value and a first significance specifically comprises: performing image processing and feature extraction on the subject's external expressions, such as facial expressions and behavioural actions, and matching them against an expression feature library to compute the rate at which the various emotional states are correctly judged, which is the first judgment value. The first significance represents the degree of expression change between each emotional state and the normal state; the higher the significance, the better the feature reflects the emotional state.
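To make the two quantities concrete: the patent defines significance only qualitatively, as the degree of change of a feature between an emotional state and the normal state. The sketch below is one plausible, purely illustrative way to turn that into a number; the function name and the normalization to [0, 1) are assumptions, not part of the patent.

```python
# Illustrative significance score: the relative change of a feature (smile
# intensity, heart rate, pitch, keyword frequency, ...) between an emotional
# state and the subject's normal baseline, squashed into [0, 1).

def significance(emotional_value: float, baseline_value: float) -> float:
    """Relative deviation of a feature from its baseline, mapped to [0, 1)."""
    if baseline_value == 0:
        return 0.0
    relative_change = abs(emotional_value - baseline_value) / abs(baseline_value)
    return relative_change / (1.0 + relative_change)  # bigger change, closer to 1

# A subject whose smile intensity doubles when happy is highly expressive,
# so image recognition deserves more weight for that subject.
print(round(significance(emotional_value=0.8, baseline_value=0.4), 2))  # 0.5
```

The same pattern applies to the physiological, speech and text units described below: each compares its features in the current state against the subject's normal state.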
In the above scheme, performing physiological emotion recognition on the physiological signals of the recognized subject to obtain a second judgment value and a second significance comprises: detecting physiological parameters of the subject, such as pulse and electrocardiogram, with various measurement and detection instruments, and computing the rate at which the subject's current emotional state is correctly identified, which is the second judgment value. The second significance represents the degree of change of the physiological parameters between each emotional state and the normal state.
In the above scheme, performing speech emotion recognition on the speech signal of the recognized subject to obtain a third judgment value and a third significance specifically comprises: computing, from the emotion-related information and parameters contained in the subject's speech signal, such as the variation in utterance duration, mean pitch frequency and maximum pitch frequency of the same long sentence, the rate at which the current emotional state is correctly judged, which is the third judgment value. The third significance represents the degree of change of the speech signal between the current emotional state and the normal state.
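The patent names these parameters but gives no extraction procedure. Assuming a pitch contour has already been produced by some front end (the function below and its inputs are illustrative assumptions), the three named parameters could be gathered like this:

```python
import numpy as np

def speech_emotion_features(pitch_hz: np.ndarray, frame_period_s: float) -> dict:
    """Utterance duration, mean pitch and max pitch for one long sentence.

    pitch_hz holds per-frame pitch estimates, with unvoiced frames set to 0.
    """
    voiced = pitch_hz[pitch_hz > 0]
    return {
        "duration_s": len(pitch_hz) * frame_period_s,
        "pitch_mean_hz": float(voiced.mean()) if voiced.size else 0.0,
        "pitch_max_hz": float(voiced.max()) if voiced.size else 0.0,
    }

# Comparing these values for the same sentence spoken in the current state
# and in the normal state yields the variation the third unit relies on.
print(speech_emotion_features(np.array([0.0, 210.0, 220.0, 0.0, 235.0]), 0.01))
```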
In the above scheme, performing text emotion recognition on the written characters of the recognized subject to obtain a fourth judgment value and a fourth significance specifically comprises: computing, from the number of times and frequency with which emotion keywords appear in the subject's written characters or text, the rate at which the current emotional state is correctly judged, which is the fourth judgment value. The fourth significance represents the change in the number of occurrences of emotion keywords between the current emotional state and the normal state.
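A minimal sketch of these keyword statistics follows; the tiny lexicon and the whitespace tokenization are stand-ins for whatever a real system would use, not anything specified by the patent.

```python
from collections import Counter

# Hypothetical emotion-keyword lexicon; a real system would use a curated one.
EMOTION_KEYWORDS = {"happy", "sad", "angry", "afraid", "love", "hate"}

def keyword_stats(text: str) -> tuple[int, float]:
    """Return (occurrence count, frequency) of emotion keywords in the text."""
    tokens = [t.strip(".,!?") for t in text.lower().split()]
    counts = Counter(tokens)
    hits = sum(counts[k] for k in EMOTION_KEYWORDS)
    return hits, (hits / len(tokens) if tokens else 0.0)

count, freq = keyword_stats("I am so happy today, really happy!")
print(count, round(freq, 2))  # 2 occurrences in 7 tokens: "2 0.29"
```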
In the above scheme, summing the products of each recognition method's judgment value and its corresponding significance specifically comprises: taking the product of each emotion recognition method's judgment value and its corresponding significance as that method's weight value, and summing the weight values of the several methods to calculate the mixed judgment value, from which a correct judgment of the emotional state is made.
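In symbols, the fusion is H = P1*S1 + P2*S2 + P3*S3 + P4*S4, where Pi is the i-th judgment value and Si its significance. A minimal sketch, with illustrative numbers that are not from the patent:

```python
def mixed_judgment(judgments: list[float], significances: list[float]) -> float:
    """Sum of judgment value times significance over all recognition units."""
    assert len(judgments) == len(significances)
    return sum(p * s for p, s in zip(judgments, significances))

# Four units: image, physiological, speech, text. A subject with expressive
# facial features (high first significance) lets image recognition dominate.
h = mixed_judgment([0.90, 0.70, 0.60, 0.50], [0.80, 0.30, 0.20, 0.10])
print(round(h, 2))  # 0.72 + 0.21 + 0.12 + 0.05 = 1.1
```

Because each significance is computed per subject, the weights adapt automatically; this is the dynamic recognition the Background section calls for.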
In the above scheme, the mixed emotion recognition detection method is not limited to the four types of emotion recognition listed above and may include other types as well; the four types of emotion recognition methods are parallel, with no required order.
The invention also provides an apparatus for mixed emotion recognition detection, which comprises:
a first emotion unit, which performs image emotion recognition on the external expressions and behaviour of the recognized subject and calculates a first judgment value and a first significance;
a second emotion unit, which performs physiological emotion recognition on the physiological signals of the recognized subject and calculates a second judgment value and a second significance;
a third emotion unit, which performs speech emotion recognition on the emotion-related parameters of the recognized subject's speech signal and calculates a third judgment value and a third significance;
a fourth emotion unit, which performs text emotion recognition on the number and frequency of emotion keywords in the recognized subject's characters or text and calculates a fourth judgment value and a fourth significance;
and a mixed emotion unit, which multiplies each of the four units' judgment values by its corresponding significance and sums the products to calculate a mixed judgment value, from which a correct judgment of the emotional state is made.
In the apparatus, the first, second, third and fourth emotion units are in a parallel relationship; no order among them is required.
Drawings
FIG. 1 is a schematic flow diagram of the mixed emotion recognition detection method according to the present invention;
FIG. 2 is a schematic diagram of the module structure of the mixed emotion recognition detection apparatus according to the present invention.
Detailed Description
The invention provides a method and apparatus for mixed emotion recognition detection that set a dynamic recognition method and algorithm for each detected subject, according to how strongly each of the subject's features changes in each emotional state, thereby improving the accuracy and correctness of emotion recognition. The specific procedure, shown in FIG. 1, is as follows:
1. The emotion recognition detection apparatus collects and processes characteristic information of the detected subject, such as external expressions, physiological signals, speech signals and text, in each emotional state.
2. The first emotion unit in the apparatus matches and recognizes the subject's external expression features, calculates a first judgment value for the current emotional state, and calculates a corresponding first significance from how much the expression features change between the current emotional state and the normal state.
3. The second emotion unit detects and measures the subject's physiological signal features, such as electrocardiogram and pulse, calculates a second judgment value for the current emotional state, and calculates a corresponding second significance from the change of the physiological signals between the current emotional state and the normal state.
4. The third emotion unit measures and counts emotion-related parameters of the subject's speech signal, such as long-sentence duration, mean pitch frequency and maximum pitch frequency, calculates a third judgment value for the current emotional state, and calculates a corresponding third significance from how much these parameters change between the current emotional state and the normal state.
5. The fourth emotion unit calculates a fourth judgment value for the current emotional state from the number and frequency of emotion keywords in text written or spoken by the subject, and calculates a corresponding fourth significance from how much these counts change between the current emotional state and the normal state.
6. No precedence relationship exists among the four emotion units when processing their data; other types of emotion recognition methods and emotion units may also be included.
7. The mixed emotion unit multiplies each unit's judgment value by its corresponding significance, sums the products to calculate the mixed judgment value for the emotional state, and judges the current emotional state from it; a schematic sketch of this composition follows.
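Purely as an illustration of the module structure (the unit interfaces below are assumptions, since the patent specifies the data flow but no concrete API), the four parallel units and the mixed unit could be composed like this:

```python
from dataclasses import dataclass
from typing import Callable, Sequence

@dataclass
class EmotionUnit:
    """One recognition channel returning (judgment value, significance)."""
    name: str
    recognize: Callable[[object], tuple[float, float]]

def mixed_recognition(units: Sequence[EmotionUnit], observation: object) -> float:
    """Mixed judgment value: sum over units of judgment times significance."""
    total = 0.0
    for unit in units:
        judgment, sig = unit.recognize(observation)
        total += judgment * sig
    return total

# Stub recognizers standing in for the image, physiological, speech and text
# units; each would internally compare the observation against the subject's
# normal-state baseline to produce its significance. Order is irrelevant,
# matching step 6 above.
units = [
    EmotionUnit("image", lambda obs: (0.9, 0.8)),
    EmotionUnit("physiological", lambda obs: (0.7, 0.3)),
    EmotionUnit("speech", lambda obs: (0.6, 0.2)),
    EmotionUnit("text", lambda obs: (0.5, 0.1)),
]
print(round(mixed_recognition(units, observation=None), 2))  # 1.1
```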

Claims (8)

1. A mixed emotion recognition detection method is characterized by comprising the following steps:
performing image processing and feature extraction on the facial expressions and behavioural actions of the recognized subject, and matching them against a feature library to obtain a first judgment value and a first significance, wherein the first judgment value is the rate at which the various emotional states are correctly judged from the facial expressions and behavioural actions, and the first significance represents the degree of expression change between each emotional state and the normal state, a higher significance indicating that the feature better reflects the emotional state;
detecting and extracting the pulse and electrocardiogram physiological signals of the recognized subject to obtain a second judgment value and a second significance;
detecting and calculating the variation of the utterance duration, mean pitch frequency and maximum pitch frequency of the same long sentence in the speech signal of the recognized subject to obtain a third judgment value and a third significance;
detecting and calculating the number of times or frequency with which emotion keywords appear in the written characters or text of the recognized subject to obtain a fourth judgment value and a fourth significance;
and summing the products of the judgment values of the four emotion recognition methods and their corresponding significances to calculate the mixed judgment value of the subject.
2. The mixed emotion recognition detection method according to claim 1, wherein the second judgment value is the rate at which the emotional state is correctly recognized from the detected pulse and electrocardiogram physiological signals, and the second significance represents the degree of change of the physiological parameters between each emotional state and the normal state.
3. The mixed emotion recognition detection method according to claim 1, wherein the third judgment value is the rate at which the emotional state of the recognized subject is correctly judged from the variation of the utterance duration, mean pitch frequency and maximum pitch frequency of the same long sentence in the subject's speech signal, and the third significance represents the degree of change of the speech signal between the current emotional state and the normal state.
4. The mixed emotion recognition detection method according to claim 1, wherein the fourth judgment value is the rate at which the emotional state is correctly recognized from the number of times or frequency with which emotion keywords appear in the written characters or text of the recognized subject, and the fourth significance represents the change in that number or frequency between the current emotional state and the normal state.
5. The mixed emotion recognition detection method according to claim 1, wherein calculating the mixed judgment value of the subject specifically comprises: taking the product of each emotion recognition method's judgment value and its corresponding significance as that method's weight value, and summing the weight values of the several emotion recognition methods to calculate the mixed judgment value, from which a correct judgment of the emotional state is made.
6. The mixed emotion recognition detection method according to claim 1, wherein the four emotion recognition methods are in a parallel relationship, with no required order among them.
7. A mixed emotion recognition detection apparatus, characterized by comprising:
a first emotion unit, which performs image processing and feature extraction on the facial expressions and behavioural actions of the recognized subject, matches them against a feature library, and calculates a first judgment value and a first significance, wherein the first judgment value is the rate at which the various emotional states are correctly judged from the facial expressions and behavioural actions, and the first significance represents the degree of expression change between each emotional state and the normal state, a higher significance indicating that the feature better reflects the emotional state;
a second emotion unit, which detects and extracts the pulse and electrocardiogram physiological signals of the recognized subject and calculates a second judgment value and a second significance;
a third emotion unit, which detects and calculates the variation of the utterance duration, mean pitch frequency and maximum pitch frequency of the same long sentence in the speech signal of the recognized subject to obtain a third judgment value and a third significance;
a fourth emotion unit, which detects and calculates the number of times or frequency with which emotion keywords appear in the written characters or text of the recognized subject to obtain a fourth judgment value and a fourth significance;
and a mixed emotion unit, which multiplies each of the four units' judgment values by its corresponding significance and sums the products to calculate a mixed judgment value, from which a correct judgment of the emotional state is made.
8. The mixed emotion recognition detection apparatus according to claim 7, wherein the first emotion unit, the second emotion unit, the third emotion unit and the fourth emotion unit are in a parallel relationship, with no required order among them.
CN201610863701.7A (filed 2016-09-22, priority 2016-09-22): Emotion hybrid recognition detection method and device. Granted as CN107871113B. Status: Active.

Priority Applications (1)

Application Number: CN201610863701.7A
Priority Date: 2016-09-22
Filing Date: 2016-09-22
Title: Emotion hybrid recognition detection method and device


Publications (2)

Publication Number: CN107871113A (en), Publication Date: 2018-04-03
Publication Number: CN107871113B, Publication Date: 2021-06-25

Family

ID=61762009

Family Applications (1)

Application Number: CN201610863701.7A (priority 2016-09-22, filed 2016-09-22)
Title: Emotion hybrid recognition detection method and device
Status: Active; granted as CN107871113B (en)

Country Status (1)

Country Link
CN (1) CN107871113B (en)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109783669A (en) * 2019-01-21 2019-05-21 美的集团武汉制冷设备有限公司 Screen methods of exhibiting, robot and computer readable storage medium
CN110085229A (en) * 2019-04-29 2019-08-02 珠海景秀光电科技有限公司 Intelligent virtual foreign teacher information interacting method and device


Patent Citations (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102737013A (en) * 2011-04-02 2012-10-17 三星电子(中国)研发中心 Device and method for identifying statement emotion based on dependency relation
CN102622527A (en) * 2012-04-13 2012-08-01 西南大学 Taboo searching method for selection of galvanic skin response signal features
US20150206011A1 (en) * 2012-10-31 2015-07-23 Microsoft Technology Licensing, Llc Wearable emotion detection and feedback system
CN104240720A (en) * 2013-06-24 2014-12-24 北京大学深圳研究生院 Voice emotion recognition method based on multi-fractal and information fusion
CN103456314A (en) * 2013-09-03 2013-12-18 广州创维平面显示科技有限公司 Emotion recognition method and device
CN103488293A (en) * 2013-09-12 2014-01-01 北京航空航天大学 Man-machine motion interaction system and method based on expression recognition
CN104538043A (en) * 2015-01-16 2015-04-22 北京邮电大学 Real-time emotion reminder for call
CN105205043A (en) * 2015-08-26 2015-12-30 苏州大学张家港工业技术研究院 Classification method and system of emotions of news readers
CN105260745A (en) * 2015-09-30 2016-01-20 西安沧海网络科技有限公司 Information push service system capable of carrying out emotion recognition and prediction based on big data

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
Research on Text-Dependent Speech Emotion Recognition; 叶斌; China Master's Theses Full-text Database, Information Science and Technology Series; 2013-05-15 (No. 5); I136-94 *
Research on Feature Extraction and Recognition Methods for Speech Emotion; 毛启容; Wanfang Data Knowledge Service Platform; 2011-08-03; pp. 1-132 *

Also Published As

Publication number Publication date
CN107871113A (en) 2018-04-03

Similar Documents

Publication / Title
Dahake et al. Speaker dependent speech emotion recognition using MFCC and Support Vector Machine
CN103578468B (en) The method of adjustment and electronic equipment of a kind of confidence coefficient threshold of voice recognition
CN109473123A (en) Voice activity detection method and device
CN104318921B (en) Segment cutting detection method and system, method and system for evaluating spoken language
CN103559892B (en) Oral evaluation method and system
CN108389573B (en) Language identification method and device, training method and device, medium and terminal
CN110570873B (en) Voiceprint wake-up method and device, computer equipment and storage medium
CN108537702A (en) Foreign language teaching evaluation information generation method and device
JP2017156854A (en) Speech semantic analysis program, apparatus and method for improving comprehension accuracy of context semantic through emotion classification
CN105895078A (en) Speech recognition method used for dynamically selecting speech model and device
CN109686383B (en) Voice analysis method, device and storage medium
CN103559894A (en) Method and system for evaluating spoken language
US10311865B2 (en) System and method for automated speech recognition
CN111724770B (en) Audio keyword identification method for generating confrontation network based on deep convolution
CN107886968B (en) Voice evaluation method and system
Esmaili et al. Automatic classification of speech dysfluencies in continuous speech based on similarity measures and morphological image processing tools
Fulmare et al. Understanding and estimation of emotional expression using acoustic analysis of natural speech
CN111901627B (en) Video processing method and device, storage medium and electronic equipment
CN101231848A (en) Method for performing pronunciation error detecting based on holding vector machine
CN110085216A (en) A kind of vagitus detection method and device
CN109166569B (en) Detection method and device for phoneme mislabeling
CN109215647A (en) Voice awakening method, electronic equipment and non-transient computer readable storage medium
Gong et al. Vocalsound: A dataset for improving human vocal sounds recognition
CN109872714A (en) A kind of method, electronic equipment and storage medium improving accuracy of speech recognition
CN103578480B (en) The speech-emotion recognition method based on context correction during negative emotions detects

Legal Events

Code  Description
PB01  Publication
SE01  Entry into force of request for substantive examination
GR01  Patent grant