CN107871113A - Method and apparatus for hybrid emotion recognition and detection - Google Patents

Method and apparatus for hybrid emotion recognition and detection

Info

Publication number
CN107871113A
Authority
CN
China
Prior art keywords
emotion
judgment value
identification
significance degree
mixing
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201610863701.7A
Other languages
Chinese (zh)
Other versions
CN107871113B (en)
Inventor
刘国满
盛敬
黄志开
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Nanchang Institute of Technology
Original Assignee
Nanchang Institute of Technology
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Nanchang Institute of Technology filed Critical Nanchang Institute of Technology
Priority to CN201610863701.7A priority Critical patent/CN107871113B/en
Publication of CN107871113A publication Critical patent/CN107871113A/en
Application granted granted Critical
Publication of CN107871113B publication Critical patent/CN107871113B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Links

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10 Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16 Human faces, e.g. facial parts, sketches or expressions
    • G06V40/174 Facial expression recognition
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/20 Movements or behaviour, e.g. gesture recognition
    • G PHYSICS
    • G10 MUSICAL INSTRUMENTS; ACOUSTICS
    • G10L SPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
    • G10L25/00 Speech or voice analysis techniques not restricted to a single one of groups G10L15/00 - G10L21/00
    • G10L25/48 Speech or voice analysis techniques not restricted to a single one of groups G10L15/00 - G10L21/00 specially adapted for particular use
    • G10L25/51 Speech or voice analysis techniques not restricted to a single one of groups G10L15/00 - G10L21/00 specially adapted for particular use for comparison or discrimination
    • G10L25/63 Speech or voice analysis techniques not restricted to a single one of groups G10L15/00 - G10L21/00 specially adapted for particular use for comparison or discrimination for estimating an emotional state

Landscapes

  • Engineering & Computer Science (AREA)
  • Health & Medical Sciences (AREA)
  • Human Computer Interaction (AREA)
  • Multimedia (AREA)
  • General Health & Medical Sciences (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Psychiatry (AREA)
  • Theoretical Computer Science (AREA)
  • Social Psychology (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Oral & Maxillofacial Surgery (AREA)
  • Child & Adolescent Psychology (AREA)
  • Hospice & Palliative Care (AREA)
  • Computational Linguistics (AREA)
  • Signal Processing (AREA)
  • Audiology, Speech & Language Pathology (AREA)
  • Acoustics & Sound (AREA)
  • Measurement Of The Respiration, Hearing Ability, Form, And Blood Characteristics Of Living Organisms (AREA)

Abstract

The present invention outlines a method and apparatus for hybrid emotion recognition and detection. Specifically: a first emotion unit performs image recognition on external expressions and behavior, and calculates a first judgment value and a first significance degree; a second emotion unit performs physiological recognition on physiological signals, and calculates a second judgment value and a second significance degree; a third emotion unit measures and calculates the relevant parameters of the speech signal, and calculates a third judgment value and a third significance degree; a fourth emotion unit counts the number and frequency of emotion keywords in text, and calculates a fourth judgment value and a fourth significance degree; finally, a mixing emotion unit calculates a mixed judgment value from the judgment values and significance degrees computed above, so that the emotional state of the measured subject can be judged and identified.

Description

Method and apparatus for hybrid emotion recognition and detection
Technical field
The present invention relates to the field of intelligent control, and in particular to a method and apparatus for hybrid emotion recognition and detection.
Background art
With the emergence of emotionally intelligent robots and devices, it is desirable that such devices can judge the emotional state of a detected subject by matching characteristic information — facial expressions, actions and behavior, physiological signals, speech signals, text, and the like — against a feature library. However, current emotion recognition methods mainly select a single feature, or mix several recognition features under preset weights into a single emotion value, to judge the subject's current emotional state. Such methods apply the same calculation and judgment procedure to every subject; they are static emotion recognition methods. In reality, subjects differ in how conspicuously each of their features changes. For example, for a person with rich facial expressions, emotional changes are easy to judge through expression recognition, so for such a person the proportion assigned to external expression recognition should be increased. Applying one fixed emotion calculation method to all subjects therefore harms the accuracy of emotion judgment. It is thus highly necessary to propose a method that automatically forms a corresponding emotion calculation method and model for each subject, improving the accuracy of emotion judgment.
Summary of the invention
In view of this, the main object of the present invention is to provide a new method and apparatus for hybrid emotion recognition and detection, so as to overcome the loss of judgment accuracy caused by the fixed emotion recognition methods currently employed.
To achieve the above object, the technical solution of the present invention is realized as follows:
The present invention provides a method of hybrid emotion recognition and detection, the method comprising:
performing image emotion recognition on the external expressions and behavior of the object to be identified, to obtain a first judgment value and a first significance degree;
performing physiological emotion recognition on the physiological signals of the object to be identified, to obtain a second judgment value and a second significance degree;
performing speech emotion recognition on the speech signal of the object to be identified, to obtain a third judgment value and a third significance degree;
performing text emotion recognition on the text of the object to be identified, to obtain a fourth judgment value and a fourth significance degree;
adding the products of the judgment value and the significance degree corresponding to each emotion recognition method, to obtain the mixed judgment value of the identified object.
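The fusion rule above can be sketched in a few lines of code. This is an illustrative sketch, not part of the original disclosure; the function name, the example judgment values, and the significance degrees are all hypothetical.

```python
# Illustrative sketch of the mixing rule: each recognition channel yields a
# (judgment value, significance degree) pair, and the mixed judgment value
# is the sum of their products. All names and numbers are hypothetical.

def mixed_judgment(channels):
    """channels: iterable of (judgment_value, significance_degree) pairs."""
    return sum(j * s for j, s in channels)

# Example with the four channels named above: image, physiological, speech, text.
channels = [
    (0.80, 0.9),  # image emotion recognition
    (0.60, 0.4),  # physiological emotion recognition
    (0.70, 0.7),  # speech emotion recognition
    (0.50, 0.3),  # text emotion recognition
]
score = mixed_judgment(channels)  # 0.72 + 0.24 + 0.49 + 0.15 = 1.60
```

A channel whose features barely change for a given subject (low significance degree) thus contributes little, which is exactly the per-subject weighting the method aims for.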
In the above scheme, performing image emotion recognition on the external expressions and behavior of the object to be identified to obtain the first judgment value and the first significance degree specifically comprises: performing image processing and feature extraction on the external expressions of the object, for example facial expressions and behavioral actions, and matching them against an expression feature library, so as to calculate the correct judgment rate for the various emotional states; this is the first judgment value. The first significance degree represents the degree to which the expression changes between the various emotional states and the normal state; the higher the significance degree, the better the feature reflects the emotional state.
In the above scheme, performing physiological emotion recognition on the physiological signals of the object to be identified to obtain the second judgment value and the second significance degree further comprises: detecting various physiological parameters of the object, for example pulse and electrocardiogram, through various measurement and detection means, to obtain the accuracy with which the object's current emotional state is identified; this is the second judgment value. The second significance degree represents the degree to which the physiological parameters change between the various emotional states and the normal state.
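One plausible way to quantify such a significance degree is as the relative deviation of a measured physiological parameter from its resting baseline, clipped to the range [0, 1]. This is an illustrative sketch only; the baseline value, the scaling constant, and the clipping scheme are assumptions, not taken from the patent.

```python
# Hypothetical significance degree for a physiological parameter: relative
# deviation from a resting baseline, scaled so that a change of
# max_relative_change (50% by default) maps to significance 1.0.

def significance(measured, baseline, max_relative_change=0.5):
    """Return a significance degree in [0, 1] for one physiological parameter."""
    rel = abs(measured - baseline) / baseline
    return min(rel / max_relative_change, 1.0)

# Example: pulse of 96 bpm against a resting baseline of 72 bpm (~33% above).
pulse_sig = significance(measured=96.0, baseline=72.0)
```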
In the above scheme, performing speech emotion recognition on the speech signal of the object to be identified to obtain the third judgment value and the third significance degree specifically comprises: using the emotion-related information and parameters contained in the object's speech signal, for example changes in the pronunciation duration of the same long sentence and in the mean and maximum fundamental frequency, to judge the accuracy of the current emotional state; this is the third judgment value. The third significance degree represents the degree to which the speech signal changes between the current emotional state and the normal state.
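The speech parameters named above (utterance duration, mean and maximum fundamental frequency) can be computed from a fundamental-frequency (f0) contour sampled at a fixed frame rate. The function name, frame rate, and example contour below are illustrative assumptions, not from the patent.

```python
# Sketch: compute (duration, mean f0, max f0) from an f0 contour in Hz,
# one value per fixed-length frame; frames with f0 == 0 are unvoiced.

def speech_params(f0_contour, frame_seconds=0.01):
    """Return (duration_s, mean_f0, max_f0); f0 statistics over voiced frames."""
    voiced = [f for f in f0_contour if f > 0]
    duration = len(f0_contour) * frame_seconds
    return duration, sum(voiced) / len(voiced), max(voiced)

# Example contour: 6 frames at 10 ms each, first and last unvoiced.
dur, mean_f0, max_f0 = speech_params([0, 180, 200, 220, 200, 0])
```

Comparing these parameters for the same sentence spoken in the current state and in the normal state gives the change magnitudes the scheme relies on.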
In the above scheme, performing text emotion recognition on the text of the object to be identified to obtain the fourth judgment value and the fourth significance degree specifically comprises: judging the accuracy of the currently identified object's emotional state from the number or frequency of emotion keywords appearing in the words or text written by the object; this is the fourth judgment value. The fourth significance degree represents how the number of emotion keywords appearing in words or text changes between the current emotional state and the normal state.
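The keyword statistic described above amounts to counting emotion keywords in a text and reporting their frequency relative to the total word count. A minimal sketch follows; the keyword set and tokenization are stand-in assumptions, not from the patent.

```python
# Sketch: count emotion keywords in a text and their relative frequency.
# The keyword set is illustrative; real use would need a curated lexicon.

def emotion_keyword_stats(text, keywords):
    """Return (count, frequency) of emotion keywords in the text."""
    words = text.lower().split()
    count = sum(1 for w in words if w.strip('.,!?') in keywords)
    return count, count / len(words)

count, freq = emotion_keyword_stats(
    "I am so happy, really happy and excited today", {"happy", "excited"})
```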
In the above scheme, adding the products of the judgment value and the significance degree corresponding to each emotion recognition method to calculate the mixed judgment value of the object specifically comprises: taking the product of each emotion recognition method's judgment value and its corresponding significance degree as that method's weight value, and then adding the weight values of the various emotion recognition methods, to calculate the mixed judgment value and thereby make a correct judgment of the emotional state.
In the above scheme, the method of hybrid emotion recognition and detection is not limited to the four types of emotion recognition method listed above and may also include other types of emotion recognition method; the four emotion recognition methods are peers of equal standing.
The present invention also provides an apparatus for hybrid emotion recognition and detection, the apparatus comprising:
a first emotion unit, which mainly performs image emotion recognition on the external expressions and behavior of the object to be identified, to calculate the first judgment value and the first significance degree of emotion recognition;
a second emotion unit, which mainly performs physiological emotion recognition on the physiological signals of the object to be identified, to calculate the second judgment value and the second significance degree of emotion recognition;
a third emotion unit, which mainly performs speech emotion recognition on the emotion-related parameters of the speech signal of the object to be identified, to calculate the third judgment value and the third significance degree of emotion recognition;
a fourth emotion unit, which mainly performs text emotion recognition according to the number or frequency of emotion keywords appearing in the words or text of the object to be identified, to calculate the fourth judgment value and the fourth significance degree of emotion recognition;
a mixing emotion unit, which multiplies the judgment value calculated by each of the four emotion units above by the corresponding significance degree, and adds the products to calculate the mixed judgment value, thereby making a correct judgment of the emotional state.
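Structurally, the apparatus can be pictured as four peer recognition units feeding one mixing unit. The sketch below is illustrative only; the class and method names are hypothetical, and each unit's judgment value and significance degree are fixed here rather than computed from real signals.

```python
# Structural sketch of the apparatus: four peer emotion units, each producing
# a (judgment value, significance degree) pair, and a mixing unit that sums
# the products. Names and numbers are hypothetical.

class EmotionUnit:
    def __init__(self, name, judgment, significance):
        self.name = name
        self.judgment = judgment
        self.significance = significance

    def evaluate(self):
        """Return this unit's (judgment value, significance degree)."""
        return self.judgment, self.significance

class MixingUnit:
    def __init__(self, units):
        self.units = units  # peer units; no required order of precedence

    def mixed_judgment(self):
        return sum(j * s for j, s in (u.evaluate() for u in self.units))

device = MixingUnit([
    EmotionUnit("image", 0.80, 0.9),
    EmotionUnit("physiological", 0.60, 0.4),
    EmotionUnit("speech", 0.70, 0.7),
    EmotionUnit("text", 0.50, 0.3),
])
```

Because the units are peers, more unit types can be appended to the list without changing the mixing logic, matching the claim that the apparatus is not limited to these four units.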
In the above apparatus, the first, second, third, and fourth emotion units are peers with no required order of precedence; furthermore, the apparatus is not limited to these four emotion units and may also include other types of emotion unit.
Brief description of the drawings
Fig. 1 mixes recognition detection procedure schematic diagram for a kind of emotion of the present invention;
Fig. 2 mixes identification and detection device modular structure schematic diagram for a kind of emotion of the present invention;
Detailed description of the embodiments
The method and apparatus for hybrid emotion recognition and detection of the present invention set up a dynamic emotion recognition and detection method and algorithm according to the differing degrees to which the various features of different detected subjects change under the various emotional states, thereby improving the accuracy and correctness of emotion recognition. The specific operating process is shown in Fig. 1:
1. The emotion recognition and detection apparatus collects and processes the characteristic information of the detected object under the various emotional states, such as external expressions, physiological signals, speech signals, and text.
2. The first emotion unit in the apparatus matches and identifies the external expression characteristics of the detected object, calculates the first judgment value of the current emotional state, and calculates the corresponding first significance degree from the magnitude of the change in the object's external expression features between the current emotional state and the normal state.
3. The second emotion unit detects and measures the physiological signal features of the detected object, for example the person's electrocardiogram and pulse, calculates the second judgment value of the current emotional state, and calculates the corresponding second significance degree from the magnitude of the change in the object's physiological signals between the current emotional state and the normal state.
4. The third emotion unit calculates and gathers statistics on the emotion-related parameters of the detected object's speech signal, such as the duration of long sentences and the mean and maximum fundamental frequency, calculates the third judgment value of the current emotional state, and calculates the corresponding third significance degree from the magnitude of the change in those parameters between the current emotional state and the normal state.
5. The fourth emotion unit counts the frequency and number of emotion keywords in the text written or spoken by the detected object, calculates the fourth judgment value of the current emotional state, and calculates the corresponding fourth significance degree from the change in parameters such as the number and frequency of emotion keywords appearing in the text between the current emotional state and the normal state.
6. There is no required order of data processing among the four emotion units above; in addition, other types of emotion recognition method and emotion unit may also be included.
7. The mixing emotion unit multiplies each judgment value calculated by the four emotion units above by the corresponding significance degree, adds the products to calculate the mixed judgment value under the emotional state, and judges the current emotional state from the mixed judgment value.

Claims (9)

  1. A method of hybrid emotion recognition and detection, characterized in that the method comprises:
    performing image emotion recognition on the external expressions and behavior of the object to be identified, to obtain a first judgment value and a first significance degree;
    performing physiological emotion recognition on the physiological signals of the object to be identified, to obtain a second judgment value and a second significance degree;
    performing speech emotion recognition on the speech signal of the object to be identified, to obtain a third judgment value and a third significance degree;
    performing text emotion recognition on the text of the object to be identified, to obtain a fourth judgment value and a fourth significance degree;
    adding the products of the judgment value and the significance degree corresponding to each emotion recognition method, to calculate the mixed judgment value of the object.
  2. The method of hybrid emotion recognition and detection according to claim 1, characterized in that performing image emotion recognition on the external expressions and behavior of the object to be identified to obtain the first judgment value and the first significance degree specifically comprises: performing image processing and feature extraction on the external expressions of the object, for example facial expressions and behavioral actions, and matching them against an expression feature library, so as to calculate the correct judgment rate for the various emotional states, which is the first judgment value; and the first significance degree represents the degree to which the expression changes between the various emotional states and the normal state, a higher significance degree indicating that the feature better reflects the emotional state.
  3. The method of hybrid emotion recognition and detection according to claim 1, characterized in that performing physiological emotion recognition on the physiological signals of the object to be identified to obtain the second judgment value and the second significance degree further comprises: detecting various physiological parameters of the object, for example pulse and electrocardiogram, through various measurement and detection means, to obtain the accuracy with which the object's current emotional state is identified, which is the second judgment value; and the second significance degree represents the degree to which the physiological parameters change between the various emotional states and the normal state.
  4. The method of hybrid emotion recognition and detection according to claim 1, characterized in that performing speech emotion recognition on the speech signal of the object to be identified to obtain the third judgment value and the third significance degree specifically comprises: using the emotion-related information and parameters contained in the object's speech signal, for example changes in the pronunciation duration of the same long sentence and in the mean and maximum fundamental frequency, to judge the accuracy of the current emotional state, which is the third judgment value; and the third significance degree represents the degree to which the speech signal changes between the current emotional state and the normal state.
  5. The method of hybrid emotion recognition and detection according to claim 1, characterized in that performing text emotion recognition on the text of the object to be identified to obtain the fourth judgment value and the fourth significance degree specifically comprises: judging the accuracy of the currently identified object's emotional state from the number or frequency of emotion keywords appearing in the words or text written by the object, which is the fourth judgment value; and the fourth significance degree represents how the number of emotion keywords appearing in words or text changes between the current emotional state and the normal state.
  6. The method of hybrid emotion recognition and detection according to claim 1, characterized in that adding the products of the judgment value and the significance degree corresponding to each emotion recognition method to calculate the mixed judgment value of the object specifically comprises: taking the product of each emotion recognition method's judgment value and its corresponding significance degree as that method's weight value, and then adding the weight values of the various emotion recognition methods, to calculate the mixed judgment value and thereby make a correct judgment of the emotional state.
  7. The method of hybrid emotion recognition and detection according to claim 1, characterized in that the method is not limited to the four types of emotion recognition method recited in claim 1 and may also include other types of emotion recognition method; the four emotion recognition methods are peers with no required order of precedence.
  8. 8. a kind of device of emotion mixing recognition detection, it is characterised in that the emotion mixing identification and detection device includes:
    First emotion unit, affection recognition of image is mainly carried out according to the external expression of institute's identification object and behavior, to calculate feelings Other first judgment value of perception and the first significant degree;
    Second emotion unit, physiological emotion identification is mainly carried out according to the physiological signal of institute's identification object, known to calculate emotion Other second judgment value and the second significant degree;
    3rd emotion unit, speech emotion recognition is mainly carried out according to the voice signal correlation emotion parameter of institute's identification object, To calculate the 3rd judgment value of emotion recognition and the 3rd significant degree;
    4th emotion unit, mainly according in the word or text of institute's identification object emotion keyword occur times or frequency come Text emotion identification is carried out, to calculate the 4th judgment value of emotion recognition and the 4th significant degree;
    Mixed feeling unit, aforementioned four emotion unit is mainly calculated into judgment value and significant degree multiplied result phase respectively Add, to calculate mixing judgment value, to make the correct judgement of affective state.
  9. The apparatus for hybrid emotion recognition and detection according to claim 8, characterized in that the first, second, third, and fourth emotion units are peers with no required order of precedence; furthermore, the apparatus is not limited to these four emotion units and may also include other types of emotion unit.
CN201610863701.7A 2016-09-22 2016-09-22 Emotion hybrid recognition detection method and device Active CN107871113B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201610863701.7A CN107871113B (en) 2016-09-22 2016-09-22 Emotion hybrid recognition detection method and device


Publications (2)

Publication Number Publication Date
CN107871113A true CN107871113A (en) 2018-04-03
CN107871113B CN107871113B (en) 2021-06-25

Family

ID=61762009

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201610863701.7A Active CN107871113B (en) 2016-09-22 2016-09-22 Emotion hybrid recognition detection method and device

Country Status (1)

Country Link
CN (1) CN107871113B (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109783669A (en) * 2019-01-21 2019-05-21 美的集团武汉制冷设备有限公司 Screen methods of exhibiting, robot and computer readable storage medium
CN110085229A (en) * 2019-04-29 2019-08-02 珠海景秀光电科技有限公司 Intelligent virtual foreign teacher information interacting method and device

Citations (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102622527A (en) * 2012-04-13 2012-08-01 西南大学 Taboo searching method for selection of galvanic skin response signal features
CN102737013A (en) * 2011-04-02 2012-10-17 三星电子(中国)研发中心 Device and method for identifying statement emotion based on dependency relation
CN103456314A (en) * 2013-09-03 2013-12-18 广州创维平面显示科技有限公司 Emotion recognition method and device
CN103488293A (en) * 2013-09-12 2014-01-01 北京航空航天大学 Man-machine motion interaction system and method based on expression recognition
CN104240720A (en) * 2013-06-24 2014-12-24 北京大学深圳研究生院 Voice emotion recognition method based on multi-fractal and information fusion
CN104538043A (en) * 2015-01-16 2015-04-22 北京邮电大学 Real-time emotion reminder for call
US20150206011A1 (en) * 2012-10-31 2015-07-23 Microsoft Technology Licensing, Llc Wearable emotion detection and feedback system
CN105205043A (en) * 2015-08-26 2015-12-30 苏州大学张家港工业技术研究院 Classification method and system of emotions of news readers
CN105260745A (en) * 2015-09-30 2016-01-20 西安沧海网络科技有限公司 Information push service system capable of carrying out emotion recognition and prediction based on big data


Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
叶斌: "Research on Text-Dependent Speech Emotion Recognition", China Master's Theses Full-text Database, Information Science and Technology *
毛启容: "Research on Speech Emotion Feature Extraction and Recognition Methods", Wanfang Data Knowledge Service Platform *


Also Published As

Publication number Publication date
CN107871113B (en) 2021-06-25

Similar Documents

Publication Publication Date Title
US10827967B2 (en) Emotional/behavioural/psychological state estimation system
CN108926338B (en) Heart rate prediction technique and device based on deep learning
Velloso et al. Qualitative activity recognition of weight lifting exercises
Goldstone et al. Altering object representations through category learning
CN105022929B (en) A kind of cognition accuracy analysis method of personal traits value test
Lindgaard et al. Attention web designers: You have 50 milliseconds to make a good first impression!
Rehg et al. Mobile health
Wohlgemuth et al. Linked control of syllable sequence and phonology in birdsong
CN100558290C (en) Electrophysiologicalintuition intuition indicator
CN109843179A (en) For detecting the combining classifiers of abnormal heart sound
CN107301863A (en) A kind of deaf-mute child's disfluency method of rehabilitation and rehabilitation training system
Kortelainen et al. Multimodal emotion recognition by combining physiological signals and facial expressions: a preliminary study
Javed et al. Toward an automated measure of social engagement for children with autism spectrum disorder—a personalized computational modeling approach
Nguyen et al. The dynamical approach to speech perception: From fine phonetic detail to abstract phonological categories
Jones et al. Biometric valence and arousal recognition
Yu et al. A hybrid user experience evaluation method for mobile games
CN106485085A (en) A kind of intellect service robot health identification system and method
Tan et al. Informing intelligent user interfaces by inferring affective states from body postures in ubiquitous computing environments
Hariharan et al. Blended emotion detection for decision support
CN110464303A (en) Sleep quality appraisal procedure and device
CN109044374A (en) It integrates audiovisual and continuously performs test method, apparatus and system
Bakhtiyari et al. Fuzzy model on human emotions recognition
CN108491519A (en) Man-machine interaction method and device, storage medium, terminal
CN107871113A (en) Method and apparatus for hybrid emotion recognition and detection
Vail et al. Gender differences in facial expressions of affect during learning

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant