CN103456314A - Emotion recognition method and device - Google Patents

Info

Publication number
CN103456314A
CN103456314A (application CN201310394834.0A; granted as CN103456314B)
Authority
CN
China
Prior art keywords
emotion
value
word
emotion value
tree
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201310394834.0A
Other languages
Chinese (zh)
Other versions
CN103456314B (en)
Inventor
王鲜
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Guangzhou Skyworth Flat Display Technology Co Ltd
Original Assignee
Guangzhou Skyworth Flat Display Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Guangzhou Skyworth Flat Display Technology Co Ltd filed Critical Guangzhou Skyworth Flat Display Technology Co Ltd
Priority to CN201310394834.0A priority Critical patent/CN103456314B/en
Publication of CN103456314A publication Critical patent/CN103456314A/en
Application granted granted Critical
Publication of CN103456314B publication Critical patent/CN103456314B/en
Expired - Fee Related legal-status Critical Current
Anticipated expiration legal-status Critical

Landscapes

  • Image Analysis (AREA)

Abstract

The invention is applicable to the field of communications and provides an emotion recognition method and device. A first emotion value is obtained from expression features extracted from a captured facial image. A second emotion value is obtained from a captured voice signal. At the same time, an emotion word is located in the speech text produced by speech recognition, yielding a third emotion value corresponding to that emotion word. A user emotion value is then determined as the sum of the first emotion value multiplied by a first weight, the second emotion value multiplied by a second weight, and the third emotion value multiplied by a third weight. Based on this user emotion value, a mobile terminal can judge the user's current mood and perform a preset operation.

Description

Emotion recognition method and device
Technical field
The invention belongs to the field of communications, and in particular relates to an emotion recognition method and device.
Background technology
Speech emotion interaction is emotional communication between a mobile terminal and a user: it lets the terminal recognize and perceive the user's mood, understand the user's joy, anger, grief and happiness, and give a corresponding emotional response, thereby removing the estrangement between user and terminal. At present, speech emotion interaction technology has entered fields such as industry, household appliances, communications, automotive electronics and consumer electronics. However, the existing technology has the following defects:
current voice interaction mainly remains in a conversational mode, chiefly taking a "question and answer" form;
the interaction appears somewhat stiff, and the television terminal gives no feedback on the user's emotional information;
and the television terminal cannot fully understand the user's joy, anger, grief and happiness.
As a result, the interactive experience of speech emotion is poor and lacks genuine mutual understanding; a certain estrangement still exists between the user and the mobile terminal.
Summary of the invention
An object of the present invention is to provide an emotion recognition method that identifies a user's emotion from the user's captured facial image and voice.
In one aspect, an emotion recognition method comprises:
performing image emotion recognition on a captured facial image to obtain a first emotion value;
performing speech emotion recognition on a captured voice signal to obtain a second emotion value;
finding an emotion word in a speech text to obtain a third emotion value corresponding to the emotion word, the speech text being generated by processing the voice signal with speech recognition; and
determining a user emotion value as the sum of the first emotion value multiplied by a first weight, the second emotion value multiplied by a second weight, and the third emotion value multiplied by a third weight.
In another aspect, the present invention further provides an emotion recognition device comprising:
a first emotion value unit for performing image emotion recognition on a captured facial image to obtain a first emotion value;
a second emotion value unit for performing speech emotion recognition on a captured voice signal to obtain a second emotion value;
a third emotion value unit for finding an emotion word in a speech text to obtain a third emotion value corresponding to the emotion word, the speech text being generated by processing the voice signal with speech recognition; and
a user emotion value unit for determining a user emotion value as the sum of the first emotion value multiplied by a first weight, the second emotion value multiplied by a second weight, and the third emotion value multiplied by a third weight.
In a further aspect, the present invention provides a mobile terminal comprising the above emotion recognition device.
In the present invention, the facial image is captured by the mobile terminal's camera; if an expression feature in an expression feature library matches the expression feature extracted from the facial image, the first emotion value corresponding to that expression feature is output. Likewise, if a voice feature extracted from the voice signal is identical to one contained in a voice feature library, the second emotion value corresponding to that voice feature is output. Meanwhile, an emotion word is found in the speech text: the emotion word in an emotion tree whose word sense best matches it is located, and the third emotion value is determined from the position of that best-matching word in the tree. The user emotion value is then determined as the sum of the first emotion value multiplied by a first weight, the second emotion value multiplied by a second weight, and the third emotion value multiplied by a third weight. The mobile terminal can thus judge the user's current mood from the user emotion value and perform a preset operation.
Brief description of the drawings
To illustrate the technical solutions of the embodiments of the present invention more clearly, the drawings needed in the description of the embodiments or the prior art are briefly introduced below. The drawings described below are obviously only some embodiments of the invention; those of ordinary skill in the art can derive other drawings from them without creative effort.
Fig. 1 is a workflow diagram of the emotion recognition method provided by Embodiment 1 of the present invention;
Fig. 2 is a structural diagram of the emotion recognition device provided by Embodiment 2 of the present invention.
Detailed description of the embodiments
To make the purpose, technical solutions and advantages of the present invention clearer, the invention is further described below with reference to the drawings and embodiments. It should be understood that the specific embodiments described here only explain the invention and are not intended to limit it.
The technical solutions of the invention are illustrated by the following specific embodiments.
Embodiment 1:
The present embodiment provides an emotion recognition method; Fig. 1 shows its implementation flow. For convenience of description, only the parts relevant to the embodiment of the invention are shown.
The emotion recognition method comprises:
Step S11: perform image emotion recognition on a captured facial image to obtain a first emotion value;
Step S12: perform speech emotion recognition on a captured voice signal to obtain a second emotion value;
Step S13: find an emotion word in a speech text to obtain a third emotion value corresponding to the emotion word, the speech text being generated by processing the voice signal with speech recognition;
Step S14: determine a user emotion value as the sum of the first emotion value multiplied by a first weight, the second emotion value multiplied by a second weight, and the third emotion value multiplied by a third weight.
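The fusion in step S14 is a plain weighted sum, which can be sketched as follows. The concrete weight values 0.4/0.3/0.3 are illustrative assumptions; the embodiment only requires that the three weights sum to 1 and be tuned to each recognizer's accuracy:

```python
def fuse_emotion_values(first, second, third, w1=0.4, w2=0.3, w3=0.3):
    """Step S14: user emotion value = first*w1 + second*w2 + third*w3.

    The weight values here are assumed for illustration; the description
    only requires w1 + w2 + w3 == 1.
    """
    assert abs(w1 + w2 + w3 - 1.0) < 1e-9, "weights must sum to 1"
    return first * w1 + second * w2 + third * w3

# A smiling face (1), an angry-sounding voice (-1), and the word "angry" (-1):
print(round(fuse_emotion_values(1.0, -1.0, -1.0), 2))  # → -0.2
```

With these assumed weights, a happy face is outvoted by two angry vocal/textual cues, giving a slightly negative overall mood.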
It should be noted that human beings are sentient and often show rich emotion; at the same time, the languages humans use to communicate contain abundant emotion words carrying emotion. Emotion words fall into three classes: commendatory, derogatory and neutral. To let the mobile terminal judge the user's emotion in real time, emotion values are defined. In this embodiment, the emotion value of a commendatory word is positive, that of a derogatory word is negative, and that of a neutral word is 0; as the emotional color of a commendatory word grows stronger, its emotion value increases, and as that of a derogatory word grows stronger, its emotion value decreases. For example: the emotion value of a neutral word is 0, of "worried" is -0.5, of "angry" is -1, of "joyful" is 0.5, and of "happy" is 1.
It should be noted that the first emotion value, the second emotion value, the third emotion value and the user emotion value are all emotion values; "first", "second" and "third" are merely designations.
In addition, it should also be noted that emotion words can be divided by word formation into emotion words with a negation prefix and emotion words without one. An emotion word generated by adding a negation prefix to an emotion word without one has an emotion value whose sign is opposite to that of the original word. For example: the emotion value of "happy" is 1, and the emotion value of "unhappy" is -1.
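The negation-prefix rule above amounts to a sign flip on the base word's value. A minimal sketch, using an invented two-word lexicon and the English prefix "un" as a stand-in for the negation prefixes of the original language:

```python
LEXICON = {"happy": 1.0, "angry": -1.0}   # illustrative emotion-word lexicon
NEGATION_PREFIXES = ("un",)               # assumed stand-in for negation prefixes

def emotion_value(word):
    """Return a word's emotion value; a negation prefix inverts the sign."""
    if word in LEXICON:
        return LEXICON[word]
    for prefix in NEGATION_PREFIXES:
        base = word[len(prefix):]
        if word.startswith(prefix) and base in LEXICON:
            return -LEXICON[base]  # opposite sign, same magnitude
    return 0.0  # words not in the lexicon are treated as neutral

print(emotion_value("happy"))    # → 1.0
print(emotion_value("unhappy"))  # → -1.0
```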
It should be noted that the first weight, the second weight and the third weight sum to 1. Because image emotion recognition, speech emotion recognition and emotion-word recognition have different accuracies, the weights given respectively to the first, second and third emotion values are adjusted accordingly to improve the accuracy of emotion recognition.
In this way, a facial image is captured in real time, image emotion recognition is applied to identify the user's expression, and the expression is recorded as an emotion word, so the corresponding emotion value can be obtained. For example: given an image of a user with a broad smile, image emotion recognition identifies the user as happy and records the corresponding emotion value: 1.
Meanwhile, the user's voice signal is collected, speech emotion recognition is applied to identify the user's emotion, and the emotion is recorded as an emotion word, so the corresponding emotion value can be obtained. For example: if the user shouts at someone, a loud and rapid voice signal is captured; after speech emotion recognition, it is judged as anger and the recorded emotion value is -1.
At the same time, speech recognition is applied to the collected voice signal to transcribe its content into a speech text; the speech text is then searched for an emotion word, and if one exists, its corresponding emotion value is recorded. For example: when the user angrily says "I am very angry", the corresponding voice signal is recognized and the result "I am very angry" is recorded in the speech text; because it contains the emotion word "angry", the recorded emotion value is -1.
After the first, second and third weights are determined for the first, second and third emotion values respectively, the user emotion value is calculated; it expresses the user's mood at that moment. Thus, based on the user's current expression and voice, the emotion recognition method of this embodiment judges the user's mood in real time, and the mobile terminal can adjust the interface presented to the user or perform a preset operation.
As one embodiment of the invention, since the mobile terminal carries a camera, before the step of performing image emotion recognition on the captured facial image to obtain the first emotion value, the emotion recognition method further comprises:
capturing the facial image with the camera of the mobile terminal.
In this way, the camera carried by the mobile terminal can photograph or video the user in real time to obtain the user's facial image.
As one embodiment of the invention, since the mobile terminal carries a microphone, before the step of performing speech emotion recognition on the captured voice signal to obtain the second emotion value, the emotion recognition method further comprises:
capturing the voice signal with the microphone of the mobile terminal.
In this way, the microphone of the mobile terminal can pick up the user's speech in real time and output a voice signal.
Preferably, the step of performing image emotion recognition on the captured facial image to obtain the first emotion value is specifically:
if an expression feature in an expression feature library matches the expression feature extracted from the facial image, outputting the first emotion value corresponding to that expression feature.
It should be noted that the expression feature library must be built in advance according to the matching requirements. For each user to be matched, the library stores the expression features of that user's various expressions, including information such as changes at the eye corners and the corners of the mouth and the extent to which the teeth are shown.
It should be noted that the expression feature library can be built locally or on a server, for example on a cloud server.
When the expression feature library is local, the emotion value corresponding to the matched expression feature can be taken directly as the first emotion value according to the matching result.
When the expression feature library is on a server, each time a facial image is captured and its expression feature extracted, the feature is sent to the server. The server traverses the library to match the feature; matching means finding an expression feature that satisfies a preset matching threshold (the threshold being determined by the accuracy of the library and the matching requirements). If a matching feature is found, the corresponding emotion value is fed back; after the emotion recognition device receives it, it is taken as the first emotion value.
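The threshold matching performed against the library can be sketched as below. The vector representation of an expression feature, the Euclidean distance metric and the threshold value 0.2 are all assumptions for illustration; the description only requires finding a library entry within a preset matching threshold:

```python
def match_feature(extracted, library, threshold=0.2):
    """Return the emotion value of the closest library entry within threshold, else None."""
    best_value, best_dist = None, threshold
    for entry in library:
        # Euclidean distance between the extracted and stored feature vectors
        dist = sum((a - b) ** 2 for a, b in zip(extracted, entry["feature"])) ** 0.5
        if dist <= best_dist:
            best_value, best_dist = entry["emotion_value"], dist
    return best_value

# Hypothetical two-entry library; features are invented 2-D vectors.
library = [
    {"feature": (0.9, 0.8), "emotion_value": 1.0},   # e.g. smiling-mouth features
    {"feature": (0.1, 0.2), "emotion_value": -1.0},  # e.g. angry features
]
print(match_feature((0.85, 0.78), library))  # → 1.0 (within threshold of the first entry)
```

The same shape of lookup would apply to the voice feature library described further below, with voice features (loudness, speaking rate, duration) in place of expression features.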
Preferably, the step of performing speech emotion recognition on the captured voice signal to obtain the second emotion value is specifically:
if a voice feature extracted from the voice signal is identical to one contained in a voice feature library, outputting the second emotion value corresponding to that voice feature.
It should be noted that the voice feature library is a model library built for voice feature matching and must therefore be built in advance to meet the matching requirements. For each user to be matched, the library stores that user's various voice features, including information such as the intensity and duration of the whole utterance and of the individual characters identified by speech recognition. For example: for each user to be matched, voice features such as speaking rate and loudness are collected under different emotions (angry, worried, neutral, joyful, happy, etc.).
It should be noted that the voice feature library can be built locally or on a server, for example on a cloud server.
When the voice feature library is local, the emotion value corresponding to the matched voice feature can be taken directly as the second emotion value according to the matching result.
When the voice feature library is on a server, each captured voice signal is stored and sent to the server as an audio file. The server traverses the library to match the voice features in the audio file; matching means finding a voice feature that satisfies a preset voice matching threshold (the threshold being determined by the accuracy of the library and the matching requirements). If a matching feature is found, the corresponding emotion value is fed back; after the emotion recognition device receives it, it is taken as the second emotion value.
Preferably, the method of calculating the third emotion value comprises:
determining one or more seed emotion words, and building an emotion tree with each seed emotion word as a root node, the root node and child nodes of each emotion tree being emotion words with the same class of emotion;
determining the third emotion value from the position of the emotion word in the emotion tree that contains it.
It should be noted that, to facilitate judging the emotion value of an emotion word, emotion trees are built: one each for commendatory words, derogatory words and neutral words.
For the emotion tree containing all commendatory words, one commendatory word is selected as the root node. All emotion words emotionally stronger than the root are placed in one region, and an emotion word is chosen from that region as one child of the root; within the region, words stronger than this child are placed in a sub-region from which a new emotion word is chosen as one of its children, and words weaker than it are placed in another sub-region from which a new emotion word is chosen as its other child. Likewise, all emotion words emotionally weaker than the root are placed in another region, an emotion word is chosen from it as the other child of the root, and its sub-regions are split and populated in the same way. Proceeding by analogy, the complete emotion tree is built. For example: "happiness" is selected as the root node with emotion value 1; "great delight" is selected as one child with emotion value 1.5; "joyful" is selected as the other child with emotion value 0.5; and so on.
Similarly, by analogy with the method of building the tree containing all commendatory words, a tree containing all derogatory words and a tree containing all neutral words are built.
Once an emotion tree is built, the emotion value of each of its nodes is determined correspondingly. For example, in the tree of commendatory words the root node's emotion value is 1; its emotionally weaker child is given value 0.5 and its stronger child 1.5; the weaker child of the 0.5 node is given 0.25 and its stronger child 0.75; the weaker child of the 1.5 node is given 1.25 and its stronger child 1.75; and so on, until the emotion values of the whole tree are determined.
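The value assignment just described follows a simple pattern: each child differs from its parent by a step that halves at every level (±0.5 at the first level, ±0.25 at the next, and so on), reproducing the values 0.5/1.5, then 0.25/0.75/1.25/1.75. A sketch under that reading; the dict-based node shape is an assumption:

```python
def build_emotion_tree(root_value=1.0, step=0.5, depth=2):
    """Assign emotion values down a binary emotion tree.

    The weaker child gets parent - step, the stronger child parent + step,
    and the step halves at each level, matching the worked example above.
    """
    node = {"value": root_value}
    if depth > 0:
        node["weaker"] = build_emotion_tree(root_value - step, step / 2, depth - 1)
        node["stronger"] = build_emotion_tree(root_value + step, step / 2, depth - 1)
    return node

tree = build_emotion_tree()               # commendatory tree, root value 1
print(tree["weaker"]["value"])            # → 0.5
print(tree["stronger"]["value"])          # → 1.5
print(tree["weaker"]["weaker"]["value"])  # → 0.25
print(tree["stronger"]["stronger"]["value"])  # → 1.75
```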
After the emotion trees are built, if an emotion word identical to one in the speech text is found in a tree, the corresponding emotion value is obtained directly.
As one embodiment of the invention, in order to reduce the number of nodes in the emotion trees and the time spent matching and searching within them, the method of calculating the third emotion value further comprises:
finding, in the emotion tree, the emotion word whose word sense best matches the emotion word in the speech text;
the step of determining the third emotion value from the position of the emotion word in the emotion tree that contains it then being specifically:
determining the third emotion value from the position, in the emotion tree that contains it, of the emotion word whose word sense matches best.
Specifically, after an emotion word is found in the speech text, the node whose emotion word is closest in dictionary sense to it is located in the emotion tree, and the third emotion value of the emotion word in the speech text is determined from that node's particular position in the tree. For example: the emotion value of a neutral word is 0, of "worried" -0.5, of "angry" -1, of "joyful" 0.5 and of "happy" 1; if the sense of the emotion word "indignant" contained in the speech text is closest to "angry", the emotion value of "indignant" is -1.
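The word-sense fallback can be sketched as a two-stage lookup: an exact tree-node match first, then the closest dictionary sense. The flat value table and the synonym map standing in for the dictionary comparison are invented for illustration:

```python
# Illustrative tree-node values and an assumed closest-sense map.
TREE_VALUES = {"anger": -1.0, "worry": -0.5, "joy": 0.5, "happiness": 1.0}
CLOSEST_SENSE = {"fury": "anger", "rage": "anger", "delight": "happiness"}

def third_emotion_value(word):
    """Exact tree node first; otherwise fall back to the closest word sense."""
    if word in TREE_VALUES:
        return TREE_VALUES[word]
    closest = CLOSEST_SENSE.get(word)
    return TREE_VALUES[closest] if closest else 0.0  # unmatched → neutral

print(third_emotion_value("anger"))  # → -1.0 (exact node)
print(third_emotion_value("fury"))   # → -1.0, via its closest sense "anger"
```

This mirrors the example above: a word absent from the tree inherits the value of the node it is closest to in sense, keeping the tree small.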
Embodiment 2:
Fig. 2 shows the structure of the emotion recognition device provided by Embodiment 2 of the present invention; for convenience of description, only the parts relevant to the embodiment are shown.
It should be noted that the emotion recognition device of this embodiment and the emotion recognition method of Embodiment 1 are mutually applicable.
The emotion recognition device comprises:
a first emotion value unit 21 for performing image emotion recognition on a captured facial image to obtain a first emotion value;
a second emotion value unit 22 for performing speech emotion recognition on a captured voice signal to obtain a second emotion value;
a third emotion value unit 23 for finding an emotion word in a speech text to obtain a third emotion value corresponding to the emotion word, the speech text being generated by processing the voice signal with speech recognition; and
a user emotion value unit 24 for determining a user emotion value as the sum of the first emotion value multiplied by a first weight, the second emotion value multiplied by a second weight, and the third emotion value multiplied by a third weight.
Preferably, the first emotion value unit 21 is specifically configured to:
output, if an expression feature in an expression feature library matches the expression feature extracted from the facial image, the first emotion value corresponding to that expression feature.
Preferably, the second emotion value unit 22 is specifically configured to:
output, if a voice feature extracted from the voice signal is identical to one contained in a voice feature library, the second emotion value corresponding to that voice feature.
Preferably, the third emotion value unit 23 further comprises a third emotion value calculating unit 231, which specifically comprises:
an emotion tree building unit 2311 for determining one or more seed emotion words and building an emotion tree with each seed emotion word as a root node, the root node and child nodes of each emotion tree being emotion words with the same class of emotion; and
a determining unit 2313 for determining the third emotion value from the position of the emotion word in the emotion tree that contains it.
Preferably, the third emotion value calculating unit further comprises:
a matching unit 2312 for finding, in the emotion tree, the emotion word whose word sense best matches the emotion word in the speech text;
the determining unit 2313 then being specifically configured to:
determine the third emotion value from the position, in the emotion tree that contains it, of the emotion word whose word sense matches best.
It should be noted that each functional unit of the emotion recognition device provided by this embodiment may be a software unit, a hardware unit, or a combination of both, and may be integrated as an independent component into the mobile terminal or run in an application system of the mobile terminal.
As one embodiment of the invention, a mobile terminal comprising the emotion recognition device of Embodiment 2 is also provided. The mobile terminal includes terminals with voice interaction functions such as smart televisions, smartphones, tablets (e.g. iPad) and intelligent robots.
In the embodiments of the present invention, the facial image is captured by the mobile terminal's camera; if an expression feature in an expression feature library matches the expression feature extracted from the facial image, the first emotion value corresponding to that expression feature is output. Likewise, if a voice feature extracted from the voice signal is identical to one contained in a voice feature library, the second emotion value corresponding to that voice feature is output. Meanwhile, an emotion word is found in the speech text: the emotion word in the emotion tree whose word sense best matches it is located, and the third emotion value is determined from the position of that best-matching word in the tree. The user emotion value is then determined as the sum of the first emotion value multiplied by a first weight, the second emotion value multiplied by a second weight, and the third emotion value multiplied by a third weight. The mobile terminal can thus judge the user's current mood from the user emotion value and perform a preset operation.
Those skilled in the art will appreciate that the units of Embodiment 2 are divided merely by functional logic and are not limited to this division, as long as the corresponding functions can be realized; in addition, the specific names of the functional units are only for mutual distinction and do not limit the protection scope of the invention.
Those of ordinary skill in the art will also understand that all or part of the steps of the methods in the above embodiments may be completed by a program instructing the relevant hardware, and the program may be stored in a computer-readable storage medium, the storage medium including a ROM/RAM, a magnetic disk, an optical disc, and the like.
The above content is a further detailed description of the present invention in conjunction with specific preferred embodiments, and it cannot be concluded that the specific implementation of the present invention is limited to these descriptions. For a person of ordinary skill in the technical field of the present invention, several equivalent substitutions or obvious modifications made without departing from the inventive concept, with identical performance or use, should all be regarded as falling within the scope of patent protection determined by the submitted claims.
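The tree-based computation of the third emotion value (a seed emotion word as root node, with the value determined by a word's position in the tree) can be sketched as follows. The example words, the depth-first lookup, and the depth-decay mapping from position to value are all assumptions for illustration; the patent does not fix a concrete formula:

```python
class EmotionNode:
    """Node of an emotion tree: the root is a seed emotion word, and
    every node in the tree carries the same class of emotion."""
    def __init__(self, word, children=None):
        self.word = word
        self.children = children or []

def depth_of(node, word, depth=0):
    """Depth-first search; return the depth of `word` in the tree, or None."""
    if node.word == word:
        return depth
    for child in node.children:
        found = depth_of(child, word, depth + 1)
        if found is not None:
            return found
    return None

def third_emotion_value(tree, word, seed_value=1.0, decay=0.8):
    """One possible position-based rule (assumed): attenuate the seed
    word's value by the matched word's depth in the tree."""
    depth = depth_of(tree, word)
    return None if depth is None else seed_value * decay ** depth

# Illustrative tree for the seed word "happy" (all words assumed):
happy_tree = EmotionNode("happy", [
    EmotionNode("pleased"),
    EmotionNode("delighted", [EmotionNode("ecstatic")]),
])
print(third_emotion_value(happy_tree, "ecstatic"))  # depth 2 in the tree
```

In a full implementation, the word passed in would first be chosen as the tree node whose meaning best matches the emotion word found in the speech text, as claims 5 and 10 describe.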

Claims (10)

1. An emotion recognition method, characterized in that the emotion recognition method comprises:
performing image emotion recognition on an acquired facial image to obtain a first emotion value;
performing speech emotion recognition on an acquired voice signal to obtain a second emotion value;
finding an emotion word in a speech text to obtain a third emotion value corresponding to the emotion word, the speech text being generated by processing the voice signal with a speech recognition technology;
determining a user emotion value according to the sum of the first emotion value multiplied by a first weight, the second emotion value multiplied by a second weight, and the third emotion value multiplied by a third weight.
2. The emotion recognition method according to claim 1, characterized in that the step of performing image emotion recognition on the acquired facial image to obtain the first emotion value is specifically:
if an expressive feature matching the expressive feature extracted from the facial image is found in an expressive-feature library, outputting the first emotion value corresponding to the expressive feature.
3. The emotion recognition method according to claim 1, characterized in that the step of performing speech emotion recognition on the acquired voice signal to obtain the second emotion value is specifically:
if a phonetic feature identical to a phonetic feature contained in a phonetic-feature library is extracted from the voice signal, outputting the second emotion value corresponding to the phonetic feature.
4. The emotion recognition method according to claim 1, characterized in that the method for calculating the third emotion value comprises:
determining one or more seed emotion words, and establishing an emotion tree with each seed emotion word as a root node, the root node and child nodes of the emotion tree being emotion words with the same class of emotion;
determining the third emotion value according to the position of the emotion word in the emotion tree containing the emotion word.
5. The emotion recognition method according to claim 4, characterized in that the method for calculating the third emotion value further comprises:
finding, in the emotion tree, the emotion word whose meaning best matches the emotion word in the speech text;
the step of determining the third emotion value according to the position of the emotion word in the emotion tree containing the emotion word being specifically:
determining the third emotion value according to the position of the best-matching emotion word in the emotion tree containing it.
6. An emotion recognition device, characterized in that the emotion recognition device comprises:
a first emotion value unit, configured to perform image emotion recognition on an acquired facial image to obtain a first emotion value;
a second emotion value unit, configured to perform speech emotion recognition on an acquired voice signal to obtain a second emotion value;
a third emotion value unit, configured to find an emotion word in a speech text to obtain a third emotion value corresponding to the emotion word, the speech text being generated by processing the voice signal with a speech recognition technology;
a user emotion value unit, configured to determine a user emotion value according to the sum of the first emotion value multiplied by a first weight, the second emotion value multiplied by a second weight, and the third emotion value multiplied by a third weight.
7. The emotion recognition device according to claim 6, characterized in that the first emotion value unit is specifically configured to:
if an expressive feature matching the expressive feature extracted from the facial image is found in an expressive-feature library, output the first emotion value corresponding to the expressive feature.
8. The emotion recognition device according to claim 6, characterized in that the second emotion value unit is specifically configured to:
if a phonetic feature identical to a phonetic feature contained in a phonetic-feature library is extracted from the voice signal, output the second emotion value corresponding to the phonetic feature.
9. The emotion recognition device according to claim 6, characterized in that the third emotion value unit further comprises a third emotion value calculation unit, the third emotion value calculation unit specifically comprising:
an emotion tree establishing unit, configured to determine one or more seed emotion words and establish an emotion tree with each seed emotion word as a root node, the root node and child nodes of the emotion tree being emotion words with the same class of emotion;
a determining unit, configured to determine the third emotion value according to the position of the emotion word in the emotion tree containing the emotion word.
10. The emotion recognition device according to claim 9, characterized in that the third emotion value calculation unit further comprises:
a matching unit, configured to find, in the emotion tree, the emotion word whose meaning best matches the emotion word in the speech text;
the determining unit being specifically configured to:
determine the third emotion value according to the position of the best-matching emotion word in the emotion tree containing it.
CN201310394834.0A 2013-09-03 2013-09-03 A kind of emotion identification method and device Expired - Fee Related CN103456314B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201310394834.0A CN103456314B (en) 2013-09-03 2013-09-03 A kind of emotion identification method and device


Publications (2)

Publication Number Publication Date
CN103456314A true CN103456314A (en) 2013-12-18
CN103456314B CN103456314B (en) 2016-02-17

Family

ID=49738609

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201310394834.0A Expired - Fee Related CN103456314B (en) 2013-09-03 2013-09-03 A kind of emotion identification method and device

Country Status (1)

Country Link
CN (1) CN103456314B (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP6466385B2 (en) * 2016-10-11 2019-02-06 本田技研工業株式会社 Service providing apparatus, service providing method, and service providing program

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2005182368A (en) * 2003-12-18 2005-07-07 Seiko Epson Corp Expression image estimating device, expression image estimating method and its program
CN101604204A (en) * 2009-07-09 2009-12-16 北京科技大学 Distributed cognitive technology for intelligent emotional robot
CN102298694A (en) * 2011-06-21 2011-12-28 广东爱科数字科技有限公司 Man-machine interaction identification system applied to remote information service

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
ZHANG SHIQING: "Research on Emotion Recognition Based on Speech and Face", China Doctoral Dissertations Full-text Database, Information Science and Technology, no. 5, 15 May 2013 (2013-05-15) *

Cited By (52)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104102627B (en) * 2014-07-11 2016-10-26 合肥工业大学 A kind of multi-modal noncontact sentiment analysis record system
CN104493827A (en) * 2014-11-17 2015-04-08 福建省泉州市第七中学 Intelligent cognitive robot and cognitive system thereof
CN104538043A (en) * 2015-01-16 2015-04-22 北京邮电大学 Real-time emotion reminder for call
US10013977B2 (en) 2015-11-18 2018-07-03 Shenzhen Skyworth-Rgb Electronic Co., Ltd. Smart home control method based on emotion recognition and the system thereof
CN105334743A (en) * 2015-11-18 2016-02-17 深圳创维-Rgb电子有限公司 Intelligent home control method and system based on emotion recognition
CN105334743B (en) * 2015-11-18 2018-10-26 深圳创维-Rgb电子有限公司 A kind of intelligent home furnishing control method and its system based on emotion recognition
CN105488135B (en) * 2015-11-25 2019-11-15 广州酷狗计算机科技有限公司 Live content classification method and device
CN105488135A (en) * 2015-11-25 2016-04-13 广州酷狗计算机科技有限公司 Live content classification method and device
CN105404681A (en) * 2015-11-25 2016-03-16 广州酷狗计算机科技有限公司 Live broadcast sentiment classification method and apparatus
CN106910512A (en) * 2015-12-18 2017-06-30 株式会社理光 The analysis method of voice document, apparatus and system
CN107590503A (en) * 2016-07-07 2018-01-16 深圳狗尾草智能科技有限公司 A kind of robot affection data update method and system
CN107609458A (en) * 2016-07-20 2018-01-19 平安科技(深圳)有限公司 Emotional feedback method and device based on expression recognition
WO2018023516A1 (en) * 2016-08-04 2018-02-08 易晓阳 Voice interaction recognition and control method
CN106251871A (en) * 2016-08-05 2016-12-21 易晓阳 A kind of Voice command music this locality playing device
CN106228989A (en) * 2016-08-05 2016-12-14 易晓阳 A kind of interactive voice identification control method
CN106297826A (en) * 2016-08-18 2017-01-04 竹间智能科技(上海)有限公司 Speech emotional identification system and method
CN107871113A (en) * 2016-09-22 2018-04-03 南昌工程学院 A kind of method and apparatus of emotion mixing recognition detection
CN107871113B (en) * 2016-09-22 2021-06-25 南昌工程学院 Emotion hybrid recognition detection method and device
CN106503646A (en) * 2016-10-19 2017-03-15 竹间智能科技(上海)有限公司 Multi-modal emotion identification system and method
CN106570496B (en) * 2016-11-22 2019-10-01 上海智臻智能网络科技股份有限公司 Emotion identification method and apparatus and intelligent interactive method and equipment
CN106570496A (en) * 2016-11-22 2017-04-19 上海智臻智能网络科技股份有限公司 Emotion recognition method and device and intelligent interaction method and device
CN108115678B (en) * 2016-11-28 2020-10-23 深圳光启合众科技有限公司 Robot and motion control method and device thereof
CN108115678A (en) * 2016-11-28 2018-06-05 深圳光启合众科技有限公司 Robot and its method of controlling operation and device
CN108334806B (en) * 2017-04-26 2021-12-14 腾讯科技(深圳)有限公司 Image processing method and device and electronic equipment
CN108334806A (en) * 2017-04-26 2018-07-27 腾讯科技(深圳)有限公司 Image processing method, device and electronic equipment
CN108305642A (en) * 2017-06-30 2018-07-20 腾讯科技(深圳)有限公司 The determination method and apparatus of emotion information
CN108305643A (en) * 2017-06-30 2018-07-20 腾讯科技(深圳)有限公司 The determination method and apparatus of emotion information
WO2019001458A1 (en) * 2017-06-30 2019-01-03 腾讯科技(深圳)有限公司 Method and device for determining emotion information
CN108305643B (en) * 2017-06-30 2019-12-06 腾讯科技(深圳)有限公司 Method and device for determining emotion information
CN108305642B (en) * 2017-06-30 2019-07-19 腾讯科技(深圳)有限公司 The determination method and apparatus of emotion information
CN107976919A (en) * 2017-07-28 2018-05-01 北京物灵智能科技有限公司 A kind of Study of Intelligent Robot Control method, system and electronic equipment
CN107976919B (en) * 2017-07-28 2019-11-15 北京物灵智能科技有限公司 A kind of Study of Intelligent Robot Control method, system and electronic equipment
CN107972028A (en) * 2017-07-28 2018-05-01 北京物灵智能科技有限公司 Man-machine interaction method, device and electronic equipment
CN107491435A (en) * 2017-08-14 2017-12-19 深圳狗尾草智能科技有限公司 Method and device based on Computer Automatic Recognition user feeling
CN107491435B (en) * 2017-08-14 2021-02-26 苏州狗尾草智能科技有限公司 Method and device for automatically identifying user emotion based on computer
CN107657950A (en) * 2017-08-22 2018-02-02 广州小鹏汽车科技有限公司 Automobile speech control method, system and device based on high in the clouds and more order words
CN107818787A (en) * 2017-10-31 2018-03-20 努比亚技术有限公司 A kind of processing method of voice messaging, terminal and computer-readable recording medium
CN108039181B (en) * 2017-11-02 2021-02-12 北京捷通华声科技股份有限公司 Method and device for analyzing emotion information of sound signal
CN108039181A (en) * 2017-11-02 2018-05-15 北京捷通华声科技股份有限公司 The emotion information analysis method and device of a kind of voice signal
CN108427916A (en) * 2018-02-11 2018-08-21 上海复旦通讯股份有限公司 A kind of monitoring system and monitoring method of mood of attending a banquet for customer service
CN108536802B (en) * 2018-03-30 2020-01-14 百度在线网络技术(北京)有限公司 Interaction method and device based on child emotion
CN108536802A (en) * 2018-03-30 2018-09-14 百度在线网络技术(北京)有限公司 Exchange method based on children's mood and device
CN109015666A (en) * 2018-06-21 2018-12-18 肖鑫茹 A kind of intelligent robot
CN110728983A (en) * 2018-07-16 2020-01-24 科大讯飞股份有限公司 Information display method, device, equipment and readable storage medium
CN110728983B (en) * 2018-07-16 2024-04-30 科大讯飞股份有限公司 Information display method, device, equipment and readable storage medium
CN109829363A (en) * 2018-12-18 2019-05-31 深圳壹账通智能科技有限公司 Expression recognition method, device, computer equipment and storage medium
CN109784414A (en) * 2019-01-24 2019-05-21 出门问问信息科技有限公司 Customer anger detection method, device and electronic equipment in a kind of phone customer service
CN110049155A (en) * 2019-03-29 2019-07-23 中至数据集团股份有限公司 Image display method, system, readable storage medium storing program for executing and mobile phone shell
CN110598648A (en) * 2019-09-17 2019-12-20 江苏慧眼数据科技股份有限公司 Video face detection method, video face detection unit and system
CN110598648B (en) * 2019-09-17 2023-05-09 无锡慧眼人工智能科技有限公司 Video face detection method, video face detection unit and system
CN110910903B (en) * 2019-12-04 2023-03-21 深圳前海微众银行股份有限公司 Speech emotion recognition method, device, equipment and computer readable storage medium
CN110910903A (en) * 2019-12-04 2020-03-24 深圳前海微众银行股份有限公司 Speech emotion recognition method, device, equipment and computer readable storage medium

Also Published As

Publication number Publication date
CN103456314B (en) 2016-02-17

Similar Documents

Publication Publication Date Title
CN103456314B (en) A kind of emotion identification method and device
CN108326855A (en) A kind of exchange method of robot, device, equipment and storage medium
US10679063B2 (en) Recognizing salient video events through learning-based multimodal analysis of visual features and audio-based analytics
CN105446146B (en) Intelligent terminal control method, system and intelligent terminal based on semantic analysis
CN110598576B (en) Sign language interaction method, device and computer medium
WO2019000991A1 (en) Voice print recognition method and apparatus
JP2018014094A (en) Virtual robot interaction method, system, and robot
CN110491383A (en) A kind of voice interactive method, device, system, storage medium and processor
CN102868830A (en) Switching control method and device of mobile terminal themes
CN110602516A (en) Information interaction method and device based on live video and electronic equipment
WO2020253064A1 (en) Speech recognition method and apparatus, and computer device and storage medium
CN107972028A (en) Man-machine interaction method, device and electronic equipment
CN110706707B (en) Method, apparatus, device and computer-readable storage medium for voice interaction
CN108491421A (en) A kind of method, apparatus, equipment and computer storage media generating question and answer
CN112434139A (en) Information interaction method and device, electronic equipment and storage medium
CN104267922A (en) Information processing method and electronic equipment
CN109710799B (en) Voice interaction method, medium, device and computing equipment
CN110910898B (en) Voice information processing method and device
CN107480291B (en) emotion interaction method based on humor generation and robot system
CN111883101B (en) Model training and speech synthesis method, device, equipment and medium
CN111506183A (en) Intelligent terminal and user interaction method
CN106844734B (en) Method for automatically generating session reply content
CN109192211A (en) A kind of method, device and equipment of voice signal identification
Galvan et al. Audiovisual affect recognition in spontaneous filipino laughter
US11150923B2 (en) Electronic apparatus and method for providing manual thereof

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
C14 Grant of patent or utility model
GR01 Patent grant
CF01 Termination of patent right due to non-payment of annual fee

Granted publication date: 20160217

Termination date: 20170903