CN103456314B - Emotion recognition method and device - Google Patents

Emotion recognition method and device Download PDF

Info

Publication number
CN103456314B
CN103456314B (application CN201310394834.0A)
Authority
CN
China
Prior art keywords
emotion
value
word
emotion value
tree
Prior art date
Legal status
Expired - Fee Related
Application number
CN201310394834.0A
Other languages
Chinese (zh)
Other versions
CN103456314A (en)
Inventor
王鲜
Current Assignee
Guangzhou Skyworth Flat Display Technology Co Ltd
Original Assignee
Guangzhou Skyworth Flat Display Technology Co Ltd
Priority date
Filing date
Publication date
Application filed by Guangzhou Skyworth Flat Display Technology Co Ltd filed Critical Guangzhou Skyworth Flat Display Technology Co Ltd
Priority to CN201310394834.0A priority Critical patent/CN103456314B/en
Publication of CN103456314A publication Critical patent/CN103456314A/en
Application granted granted Critical
Publication of CN103456314B publication Critical patent/CN103456314B/en

Landscapes

  • Image Analysis (AREA)

Abstract

The present invention is applicable to the communications field and provides an emotion recognition method and device. A first emotion value is obtained from the captured expression features. Meanwhile, a second emotion value is obtained from the captured speech signal, and an emotion word is found in the speech text produced by speech recognition technology to obtain a third emotion value corresponding to that emotion word. A user emotion value is then determined as the sum of the first emotion value multiplied by a first weight, the second emotion value multiplied by a second weight, and the third emotion value multiplied by a third weight. The mobile terminal can thus judge the user's current mood from the user emotion value and perform a preset operation.

Description

Emotion recognition method and device
Technical field
The invention belongs to the communications field, and particularly relates to an emotion recognition method and device.
Background technology
Speech emotion interaction is interaction in which a mobile terminal and a user communicate emotionally: the mobile terminal recognizes and perceives the user's mood, understands the user's joy, anger, grief and happiness, and gives a corresponding emotional response, thereby removing the estrangement between user and mobile terminal. At present, speech emotion interaction technology has entered fields such as industry, household appliances, communications, automotive electronics and consumer electronics. Existing speech emotion interaction technology, however, has the following defects:
Current voice interaction remains largely conversational, in a "question and answer" form;
The interaction appears somewhat stiff, and the television terminal cannot give the user emotional feedback;
The television terminal cannot fully understand the user's joy, anger, grief and happiness.
The speech emotion interaction experience is therefore poor and lacks genuine mutual understanding; a certain estrangement still remains between the user and the mobile terminal.
Summary of the invention
The object of the present invention is to provide an emotion recognition method that identifies a user's emotion from the user's captured facial image and voice.
In one aspect, an emotion recognition method is provided, the emotion recognition method comprising:
performing image emotion recognition on a captured facial image to obtain a first emotion value;
performing speech emotion recognition on a captured speech signal to obtain a second emotion value;
finding an emotion word in a speech text to obtain a third emotion value corresponding to the emotion word, the speech text being generated by processing the speech signal with speech recognition technology;
determining a user emotion value as the sum of the first emotion value multiplied by a first weight, the second emotion value multiplied by a second weight, and the third emotion value multiplied by a third weight.
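For clarity, the weighted combination above can be written out as a formula (the symbols E1, E2, E3 for the three emotion values, w1, w2, w3 for the weights, and Euser are our own notation, not the patent's):

Euser = w1·E1 + w2·E2 + w3·E3, where w1 + w2 + w3 = 1 (as Embodiment 1 requires of the weights).

For example, with illustrative weights w1 = 0.4 and w2 = w3 = 0.3, a smiling face (E1 = 1) spoken in an angry voice (E2 = -1) with the word "angry" in the text (E3 = -1) yields Euser = 0.4 - 0.3 - 0.3 = -0.2.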
In another aspect, the present invention also provides an emotion recognition device, the emotion recognition device comprising:
a first emotion value unit for performing image emotion recognition on a captured facial image to obtain a first emotion value;
a second emotion value unit for performing speech emotion recognition on a captured speech signal to obtain a second emotion value;
a third emotion value unit for finding an emotion word in a speech text to obtain a third emotion value corresponding to the emotion word, the speech text being generated by processing the speech signal with speech recognition technology;
a user emotion value unit for determining a user emotion value as the sum of the first emotion value multiplied by a first weight, the second emotion value multiplied by a second weight, and the third emotion value multiplied by a third weight.
In another aspect, the present invention also provides a mobile terminal comprising the above emotion recognition device.
In the present invention, the facial image is captured by the camera of the mobile terminal; if an expression feature matching the expression feature extracted from the facial image is found in an expression feature library, the first emotion value corresponding to that expression feature is output. Likewise, if a speech feature identical to one contained in a speech feature library is extracted from the speech signal, the second emotion value corresponding to that speech feature is output. Meanwhile, an emotion word is located in the speech text, the emotion word whose word sense best matches it is found in the emotion tree, and the third emotion value is determined from the position of that best-matching emotion word in the emotion tree containing it. The user emotion value is then determined as the sum of the first emotion value multiplied by the first weight, the second emotion value multiplied by the second weight, and the third emotion value multiplied by the third weight. The mobile terminal can thus judge the user's current mood from the user emotion value and perform a preset operation.
Accompanying drawing explanation
To illustrate the technical solutions in the embodiments of the present invention more clearly, the accompanying drawings needed in the description of the embodiments or the prior art are briefly introduced below. Obviously, the drawings described below show only some embodiments of the present invention; those of ordinary skill in the art can derive other drawings from them without creative effort.
Fig. 1 is a flowchart of the emotion recognition method provided by Embodiment 1 of the present invention;
Fig. 2 is a structural diagram of the emotion recognition device provided by Embodiment 2 of the present invention.
Embodiment
To make the object, technical solution and advantages of the present invention clearer, the present invention is further described below with reference to the drawings and embodiments. It should be understood that the specific embodiments described here are intended only to explain the present invention, not to limit it.
The technical solution of the present invention is described through the following specific embodiments.
Embodiment 1:
This embodiment provides an emotion recognition method. Fig. 1 shows its implementation flow; for convenience of description, only the parts relevant to this embodiment are shown.
The emotion recognition method comprises:
Step S11: performing image emotion recognition on the captured facial image to obtain a first emotion value;
Step S12: performing speech emotion recognition on the captured speech signal to obtain a second emotion value;
Step S13: finding an emotion word in the speech text to obtain a third emotion value corresponding to the emotion word, the speech text being generated by processing the speech signal with speech recognition technology;
Step S14: determining a user emotion value as the sum of the first emotion value multiplied by a first weight, the second emotion value multiplied by a second weight, and the third emotion value multiplied by a third weight.
It should be noted that humans are emotional beings and often show rich emotion; at the same time, the language used for human communication contains many emotion words carrying emotion. Emotion words fall into three classes: commendatory, derogatory and neutral. To allow the mobile terminal to judge the user's emotion in real time, an emotion value is defined. In this embodiment, the emotion value of a commendatory word is positive, that of a derogatory word is negative, and that of a neutral word is 0; the stronger the emotional color of a commendatory word, the larger its emotion value, and the stronger the emotional color of a derogatory word, the smaller (more negative) its emotion value. For example: the emotion value of a neutral word is 0, "worried" is -0.5, "angry" is -1, "pleased" is 0.5, and "happy" is 1.
It should be noted that the first emotion value, the second emotion value, the third emotion value and the user emotion value are all emotion values as defined above; "first", "second" and "third" are merely labels used to distinguish them.
In addition, it should also be noted that emotion words can further be divided by word formation into emotion words with a negative prefix and emotion words without one. An emotion word formed by adding a negative prefix to an emotion word without a negative prefix has an emotion value of opposite sign to that of the word without the prefix; for example, the emotion value of "happy" is 1 and that of "unhappy" is -1.
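For illustration only, a minimal sketch of such an emotion lexicon and the negative-prefix sign flip follows (our own Python sketch; the word list, prefixes and values merely follow the examples in this embodiment and are not prescribed by the patent):

```python
# Illustrative emotion lexicon following this embodiment: commendatory
# words positive, derogatory words negative, neutral words 0.
EMOTION_LEXICON = {
    "happy": 1.0,
    "pleased": 0.5,
    "calm": 0.0,
    "worried": -0.5,
    "angry": -1.0,
}

NEGATIVE_PREFIXES = ("un", "dis")  # assumed English prefixes for illustration


def emotion_value(word):
    """Return the emotion value of a word; a known emotion word carrying a
    negative prefix gets the opposite sign, per the rule above."""
    if word in EMOTION_LEXICON:
        return EMOTION_LEXICON[word]
    for prefix in NEGATIVE_PREFIXES:
        stem = word[len(prefix):]
        if word.startswith(prefix) and stem in EMOTION_LEXICON:
            return -EMOTION_LEXICON[stem]
    return None  # not an emotion word


assert emotion_value("unhappy") == -1.0  # sign opposite to "happy" (1.0)
```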
It should be noted that the first weight, the second weight and the third weight sum to 1. To improve the accuracy of emotion recognition, the weights assigned to the first, second and third emotion values (namely the first weight, the second weight and the third weight) are adjusted according to the differing accuracies of the three kinds of recognition: image emotion recognition, speech emotion recognition, and recognition via emotion words.
In this way, a facial image is captured in real time and image emotion recognition identifies the user's expression, which is recorded as an emotion word, from which the corresponding emotion value is obtained. For example: if the captured image shows the user with a broad smile, image emotion recognition judges the user to be happy, and the recorded emotion value is 1.
Meanwhile, the user's speech signal is collected and speech emotion recognition identifies the user's emotion, which is recorded as an emotion word with its corresponding emotion value. For example: if the user is shouting at someone, a loud and rapid speech signal is captured; after speech emotion recognition it is judged to be anger, and the recorded emotion value is -1.
Meanwhile, speech recognition is applied to the collected speech signal to transcribe its content into a speech text, which is then searched for emotion words; if one is found, its corresponding emotion value is recorded. For example: if the user says "I am very angry" in anger, speech recognition transcribes this into the speech text; since "I am very angry" contains the emotion word "angry", the recorded emotion value is -1.
After the first, second and third weights are determined for the first, second and third emotion values respectively, the user emotion value is calculated; it expresses the user's mood at that moment. The emotion recognition method of this embodiment thus judges the user's mood in real time from the user's current expression and voice, so that the mobile terminal can adjust the interface presented to the user or perform a preset operation.
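The weighted fusion itself is a one-line computation. The following minimal sketch (our illustration; the weight values, threshold and preset action are assumptions, not taken from the patent) shows it together with a preset operation triggered by the result:

```python
def user_emotion_value(e1, e2, e3, w1=0.4, w2=0.3, w3=0.3):
    """Weighted sum of the three emotion values; the weights must sum to 1."""
    assert abs(w1 + w2 + w3 - 1.0) < 1e-9
    return w1 * e1 + w2 * e2 + w3 * e3


# Smiling face (1), loud angry voice (-1), the word "angry" in the text (-1):
value = user_emotion_value(1.0, -1.0, -1.0)   # -> about -0.2
if value < -0.1:  # hypothetical threshold for triggering a preset operation
    print("user seems upset:", value)
```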
As one embodiment of the present invention, since the mobile terminal carries a camera, before the step of performing image emotion recognition on the captured facial image to obtain the first emotion value, the emotion recognition method further comprises:
capturing the facial image through the camera of the mobile terminal.
In this way, the camera carried by the mobile terminal can photograph or record the user in real time to obtain the user's facial image.
As one embodiment of the present invention, since the mobile terminal carries a microphone, before the step of performing speech emotion recognition on the captured speech signal to obtain the second emotion value, the emotion recognition method further comprises:
capturing the speech signal through the microphone of the mobile terminal.
In this way, the microphone of the mobile terminal can pick up the user's speech in real time and output a speech signal.
Preferably, the step of performing image emotion recognition on the captured facial image to obtain the first emotion value is specifically:
if an expression feature matching the expression feature extracted from the facial image is found in an expression feature library, outputting the first emotion value corresponding to that expression feature.
It should be noted that the expression feature library is built in advance according to matching needs; for each user to be matched, it stores the expression features of that user's various expressions, including features such as changes at the corners of the eyes, changes at the corners of the mouth, and how widely the teeth are shown.
It should also be noted that the expression feature library can be built locally or on a server, for example on a cloud server.
When the expression feature library is local, the emotion value corresponding to the matched expression feature is obtained directly from the matching result as the first emotion value.
When the expression feature library is on a server, each time a facial image is captured and its expression feature extracted, the feature is sent to the server. The server traverses the expression feature library to match it, searching for an expression feature that meets a preset matching threshold (the matching threshold being determined by the accuracy of the library and the matching requirements); if a matching expression feature is found, the corresponding emotion value is returned. When the emotion recognition device receives this emotion value, it takes it as the first emotion value.
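As an illustration of this threshold matching, a minimal sketch follows (the feature vectors, the use of cosine similarity as the comparison, and the threshold value are our assumptions; the patent does not fix a particular feature representation or similarity measure):

```python
import math

# Library entries: (expression feature vector, emotion value); the
# three-dimensional vectors are placeholders for real expression features.
FEATURE_LIBRARY = [
    ([0.9, 0.8, 0.1], 1.0),   # e.g. broad smile -> happy
    ([0.1, 0.2, 0.9], -1.0),  # e.g. glare/frown -> angry
]
MATCH_THRESHOLD = 0.95  # preset per library accuracy and matching needs


def cosine_similarity(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)


def match_expression(feature):
    """Return the emotion value of the best library match above threshold,
    or None when no library entry matches well enough."""
    best_vec, best_val = max(
        FEATURE_LIBRARY, key=lambda entry: cosine_similarity(feature, entry[0]))
    if cosine_similarity(feature, best_vec) >= MATCH_THRESHOLD:
        return best_val
    return None
```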
Preferably, the step of performing speech emotion recognition on the captured speech signal to obtain the second emotion value is specifically:
if a speech feature identical to one contained in a speech feature library is extracted from the speech signal, outputting the second emotion value corresponding to that speech feature.
It should be noted that the speech feature library is a model library built in advance for speech feature matching; for each user to be matched, it stores that user's various speech features, including the intensity and duration of the whole utterance and the intensity and duration of each character recognized by speech recognition. For example, for each user to be matched, speech features such as speech rate and volume are collected under different emotions (angry, worried, neutral, pleased, happy, etc.).
It should also be noted that the speech feature library can be built locally or on a server, for example on a cloud server.
When the speech feature library is local, the emotion value corresponding to the matched speech feature is obtained directly from the matching result as the second emotion value.
When the speech feature library is on a server, each captured speech signal is stored as an audio file and sent to the server. The server traverses the speech feature library to match the speech features in the audio file, searching for a speech feature that meets a preset voice matching threshold (the voice matching threshold being determined by the accuracy of the library and the matching requirements); if a matching speech feature is found, the corresponding emotion value is returned. When the emotion recognition device receives this emotion value, it takes it as the second emotion value.
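A corresponding sketch for the speech side (again with our own assumptions: per-emotion templates of intensity and speech rate per user, Euclidean distance as the comparison, and the threshold value):

```python
# Per-emotion speech templates: ((mean intensity in dB, speech rate in
# characters per second), emotion value); the numbers are illustrative.
SPEECH_TEMPLATES = {
    "angry":   ((75.0, 6.0), -1.0),
    "worried": ((55.0, 3.0), -0.5),
    "neutral": ((60.0, 4.0), 0.0),
    "happy":   ((68.0, 5.0), 1.0),
}
VOICE_MATCH_THRESHOLD = 5.0  # preset maximum feature distance


def match_speech(intensity_db, rate_cps):
    """Return the emotion value of the nearest template within threshold,
    or None when even the nearest template is too far away."""
    def distance(template):
        (intensity, rate), _ = template
        return ((intensity_db - intensity) ** 2 + (rate_cps - rate) ** 2) ** 0.5

    label = min(SPEECH_TEMPLATES, key=lambda k: distance(SPEECH_TEMPLATES[k]))
    if distance(SPEECH_TEMPLATES[label]) <= VOICE_MATCH_THRESHOLD:
        return SPEECH_TEMPLATES[label][1]
    return None
```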
Preferably, the method for calculating the third emotion value comprises:
determining one or more seed emotion words, and building an emotion tree with each seed emotion word as its root node, the root node and child nodes of the emotion tree being emotion words with the same class of emotion;
determining the third emotion value according to the position of the emotion word in the emotion tree containing it.
It should be noted that, to ease judging the emotion value of an emotion word, emotion trees are built: one each for commendatory words, derogatory words and neutral words.
In the emotion tree containing all commendatory words, one commendatory word is selected as the root node. All emotion words whose emotion is stronger than the root's are placed in one region, and one of them is chosen as a child node of the root; within that region, the words stronger than this child form one subregion, from which a new emotion word is selected as that child's own child, and the words weaker than it form another subregion, from which another child is selected. Meanwhile, all emotion words whose emotion is weaker than the root's are placed in another region and handled in the same way: one of them is chosen as the other child of the root, and the words in that region stronger and weaker than this child are split into subregions from which its own children are selected. Continuing in this way, the complete emotion tree is built. For example: "happy" is selected as the root node, with emotion value 1; "overjoyed" is selected as one child of the root, with emotion value 1.5; "pleased" is selected as the other child, with emotion value 0.5; and so on.
Similarly, following the method used to build the emotion tree containing all commendatory words, the emotion tree containing all derogatory words and the emotion tree containing all neutral words are built.
Once an emotion tree is built, the emotion value of each node is determined accordingly. For example, for the tree of commendatory words: the root node's emotion value is 1; a node weaker in emotional color than the root, with value 0.5, becomes one child of the root, and a node stronger than the root, with value 1.5, becomes the other child. The 0.5 node then receives a weaker child with value 0.25 and a stronger child with value 0.75, while the 1.5 node receives a weaker child with value 1.25 and a stronger child with value 1.75; continuing in this way, the emotion value of every node in the tree is determined.
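A minimal sketch of such an emotion tree and a position-based value lookup (ours; the node values follow the paragraph above, while the words at the 0.25 and 0.75 nodes are our own illustrative choices):

```python
class EmotionNode:
    """Node of an emotion tree; each node has a weaker and a stronger child."""
    def __init__(self, word, value):
        self.word = word
        self.value = value
        self.weaker = None    # child with weaker emotional color
        self.stronger = None  # child with stronger emotional color


# Commendatory tree following the values given in this embodiment.
root = EmotionNode("happy", 1.0)
root.weaker = EmotionNode("pleased", 0.5)
root.stronger = EmotionNode("overjoyed", 1.5)
root.weaker.weaker = EmotionNode("content", 0.25)      # word choice assumed
root.weaker.stronger = EmotionNode("delighted", 0.75)  # word choice assumed


def find_value(node, word):
    """Return the emotion value fixed by the word's position in the tree."""
    if node is None:
        return None
    if node.word == word:
        return node.value
    found = find_value(node.weaker, word)
    return found if found is not None else find_value(node.stronger, word)
```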
Once the emotion tree is built, if an emotion word identical to the emotion word in the speech text is found in the emotion tree, the corresponding emotion value is obtained directly.
As one embodiment of the present invention, in order to reduce the number of nodes in the emotion tree and shorten the time spent matching and searching in it, the method for calculating the third emotion value further comprises:
finding in the emotion tree the emotion word whose word sense best matches the emotion word in the speech text;
the step of determining the third emotion value according to the position of the emotion word in the emotion tree containing it then being specifically:
determining the third emotion value according to the position, in the emotion tree containing it, of the emotion word whose word sense best matches.
Specifically, after an emotion word is found in the speech text, the node of the emotion word closest to its dictionary sense is looked up in the emotion tree, and the third emotion value of the emotion word in the speech text is determined from that node's position in the tree. For example: the emotion value of a neutral word is 0, "worried" is -0.5, "angry" is -1, "pleased" is 0.5 and "happy" is 1; if the sense of the emotion word "furious" in the speech text is closest to "angry", the emotion value of "furious" is -1.
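A minimal sketch of this closest-sense lookup (ours; the synonym table stands in for the dictionary word senses, which the patent obtains from a dictionary rather than a fixed map):

```python
# Hypothetical map from an emotion word to the tree word closest in
# dictionary sense; in practice this comes from dictionary definitions.
CLOSEST_SENSE = {
    "furious": "angry",
    "cheerful": "pleased",
}
# Emotion values fixed by node positions in the trees (per this embodiment).
TREE_VALUES = {"happy": 1.0, "pleased": 0.5, "worried": -0.5, "angry": -1.0}


def third_emotion_value(word):
    """Return the tree value of the word itself, or of its closest-sense tree word."""
    if word in TREE_VALUES:            # the word is itself a tree node
        return TREE_VALUES[word]
    closest = CLOSEST_SENSE.get(word)  # otherwise use the closest dictionary sense
    return TREE_VALUES.get(closest) if closest else None


assert third_emotion_value("furious") == -1.0  # closest in sense to "angry"
```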
Embodiment 2:
Fig. 2 shows the structure of the emotion recognition device provided by Embodiment 2 of the present invention; for convenience of description, only the parts relevant to this embodiment are shown.
It should be noted that the emotion recognition device provided by this embodiment corresponds to the emotion recognition method provided by Embodiment 1.
An emotion recognition device, comprising:
a first emotion value unit 21 for performing image emotion recognition on the captured facial image to obtain a first emotion value;
a second emotion value unit 22 for performing speech emotion recognition on the captured speech signal to obtain a second emotion value;
a third emotion value unit 23 for finding an emotion word in the speech text to obtain a third emotion value corresponding to the emotion word, the speech text being generated by processing the speech signal with speech recognition technology;
a user emotion value unit 24 for determining a user emotion value as the sum of the first emotion value multiplied by a first weight, the second emotion value multiplied by a second weight, and the third emotion value multiplied by a third weight.
Preferably, the first emotion value unit 21 is specifically configured to:
output, if an expression feature matching the expression feature extracted from the facial image is found in the expression feature library, the first emotion value corresponding to that expression feature.
Preferably, the second emotion value unit 22 is specifically configured to:
output, if a speech feature identical to one contained in the speech feature library is extracted from the speech signal, the second emotion value corresponding to that speech feature.
Preferably, the third emotion value unit 23 further comprises a third emotion value calculating unit 231, which specifically comprises:
an emotion tree building unit 2311 for determining one or more seed emotion words and building an emotion tree with each seed emotion word as its root node, the root node and child nodes of the emotion tree being emotion words with the same class of emotion;
a determining unit 2313 for determining the third emotion value according to the position of the emotion word in the emotion tree containing it.
Preferably, the third emotion value calculating unit further comprises:
a matching unit 2312 for finding in the emotion tree the emotion word whose word sense best matches the emotion word in the speech text;
the determining unit 2313 then being specifically configured to:
determine the third emotion value according to the position, in the emotion tree containing it, of the emotion word whose word sense best matches.
It should be noted that each functional unit of the emotion recognition device provided by this embodiment may be a software unit, a hardware unit, or a combination of both, and may be integrated into a mobile terminal as an independent component or run within an application system of the mobile terminal.
As one embodiment of the present invention, a mobile terminal is also provided, comprising the emotion recognition device of Embodiment 2. The mobile terminal includes terminals with an interaction function such as smart televisions, smartphones, iPads and intelligent robots.
In the embodiments of the present invention, the facial image is captured by the camera of the mobile terminal; if an expression feature matching the expression feature extracted from the facial image is found in the expression feature library, the first emotion value corresponding to that expression feature is output. Likewise, if a speech feature identical to one contained in the speech feature library is extracted from the speech signal, the second emotion value corresponding to that speech feature is output. Meanwhile, an emotion word is located in the speech text, the emotion word whose word sense best matches it is found in the emotion tree, and the third emotion value is determined from the position of that best-matching emotion word in the emotion tree containing it. The user emotion value is then determined as the sum of the first emotion value multiplied by the first weight, the second emotion value multiplied by the second weight, and the third emotion value multiplied by the third weight. The mobile terminal can thus judge the user's current mood from the user emotion value and perform a preset operation.
Those skilled in the art will appreciate that the units of Embodiment 2 above are divided according to functional logic, but the division is not limited thereto as long as the corresponding functions can be realized; in addition, the specific names of the functional units are only for ease of mutual distinction and do not limit the protection scope of the present invention.
Those of ordinary skill in the art will also understand that all or part of the steps of the above method embodiments can be completed by hardware instructed by a program, and the program can be stored in a computer-readable storage medium such as a ROM/RAM, magnetic disk or optical disk.
The above content is a further detailed description of the present invention in combination with specific preferred embodiments, and the specific implementation of the present invention shall not be regarded as limited to these descriptions. Equivalent substitutions or obvious modifications made by those of ordinary skill in the art without departing from the concept of the present invention, and with identical performance or use, shall all be deemed to fall within the scope of patent protection determined by the claims as submitted.

Claims (8)

1. An emotion recognition method, characterized in that the emotion recognition method comprises:
performing image emotion recognition on a captured facial image to obtain a first emotion value;
performing speech emotion recognition on a captured speech signal to obtain a second emotion value;
finding an emotion word in a speech text to obtain a third emotion value corresponding to the emotion word, the speech text being generated by processing the speech signal with speech recognition technology, wherein the method for calculating the third emotion value comprises: determining one or more seed emotion words, and building an emotion tree with each seed emotion word as its root node, the root node and child nodes of the emotion tree being emotion words with the same class of emotion; and determining the third emotion value according to the position of the emotion word in the emotion tree containing it;
determining a user emotion value as the sum of the first emotion value multiplied by a first weight, the second emotion value multiplied by a second weight, and the third emotion value multiplied by a third weight.
2. The emotion recognition method as claimed in claim 1, characterized in that the step of performing image emotion recognition on the captured facial image to obtain the first emotion value is specifically:
if an expression feature matching the expression feature extracted from the facial image is found in an expression feature library, outputting the first emotion value corresponding to that expression feature.
3. The emotion recognition method as claimed in claim 1, characterized in that the step of performing speech emotion recognition on the captured speech signal to obtain the second emotion value is specifically:
if a speech feature identical to one contained in a speech feature library is extracted from the speech signal, outputting the second emotion value corresponding to that speech feature.
4. The emotion recognition method as claimed in claim 1, characterized in that the method for calculating the third emotion value further comprises:
finding in the emotion tree the emotion word whose word sense best matches the emotion word in the speech text;
the step of determining the third emotion value according to the position of the emotion word in the emotion tree containing it then being specifically:
determining the third emotion value according to the position, in the emotion tree containing it, of the emotion word whose word sense best matches.
5. An emotion recognition device, characterized in that the emotion recognition device comprises:
a first emotion value unit for performing image emotion recognition on a captured facial image to obtain a first emotion value;
a second emotion value unit for performing speech emotion recognition on a captured speech signal to obtain a second emotion value;
a third emotion value unit for finding an emotion word in a speech text to obtain a third emotion value corresponding to the emotion word, the speech text being generated by processing the speech signal with speech recognition technology, wherein the third emotion value unit further comprises a third emotion value calculating unit, which specifically comprises: an emotion tree building unit for determining one or more seed emotion words and building an emotion tree with each seed emotion word as its root node, the root node and child nodes of the emotion tree being emotion words with the same class of emotion; and a determining unit for determining the third emotion value according to the position of the emotion word in the emotion tree containing it;
a user emotion value unit for determining a user emotion value as the sum of the first emotion value multiplied by a first weight, the second emotion value multiplied by a second weight, and the third emotion value multiplied by a third weight.
6. The emotion recognition device as claimed in claim 5, characterized in that the first emotion value unit is specifically configured to:
output, if an expression feature matching the expression feature extracted from the facial image is found in an expression feature library, the first emotion value corresponding to that expression feature.
7. The emotion recognition device as claimed in claim 5, characterized in that the second emotion value unit is specifically configured to:
output, if a speech feature identical to one contained in a speech feature library is extracted from the speech signal, the second emotion value corresponding to that speech feature.
8. The emotion recognition device as claimed in claim 5, characterized in that the third emotion value calculating unit further comprises:
a matching unit for finding in the emotion tree the emotion word whose word sense best matches the emotion word in the speech text;
the determining unit then being specifically configured to:
determine the third emotion value according to the position, in the emotion tree containing it, of the emotion word whose word sense best matches.
CN201310394834.0A 2013-09-03 2013-09-03 Emotion recognition method and device Expired - Fee Related CN103456314B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201310394834.0A CN103456314B (en) 2013-09-03 2013-09-03 Emotion recognition method and device

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201310394834.0A CN103456314B (en) 2013-09-03 2013-09-03 Emotion recognition method and device

Publications (2)

Publication Number Publication Date
CN103456314A CN103456314A (en) 2013-12-18
CN103456314B true CN103456314B (en) 2016-02-17

Family

ID=49738609

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201310394834.0A Expired - Fee Related CN103456314B (en) 2013-09-03 2013-09-03 Emotion recognition method and device

Country Status (1)

Country Link
CN (1) CN103456314B (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107918637A (en) * 2016-10-11 2018-04-17 本田技研工业株式会社 service providing apparatus and service providing method

Families Citing this family (35)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104102627B * 2014-07-11 2016-10-26 合肥工业大学 Multi-modal non-contact emotion analysis and recording system
CN104493827A (en) * 2014-11-17 2015-04-08 福建省泉州市第七中学 Intelligent cognitive robot and cognitive system thereof
CN104538043A (en) * 2015-01-16 2015-04-22 北京邮电大学 Real-time emotion reminder for call
CN105334743B * 2015-11-18 2018-10-26 深圳创维-Rgb电子有限公司 Smart-home control method and system based on emotion recognition
CN105404681A (en) * 2015-11-25 2016-03-16 广州酷狗计算机科技有限公司 Live broadcast sentiment classification method and apparatus
CN105488135B (en) * 2015-11-25 2019-11-15 广州酷狗计算机科技有限公司 Live content classification method and device
CN106910512A * 2015-12-18 2017-06-30 株式会社理光 Speech file analysis method, apparatus and system
CN107590503A * 2016-07-07 2018-01-16 深圳狗尾草智能科技有限公司 Robot emotion data updating method and system
CN107609458A (en) * 2016-07-20 2018-01-19 平安科技(深圳)有限公司 Emotional feedback method and device based on expression recognition
WO2018023516A1 (en) * 2016-08-04 2018-02-08 易晓阳 Voice interaction recognition and control method
CN106228989A * 2016-08-05 2016-12-14 易晓阳 Voice interaction recognition control method
CN106251871A * 2016-08-05 2016-12-21 易晓阳 Voice-controlled local music playing device
CN106297826A (en) * 2016-08-18 2017-01-04 竹间智能科技(上海)有限公司 Speech emotional identification system and method
CN107871113B (en) * 2016-09-22 2021-06-25 南昌工程学院 Emotion hybrid recognition detection method and device
CN106503646B (en) * 2016-10-19 2020-07-10 竹间智能科技(上海)有限公司 Multi-mode emotion recognition system and method
CN106570496B (en) * 2016-11-22 2019-10-01 上海智臻智能网络科技股份有限公司 Emotion identification method and apparatus and intelligent interactive method and equipment
CN108115678B (en) * 2016-11-28 2020-10-23 深圳光启合众科技有限公司 Robot and motion control method and device thereof
CN108334806B (en) * 2017-04-26 2021-12-14 腾讯科技(深圳)有限公司 Image processing method and device and electronic equipment
WO2019001458A1 (en) * 2017-06-30 2019-01-03 腾讯科技(深圳)有限公司 Method and device for determining emotion information
CN108305643B (en) * 2017-06-30 2019-12-06 腾讯科技(深圳)有限公司 Method and device for determining emotion information
CN108305642B * 2017-06-30 2019-07-19 腾讯科技(深圳)有限公司 Method and apparatus for determining emotion information
CN107976919B * 2017-07-28 2019-11-15 北京物灵智能科技有限公司 Intelligent robot control method, system and electronic equipment
CN107972028B (en) * 2017-07-28 2020-10-23 北京物灵智能科技有限公司 Man-machine interaction method and device and electronic equipment
CN107491435B (en) * 2017-08-14 2021-02-26 苏州狗尾草智能科技有限公司 Method and device for automatically identifying user emotion based on computer
CN107657950B (en) * 2017-08-22 2021-07-13 广州小鹏汽车科技有限公司 Automobile voice control method, system and device based on cloud and multi-command words
CN107818787B (en) * 2017-10-31 2021-02-05 努比亚技术有限公司 Voice information processing method, terminal and computer readable storage medium
CN108039181B (en) * 2017-11-02 2021-02-12 北京捷通华声科技股份有限公司 Method and device for analyzing emotion information of sound signal
CN108427916A * 2018-02-11 2018-08-21 上海复旦通讯股份有限公司 Monitoring system and monitoring method for customer-service agent emotion
CN108536802B (en) * 2018-03-30 2020-01-14 百度在线网络技术(北京)有限公司 Interaction method and device based on child emotion
CN109015666A * 2018-06-21 2018-12-18 肖鑫茹 Intelligent robot
CN110728983B (en) * 2018-07-16 2024-04-30 科大讯飞股份有限公司 Information display method, device, equipment and readable storage medium
CN109784414A * 2019-01-24 2019-05-21 出门问问信息科技有限公司 Method, device and electronic equipment for detecting customer anger in telephone customer service
CN110049155A (en) * 2019-03-29 2019-07-23 中至数据集团股份有限公司 Image display method, system, readable storage medium storing program for executing and mobile phone shell
CN110598648B (en) * 2019-09-17 2023-05-09 无锡慧眼人工智能科技有限公司 Video face detection method, video face detection unit and system
CN110910903B (en) * 2019-12-04 2023-03-21 深圳前海微众银行股份有限公司 Speech emotion recognition method, device, equipment and computer readable storage medium

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101604204A (en) * 2009-07-09 2009-12-16 北京科技大学 Distributed cognitive technology for intelligent emotional robot
CN102298694A (en) * 2011-06-21 2011-12-28 广东爱科数字科技有限公司 Man-machine interaction identification system applied to remote information service

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2005182368A (en) * 2003-12-18 2005-07-07 Seiko Epson Corp Expression image estimating device, expression image estimating method and its program

Patent Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101604204A (en) * 2009-07-09 2009-12-16 北京科技大学 Distributed cognitive technology for intelligent emotional robot
CN102298694A (en) * 2011-06-21 2011-12-28 广东爱科数字科技有限公司 Man-machine interaction identification system applied to remote information service

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Research on Emotion Recognition Based on Speech and Face (基于语音和人脸的情感识别研究); Zhang Shiqing (张石清); China Doctoral Dissertations Full-text Database, Information Science and Technology; 2013-05-15 (No. 5); pp. 78-85 *

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107918637A (en) * 2016-10-11 2018-04-17 本田技研工业株式会社 service providing apparatus and service providing method
CN107918637B (en) * 2016-10-11 2022-02-25 本田技研工业株式会社 Service providing apparatus and service providing method

Also Published As

Publication number Publication date
CN103456314A (en) 2013-12-18

Similar Documents

Publication Publication Date Title
CN103456314B (en) Emotion recognition method and device
CN108326855A (en) Robot interaction method, apparatus, device and storage medium
CN106874265B (en) Content output method matched with user emotion, electronic equipment and server
CN105446146B (en) Intelligent terminal control method, system and intelligent terminal based on semantic analysis
CN107464555B (en) Method, computing device and medium for enhancing audio data including speech
CN112532897B (en) Video clipping method, device, equipment and computer readable storage medium
CN110491383A (en) Voice interaction method, device, system, storage medium and processor
CN110602516A (en) Information interaction method and device based on live video and electronic equipment
CN107040452B (en) Information processing method and device and computer readable storage medium
CN102868830A (en) Switching control method and device for mobile terminal themes
CN110309254A (en) Intelligent robot and man-machine interaction method
CN112434139A (en) Information interaction method and device, electronic equipment and storage medium
WO2021134417A1 (en) Interactive behavior prediction method, intelligent device, and computer readable storage medium
CN104866308A (en) Scenario image generation method and apparatus
CN109710799B (en) Voice interaction method, medium, device and computing equipment
CN110706707B (en) Method, apparatus, device and computer-readable storage medium for voice interaction
CN104267922A (en) Information processing method and electronic equipment
CN111966212A (en) Multi-mode-based interaction method and device, storage medium and smart screen device
CN111177462B (en) Video distribution timeliness determination method and device
CN110825164A (en) Interaction method and system based on wearable intelligent equipment special for children
CN109656655A (en) Method, device and storage medium for executing an interactive instruction
CN109324515A (en) Method and control terminal for controlling an intelligent electric appliance
CN110910898B (en) Voice information processing method and device
CN107480291B (en) Emotion interaction method based on humor generation, and robot system
CN111883101B (en) Model training and speech synthesis method, device, equipment and medium

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
C14 Grant of patent or utility model
GR01 Patent grant
CF01 Termination of patent right due to non-payment of annual fee
CF01 Termination of patent right due to non-payment of annual fee

Granted publication date: 20160217

Termination date: 20170903