AU2016277548A1 - A smart home control method based on emotion recognition and the system thereof - Google Patents

A smart home control method based on emotion recognition and the system thereof

Info

Publication number
AU2016277548A1
Authority
AU
Australia
Prior art keywords
emotion
emotion recognition
recognition result
user
commendatory
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Abandoned
Application number
AU2016277548A
Inventor
Chunyuan FU
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shenzhen Skyworth RGB Electronics Co Ltd
Original Assignee
Shenzhen Skyworth RGB Electronics Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shenzhen Skyworth RGB Electronics Co Ltd filed Critical Shenzhen Skyworth RGB Electronics Co Ltd
Publication of AU2016277548A1 publication Critical patent/AU2016277548A1/en
Legal status: Abandoned


Classifications

    • G - PHYSICS
    • G10 - MUSICAL INSTRUMENTS; ACOUSTICS
    • G10L - SPEECH ANALYSIS OR SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING; SPEECH OR AUDIO CODING OR DECODING
    • G10L15/00 - Speech recognition
    • G10L15/22 - Procedures used during a speech recognition process, e.g. man-machine dialogue
    • G - PHYSICS
    • G05 - CONTROLLING; REGULATING
    • G05B - CONTROL OR REGULATING SYSTEMS IN GENERAL; FUNCTIONAL ELEMENTS OF SUCH SYSTEMS; MONITORING OR TESTING ARRANGEMENTS FOR SUCH SYSTEMS OR ELEMENTS
    • G05B15/00 - Systems controlled by a computer
    • G05B15/02 - Systems controlled by a computer electric
    • G - PHYSICS
    • G05 - CONTROLLING; REGULATING
    • G05B - CONTROL OR REGULATING SYSTEMS IN GENERAL; FUNCTIONAL ELEMENTS OF SUCH SYSTEMS; MONITORING OR TESTING ARRANGEMENTS FOR SUCH SYSTEMS OR ELEMENTS
    • G05B19/00 - Programme-control systems
    • G05B19/02 - Programme-control systems electric
    • G05B19/418 - Total factory control, i.e. centrally controlling a plurality of machines, e.g. direct or distributed numerical control [DNC], flexible manufacturing systems [FMS], integrated manufacturing systems [IMS], computer integrated manufacturing [CIM]
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F40/00 - Handling natural language data
    • G06F40/20 - Natural language analysis
    • G06F40/237 - Lexical tools
    • G06F40/242 - Dictionaries
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F40/00 - Handling natural language data
    • G06F40/30 - Semantic analysis
    • G - PHYSICS
    • G10 - MUSICAL INSTRUMENTS; ACOUSTICS
    • G10L - SPEECH ANALYSIS OR SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING; SPEECH OR AUDIO CODING OR DECODING
    • G10L15/00 - Speech recognition
    • G10L15/08 - Speech classification or search
    • G10L15/18 - Speech classification or search using natural language modelling
    • G10L15/1815 - Semantic context, e.g. disambiguation of the recognition hypotheses based on word meaning
    • G - PHYSICS
    • G10 - MUSICAL INSTRUMENTS; ACOUSTICS
    • G10L - SPEECH ANALYSIS OR SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING; SPEECH OR AUDIO CODING OR DECODING
    • G10L15/00 - Speech recognition
    • G10L15/26 - Speech to text systems
    • G - PHYSICS
    • G10 - MUSICAL INSTRUMENTS; ACOUSTICS
    • G10L - SPEECH ANALYSIS OR SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING; SPEECH OR AUDIO CODING OR DECODING
    • G10L17/00 - Speaker identification or verification
    • G10L17/06 - Decision making techniques; Pattern matching strategies
    • G - PHYSICS
    • G10 - MUSICAL INSTRUMENTS; ACOUSTICS
    • G10L - SPEECH ANALYSIS OR SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING; SPEECH OR AUDIO CODING OR DECODING
    • G10L17/00 - Speaker identification or verification
    • G10L17/22 - Interactive procedures; Man-machine interfaces
    • G - PHYSICS
    • G10 - MUSICAL INSTRUMENTS; ACOUSTICS
    • G10L - SPEECH ANALYSIS OR SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING; SPEECH OR AUDIO CODING OR DECODING
    • G10L25/00 - Speech or voice analysis techniques not restricted to a single one of groups G10L15/00 - G10L21/00
    • G10L25/48 - Speech or voice analysis techniques not restricted to a single one of groups G10L15/00 - G10L21/00 specially adapted for particular use
    • G10L25/51 - Speech or voice analysis techniques not restricted to a single one of groups G10L15/00 - G10L21/00 specially adapted for particular use for comparison or discrimination
    • G10L25/63 - Speech or voice analysis techniques not restricted to a single one of groups G10L15/00 - G10L21/00 specially adapted for particular use for comparison or discrimination for estimating an emotional state
    • G - PHYSICS
    • G05 - CONTROLLING; REGULATING
    • G05B - CONTROL OR REGULATING SYSTEMS IN GENERAL; FUNCTIONAL ELEMENTS OF SUCH SYSTEMS; MONITORING OR TESTING ARRANGEMENTS FOR SUCH SYSTEMS OR ELEMENTS
    • G05B2219/00 - Program-control systems
    • G05B2219/20 - Pc systems
    • G05B2219/26 - Pc applications
    • G05B2219/2642 - Domotique, domestic, home control, automation, smart house
    • G - PHYSICS
    • G10 - MUSICAL INSTRUMENTS; ACOUSTICS
    • G10L - SPEECH ANALYSIS OR SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING; SPEECH OR AUDIO CODING OR DECODING
    • G10L15/00 - Speech recognition
    • G10L15/22 - Procedures used during a speech recognition process, e.g. man-machine dialogue
    • G10L2015/223 - Execution procedure of a spoken command

Abstract

A SMART HOME CONTROL METHOD BASED ON EMOTION RECOGNITION AND THE SYSTEM THEREOF

The present invention discloses a smart home control method based on emotion recognition and the system thereof. The method comprises: acquiring a user's voice information, then performing emotion recognition on the speech tone of the voice information and generating a first emotion recognition result; converting the voice information into text information, then performing emotion recognition on the semantics of the text information and generating a second emotion recognition result; generating a user emotion recognition result from the first and second emotion recognition results according to a preset determination method for emotion recognition results; and, based on the user emotion recognition result, controlling each smart home device to perform a corresponding operation. By automatically controlling the smart home devices according to an analysis of the user's current mood and changing the surrounding environment, the method achieves a relatively high degree of intelligence. In addition, it adopts an integrated method combining speech tone recognition with semantic emotion analysis to further improve the accuracy of the emotion recognition.

Description

A SMART HOME CONTROL METHOD BASED ON EMOTION RECOGNITION AND THE SYSTEM THEREOF

CROSS-REFERENCES TO RELATED APPLICATIONS

This application is a national stage application of PCT Patent Application No. PCT/CN2016/070270, filed on Jan. 6, 2016, which claims priority to Chinese Patent Application No. 201510799123.0, filed on Nov. 18, 2015, the contents of all of which are incorporated herein by reference.
FIELD OF THE INVENTION
The present invention relates to the field of smart home technology, in particular to a smart home control method based on emotion recognition and the system thereof.
BACKGROUND
Existing smart home control methods mainly consist of a user logging in on a phone or computer and sending instructions to the intelligent devices at home, which then execute the corresponding user instructions. Alternatively, voice control may be applied: for example, the user says "turn on TV" to a microphone in the phone, and the phone sends a voice instruction to a smart TV (or an intelligent control device) to turn the TV on. Currently, there are also image control devices, which control the operation of the intelligent devices through the user's different expressions, recognized by face recognition in an image recognition technology.
In the existing smart home control methods, logging in on a cell phone or another terminal to operate the intelligent devices is troublesome and time-consuming. Sometimes a user name and a password must be entered, which is inconvenient for the elderly to operate, and the security of the overall operation is not high.
Also, the existing methods still need a clear instruction from the user before they can complete an operation. The intelligence of such control methods is insufficient, and there is no way to operate the intelligent devices automatically. These operation methods are not truly intelligent: they can neither process a relatively ambiguous instruction from the user, nor detect the user's mood or feeling and then adjust the home environment intelligently.
Even when a relatively advanced face recognition method is adopted for control, limited by the bottleneck of image-based face recognition technology, it is very difficult to analyze and capture in real time and obtain a clear image of the user's face.
Therefore, the prior art still awaits further development.
BRIEF SUMMARY OF THE DISCLOSURE
According to the above described defects, the purpose of the present invention is to provide a smart home control method based on emotion recognition and the system thereof, in order to solve the problems in the prior art that the intelligence of existing control methods is insufficient and that they are inconvenient for the user to operate.
In order to achieve the above mentioned goals, the technical solution of the present invention to solve the technical problem is as follows: a smart home control method based on emotion recognition, wherein, the said method comprises: acquiring a voice information from a user, performing an emotion recognition for a speech tone on the said voice information and generating a first emotion recognition result; converting the said voice information into a text information, then performing an emotion recognition for a semantics of the said text information and generating a second emotion recognition result; based on the said first emotion recognition result and the said second emotion recognition result, generating a user's emotion recognition result according to a preset determination method for emotion recognition result; and, based on the said user's emotion recognition result, controlling each smart home device to perform a corresponding operation.
In the said smart home control method based on emotion recognition, the step of acquiring a voice information from a user, performing an emotion recognition for a speech tone on the said voice information and generating a first emotion recognition result specifically comprises: after obtaining the user's voice information, matching the speech tones of the said voice information against a Chinese emotional speech database for the detection of emotion variations, and thereby generating the said first emotion recognition result.
In the said smart home control method based on emotion recognition, the step of performing an emotion recognition for a semantics of the said text information and generating a second emotion recognition result, after converting the said voice information into a text information, specifically comprises: selecting a plurality of commendatory words acting as seeds and a plurality of derogatory words acting as seeds, and generating an emotion dictionary; calculating the similarity between the words in the said text information and the commendatory-seed-words together with the derogatory-seed-words in the said emotion dictionary, respectively; and generating the said second emotion recognition result through a preset emotion recognition method for semantics, according to the said word similarity.
In the said smart home control method based on emotion recognition, the step of calculating a similarity between the words in the said text information and the commendatory-seed-words together with the derogatory-seed-words in the said emotion dictionary, respectively, specifically comprises: based on a calculation method for semantic similarity, calculating respectively the word similarity between the words in the said text information and the said commendatory-seed-words, as well as the word similarity between the words in the said text information and the said derogatory-seed-words.
In the said smart home control method based on emotion recognition, the step of generating the said second emotion recognition result through a preset emotion recognition method for semantics, according to the said word similarity, specifically comprises:
calculating a word emotion tendency value through a word emotion tendency calculation formula:

QG(w) = \frac{\sum_{p=1}^{M} similarity(K_p, w)}{M} - \frac{\sum_{n=1}^{N} similarity(K_n, w)}{N}

wherein, w denotes a word in the text information, K_p represents a commendatory-seed-word, M denotes the number of the commendatory-seed-words, K_n represents a derogatory-seed-word, N denotes the number of the derogatory-seed-words, and QG(w) indicates the word emotional tendency score; similarity(K_p, w) denotes the word similarity degree between the word and the commendatory-seed-words; similarity(K_n, w) denotes the word similarity degree between the word and the derogatory-seed-words. When the said word emotional tendency score is larger than a preset threshold, the word in the text information is determined to have a commendatory emotion; when the said word emotional tendency score is less than the preset threshold, the word in the text information is determined to have a derogatory emotion.
In the said smart home control method based on emotion recognition, after the step of generating a user's emotion recognition result according to a preset determination method for emotion recognition result, based on the said first emotion recognition result and the said second emotion recognition result, and controlling each smart home device to perform the corresponding operation based on the said user's emotion recognition result, the method further comprises: based on a preset database of speech features, matching the voice features of the said user's voice information to determine the user's identity.
In the said smart home control method based on emotion recognition, the said first emotion recognition result comprises five levels of emotion types: a high-level commendatory emotion, a low-level commendatory emotion, a neutral emotion, a high-level derogatory emotion, and a low-level derogatory emotion; the emotion types included in the said second emotion recognition result are the same as those included in the first emotion recognition result.
The said smart home control method based on emotion recognition further comprises: when the said first emotion recognition result is a commendatory emotion while the second emotion recognition result is a derogatory emotion, or when the said first emotion recognition result is a derogatory emotion while the second emotion recognition result is a commendatory emotion, recollecting the voice information of the current user; and redoing the speech tone analysis and semantic emotion analysis for the current user's voice information, generating a new first emotion recognition result and a new second emotion recognition result.
In the said smart home control method based on emotion recognition, the said preset emotion recognition result determination method specifically comprises: when the said first emotion recognition result and the second emotion recognition result are different levels of commendatory emotion, determining the current user emotion recognition result as a low-level commendatory emotion; when the first emotion recognition result and the second emotion recognition result are different levels of derogatory emotion, determining the current user emotion recognition result as a low-level derogatory emotion; when one of the first emotion recognition result and the second emotion recognition result is a neutral emotion while the other is a derogatory or commendatory emotion, determining the current user emotion recognition result as the said commendatory or derogatory emotion.

A smart home control system based on emotion recognition, wherein, the said control system comprises: a first recognition and acquisition module, applied to acquiring a voice information from a user and generating the first emotion recognition result after the speech tone emotion recognition of the said voice information; a second recognition and acquisition module, applied to converting the said voice information into text information and generating the second emotion recognition result after the semantic emotion recognition of the said text information; and a comprehensive emotion determination and control module, applied to generating the user emotion recognition result according to a preset determination method for emotion recognition result, based on the first emotion recognition result and the second emotion recognition result, and controlling each smart home device to perform the corresponding operation according to the said user's emotion recognition result.
Benefits: the present invention provides a smart home control method based on emotion recognition and the system thereof. It automatically controls the smart home devices based on recognition of the user's emotion: by analyzing the user's voice, intonation and sentence content while the user chats with or instructs the equipment, it distinguishes the user's current mood among anger, impatience, neutral, joy and happiness, automatically controls the smart home devices accordingly, and improves the user's mood by changing the surrounding environmental conditions. It has a relatively high degree of intelligence, being able to tap the implicit information in the user's speech. In addition, it adopts an integrated method combining speech tone recognition with semantic emotion analysis to further improve the accuracy of the emotion recognition.
BRIEF DESCRIPTION OF THE DRAWINGS

FIG. 1 illustrates a flowchart of a preferred embodiment of the smart home control method based on emotion recognition as provided in the present invention;

FIG. 2 illustrates a flowchart of a specific embodiment of step S2 of the smart home control method based on emotion recognition as provided in the present invention;

FIG. 3 illustrates a flowchart of a specific embodiment of the smart home control system based on emotion recognition as provided in the present invention;

FIG. 4 illustrates a flowchart of a specific embodiment of the voice tone recognition unit in the smart home control system based on emotion recognition as provided in the present invention;

FIG. 5 illustrates a flowchart of a specific embodiment of the text emotion recognition unit in the smart home control system based on emotion recognition as provided in the present invention;

FIG. 6 illustrates a functional block diagram of a preferred embodiment of the smart home control system based on emotion recognition as provided in the present invention.
DETAILED DESCRIPTION OF EMBODIMENTS
The invention provides a smart home control method based on emotion recognition and the system thereof. In order to make the purpose, technical solutions and advantages of the present invention clearer and more explicit, further detailed descriptions of the present invention are stated here, with reference to the attached drawings and some embodiments of the present invention. It should be understood that the detailed embodiments described here are used only to explain the present invention, not to limit it.
As shown in FIG. 1, a flowchart of a preferred embodiment of the smart home control method based on emotion recognition as provided in the present invention is illustrated. It should be noted that the method described in the present invention may be applied to any suitable and manageable smart home system with a certain computing power, to improve the intelligence of the interaction between the smart home system and the user.
Some common improvements to the method described in the present invention, or applications of the present invention to other fields of interaction between intelligent devices and users, including mobile phones, tablet computers, etc., also belong to substitutes or variations of conventional technical means in the field. A person skilled in the art may apply the method and system of the present invention to other suitable areas of interaction between users and intelligent devices after some common changes, to improve the intelligence level of the intelligent devices.
The said method comprises:

S1. acquiring a voice information from a user, then performing an emotion recognition for a speech tone on the said voice information and generating a first emotion recognition result.
The said voice information may be a user's voice instruction, a user's voice conversation, or other suitable audio information that can be collected by the device.
Prior to the emotion recognition, a plurality of pre-treatments, such as Gaussian filtering and others, may also be performed on the audio information, to reduce the processing difficulty of the subsequent emotion recognition.

S2. After converting the said voice information into text information, performing an emotion recognition for a semantics of the said text information and generating a second emotion recognition result.
Specifically, the said first emotion recognition result may comprise five levels of emotion types, including a high-level commendatory emotion and a low-level commendatory emotion, a neutral emotion, as well as a high-level derogatory emotion and a low-level derogatory emotion; the said second emotion recognition result includes the same emotional types as the first emotion recognition result.
For the sake of simplicity, five terms are used here: "anger", "impatience", "neutral", "joy" and "happiness", corresponding to these five different levels of emotion. Of course, in order to further refine the emotion recognition result, the above said emotion types may be further subdivided or simplified.

S3. Generating a user's emotion recognition result according to a preset determination method for emotion recognition result, based on the said first emotion recognition result and the said second emotion recognition result; then controlling the respective smart home devices to perform the corresponding operations, according to the said user's emotion recognition result.
The said smart home appliances may comprise a plurality of suitable household devices, including a television, an air conditioner, etc.; during the use of the smart home appliances, some devices, such as an air purifier, a coffee maker, etc., may also be added or removed.
Since emotion recognition is a relatively complex problem for computers, in step S3, integrating the first emotion recognition result obtained in step S1 with the second emotion recognition result obtained in step S2 may avoid the error caused by a single emotion recognition algorithm and obtain a more accurate result, so that the finally generated user emotion recognition result does not differ significantly from the real situation (as would happen, for example, if the user's emotion were identified as commendatory when it is in fact significantly derogatory).
Specifically, the said corresponding operation refers to an operation corresponding to the current mood of the user. For example, when the user's current mood is identified as impatience, the smart home system will automatically turn on soothing music, switch on an air purifier, and, after a temperature sensor detects that the current room temperature is relatively high, send an instruction to the air conditioner to lower the room temperature a little, in order to calm the user down.
At the same time, tips for relieving irritability and keeping healthy will be shown on the TV screen according to the current season and climate; the system may also automatically send an instruction to make a cup of milk for the user, and may even change the color of a wall and the indoor light to adjust the user's mood.
When the mood is identified as happiness, the system will automatically broadcast news and recommended movies for the user; it may also suggest that the user do some aerobic exercise to maintain good health; or it may play dynamic music, and so on. If the user's current mood is identified as neutral, the smart home system will automatically play a small joke, recommend comedy movies, or make a coffee for the user, and so on. The above said actions of the intelligent devices are performed in a coherent manner and operated by the control system of the smart home. The specific action is determined by the actual situation (for example, the types of the intelligent devices, the set of manageable intelligent devices, etc.). Of course, a user-defined method may also be adopted, freely combining events according to the user's own habits.
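For illustration only, such an emotion-to-action mapping can be organized as a small dispatch table. The following Python sketch is one possible structure under stated assumptions: the device names, action strings and the send_command callback are hypothetical and not part of the patent, which leaves the concrete mapping to user presets and the installed devices.

```python
# Hypothetical emotion-to-action presets; all names are illustrative only.
EMOTION_ACTIONS = {
    "impatience": [("speaker", "play_soothing_music"),
                   ("air_purifier", "turn_on"),
                   ("air_conditioner", "lower_temperature")],
    "happiness":  [("tv", "show_news_and_recommended_movies"),
                   ("speaker", "play_dynamic_music")],
    "neutral":    [("tv", "recommend_comedy"),
                   ("coffee_maker", "brew_coffee")],
}

def control_devices(emotion: str, send_command) -> None:
    """Dispatch the preset actions for a recognized emotion through a
    caller-supplied send_command(device, action) callback."""
    for device, action in EMOTION_ACTIONS.get(emotion, []):
        send_command(device, action)
```

A user-defined configuration, as mentioned above, would simply replace the entries of this table with the user's own combinations of events.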
Specifically, the said step of acquiring a voice information from a user, then performing an emotion recognition for the speech tone of the said voice information and generating a first emotion recognition result (that is, the speech tone emotion recognition), specifically comprises: after acquiring the voice information of the user, matching the voice intonation of the said voice information against a Chinese emotional speech database for the detection of emotion variations, and thereby generating the first emotion recognition result. A detailed introduction to the said Chinese emotional speech database for the detection of emotion variations (i.e., the CESD speech database) may be found in the paper "Chinese emotional speech database for the detection of emotion variations" by Lu Xu and Mingxing Xu, from NCMMSC2009.
The CESD speech database has recorded 1200 utterances in the form of dialogues between a man and a woman, with 20 emotional variation modes built from 5 basic emotions: anger, impatience, neutral, joy, and happiness. Besides the utterances, the database further includes corresponding label files marking silence and effective speech segments, emotional classes, emotional variation segments, and emotional qualities, as well as feature files with acoustic features, stored together in the same database. It supports a fairly good emotional recognition of a user's speech.
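As an illustration of this matching step, the minimal sketch below assumes each utterance has already been reduced to a fixed-length acoustic feature vector (for example, pitch and energy statistics) and that the CESD data supplies labelled reference vectors; the nearest-neighbour rule and the distance threshold are assumptions for illustration, not the patent's prescribed algorithm.

```python
import math

def euclidean(a, b) -> float:
    """Euclidean distance between two equal-length feature vectors."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def match_tone(features, reference_db, max_distance=1.0):
    """Return the emotion label of the closest labelled reference vector,
    or None when nothing is close enough; the None case corresponds to the
    failed match that triggers re-acquisition of the user's speech
    (see steps S210-S240 later in this description)."""
    best_label, best_dist = None, float("inf")
    for label, ref in reference_db:   # [(emotion_label, feature_vector), ...]
        d = euclidean(features, ref)
        if d < best_dist:
            best_label, best_dist = label, d
    return best_label if best_dist <= max_distance else None
```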
More specifically, as shown in FIG. 2, the said step of performing an emotion recognition for a semantics of the said text information and generating a second emotion recognition result, after converting the said voice information into a text information (i.e., the semantic emotion recognition), further includes:

S21. selecting a plurality of commendatory words acting as seeds and a plurality of derogatory words acting as seeds, and generating an emotion dictionary. The said emotion dictionary mainly includes two categories: an emotional words dictionary and an emotional phrases dictionary.
Wherein, the said emotional words dictionary is composed of words with emotional characteristics; words such as "love", "hate" and others all belong to the vocabulary of the emotional words dictionary. The emotional phrases dictionary is composed of phrases with emotional characteristics; phrases such as "in great delights", "have one's nose in the air" and other terms all belong to the emotional phrases dictionary.
In the emotion dictionary, terms (including emotional words and emotional phrases) are usually divided into three categories: commendatory (e.g., beauty, happy, etc.), derogatory (e.g., ugly, depressed, etc.) and neutral (e.g., computer, work, etc.).

S22. calculating the similarity between the words in the said text information and the commendatory-seed-words together with the derogatory-seed-words in the said emotion dictionary, respectively. The emotional weight (also known as the emotional tendency) of an emotional word is closely related to the closeness between the word and the seed words (i.e., the similarity between words).
The term "seed-word" used here denotes a very significant, strong, and representative term. It may be considered that the closer the relationship between a word and a derogatory-seed-word is, the more significant the derogatory tendency of the word is. Similarly, the closer the relationship between a word and a commendatory-seed-word is, the more significant the commendatory tendency of the word is.
Specifically, the word similarity between the words in the said text information and the said commendatory-seed-words, as well as the word similarity between the words in the said text information and the said derogatory-seed-words, may be calculated according to a semantic similarity calculation method. The word similarity calculation in HowNet is based on the original meaning (the sememes) of the word. HowNet composes a tree from the sememes of a same category, thus converting the sememe similarity calculation into a calculation of the semantic distance between sememes in the tree. Assuming that the path distance of two sememes in this hierarchical system is d, the semantic similarity of the two sememes is:

Sim(p_1, p_2) = \frac{\alpha}{d + \alpha}

wherein p_1, p_2 stand for the sememes, and \alpha is an adjustable parameter. A word may have several sememes in HowNet; when the word similarity is calculated on this basis, the maximum similarity over the sememes is taken as the similarity of the words. For two Chinese words w_1, w_2, assuming that each of them has more than one sememe, the sememes of w_1 being s_{11}, s_{12}, ..., s_{1n} and the sememes of w_2 being s_{21}, s_{22}, ..., s_{2m}, the similarity calculation formula is as follows:

Similarity(w_1, w_2) = \max_{i,j} Sim(s_{1i}, s_{2j})    (1)
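A minimal sketch of the two formulas above, assuming the caller supplies the sememe inventory of each word and a distance function giving the path distance d between two sememes in the HowNet tree; the value 1.6 for the adjustable parameter \alpha is a commonly used choice in HowNet-based similarity work and is an assumption here, since the patent leaves the parameter open.

```python
ALPHA = 1.6  # adjustable parameter "alpha"; 1.6 is a common choice (assumption)

def sememe_similarity(d: float, alpha: float = ALPHA) -> float:
    """Sim(p1, p2) = alpha / (d + alpha) for sememes at path distance d."""
    return alpha / (d + alpha)

def word_similarity(sememes1, sememes2, distance) -> float:
    """Formula (1): the maximum Sim over all sememe pairs of the two words.
    distance(s1, s2) must return the path distance in the sememe tree."""
    return max(
        sememe_similarity(distance(s1, s2))
        for s1 in sememes1
        for s2 in sememes2
    )
```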
In order to correspond to the above said five levels of emotion types, an emotional polarity may be defined for each emotion word; that is, the emotional polarity is divided into two levels, strong and weak. The emotional polarity indicates the semantic similarity degree between words: the higher the similarity, the stronger the polarity, and vice versa.

S23. generating the said second emotion recognition result through a preset emotion recognition method for semantics, according to the said word similarity.
More specifically, the step (S23) of generating the said second emotion recognition result through a preset emotion recognition method for semantics, according to the said word similarity, specifically includes:
calculating a word emotion tendency value through a word emotion tendency calculation formula:

QG(w) = \frac{\sum_{p=1}^{M} similarity(K_p, w)}{M} - \frac{\sum_{n=1}^{N} similarity(K_n, w)}{N}    (2)

wherein, w denotes a word in the text information, K_p represents a commendatory-seed-word, M denotes the number of the commendatory-seed-words, K_n represents a derogatory-seed-word, N denotes the number of the derogatory-seed-words, and QG(w) indicates the word emotional tendency score; similarity(K_p, w) denotes the word similarity degree between the word and the commendatory-seed-words; similarity(K_n, w) denotes the word similarity degree between the word and the derogatory-seed-words. N and M are both positive integers, which may be equal or unequal.
When the said word emotional tendency score is larger than a preset threshold, the word in the text information is determined to have a commendatory emotion; when the said word emotional tendency score is less than the preset threshold, the word in the text information is determined to have a derogatory emotion.
Further, the commendatory words are divided into strong and weak levels according to their values within [0, 1], and the derogatory words are divided into strong and weak levels according to their values within [-1, 0], corresponding to the above said five levels of emotion types (anger, impatience, neutral, joy, and happiness) respectively. For example, if the polarity value of a word is larger than 0.5, it is happiness; if less than 0.5, it is joy. If the polarity value is larger than -0.5, it is impatience; if less than -0.5, it is anger.
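For illustration, the following sketch combines formula (2) with the level mapping just described. The word_sim argument stands for the HowNet-style word similarity sketched earlier; the seed lists are supplied by the caller, and treating an exact zero score as neutral is an assumption, since the text only fixes the 0.5 and -0.5 boundaries.

```python
def emotion_tendency(word_sim, word, pos_seeds, neg_seeds) -> float:
    """Formula (2): QG(w) = mean similarity to the commendatory seeds
    minus mean similarity to the derogatory seeds, giving a value in [-1, 1]."""
    pos = sum(word_sim(word, k) for k in pos_seeds) / len(pos_seeds)
    neg = sum(word_sim(word, k) for k in neg_seeds) / len(neg_seeds)
    return pos - neg

def emotion_level(score: float) -> str:
    """Map QG(w) to the five levels described in the text; the handling of
    an exact zero as neutral is an assumption."""
    if score > 0.5:
        return "happiness"   # strong commendatory
    if score > 0:
        return "joy"         # weak commendatory
    if score == 0:
        return "neutral"
    if score > -0.5:
        return "impatience"  # weak derogatory
    return "anger"           # strong derogatory
```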
Preferably, after the said step S3, the method further comprises: based on a preset database of speech features, matching the voice features of the said user's voice information to determine the user's identity.
That is, a voice features database is constructed by pre-recording voice samples and extracting a unique feature for each sample; the voice to be detected is then matched against the features in the database, and the identity of the speaker is verified by analysis and calculation. The above said voiceprint-based user verification is a user-friendly operation that requires no memorization of a user ID and password. It also has better security and may ensure accurate identification of the user's identity.
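A minimal sketch of such a voiceprint check, assuming voice features are fixed-length vectors compared by cosine similarity; the 0.8 acceptance threshold and the fallback to enrolment are illustrative assumptions, not values given by the patent.

```python
import math

def cosine(a, b) -> float:
    """Cosine similarity of two equal-length feature vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb) if na and nb else 0.0

def identify_user(features, enrolled, threshold=0.8):
    """Return the id of the best-matching enrolled voiceprint template,
    or None so the caller can fall back to enrolling a new user
    (compare step S100 of the system flow below)."""
    best_user, best_score = None, -1.0
    for user_id, template in enrolled.items():
        score = cosine(features, template)
        if score > best_score:
            best_user, best_score = user_id, score
    return best_user if best_score >= threshold else None
```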
In a specific embodiment of the invention, the said method further comprises: when the said first emotion recognition result is a commendatory emotion and the second emotion recognition result is a derogatory emotion, or when the said first emotion recognition result is a derogatory emotion and the second emotion recognition result is a commendatory emotion, the voice information of the current user shall be recollected; the speech tone analysis (S1) and the semantic emotion analysis (S2) are then redone for the current user's voice information, generating a new first emotion recognition result and a new second emotion recognition result.
Due to the complexity of emotion recognition, the two emotion recognition results may contradict each other. In this case, in order to ensure the accuracy of the recognition results, recollecting the data and redoing the identification is the better approach.
Specifically, the emotion recognition result determination method preset in step S3 is stated as follows: when the said first emotion recognition result and the second emotion recognition result are different levels of commendatory emotion, the current user emotion recognition result is determined as a low-level commendatory emotion; when the first emotion recognition result and the second emotion recognition result are different levels of derogatory emotion, the current user emotion recognition result is determined as a low-level derogatory emotion; when one of the first emotion recognition result and the second emotion recognition result is a neutral emotion and the other is a derogatory or commendatory emotion, the current user emotion recognition result is determined as that commendatory or derogatory emotion.
Altogether, when both the first emotion recognition result and the second emotion recognition result have an emotional tendency (commendatory or derogatory), a degradation method is adopted, choosing the lower emotion level; and when one of the two is a neutral result, the result with an emotional tendency is chosen.
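The degradation rule can be written compactly. The sketch below reproduces it, returning None for the contradictory case in which the voice must be re-collected; the numeric polarity and strength encodings are illustrative assumptions, chosen so that the function reproduces Table 1 below.

```python
# Numeric encodings of the five levels: polarity (+1 commendatory,
# -1 derogatory, 0 neutral) and strength (2 = high level, 1 = low level).
POLARITY = {"anger": -1, "impatience": -1, "neutral": 0, "joy": 1, "happiness": 1}
STRENGTH = {"anger": 2, "impatience": 1, "neutral": 0, "joy": 1, "happiness": 2}

def fuse(first: str, second: str):
    """Fuse the two recognition results by the degradation method."""
    p1, p2 = POLARITY[first], POLARITY[second]
    if p1 == 0:
        return second        # one neutral result: keep the tendentious one
    if p2 == 0:
        return first
    if p1 != p2:
        return None          # contradiction: re-collect the user's voice
    # Same tendency: degrade to the weaker (lower) level.
    return first if STRENGTH[first] <= STRENGTH[second] else second
```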
After applying this method, the determination results corresponding to the above said five emotion types are shown in Table 1 as follows:
First emotion           Second emotion          User emotion
recognition result      recognition result      recognition result
--------------------    --------------------    --------------------
anger                   anger                   anger
anger                   impatience              impatience
impatience              impatience              impatience
happiness               happiness               happiness
happiness               joy                     joy
joy                     joy                     joy
anger                   neutral                 anger
impatience              neutral                 impatience
happiness               neutral                 happiness
joy                     neutral                 joy
neutral                 neutral                 neutral

Table 1
FIG. 3 illustrates a flowchart of a specific embodiment of the smart home control system based on emotion recognition as provided in the present invention. As shown in FIG. 3, the specific embodiment comprises the following steps:

S100. when the user inputs an instruction or chat content by voice, the smart home system verifies the user's identity through a voiceprint recognition unit while chatting with the user after receiving the user's voice. If it is a legitimate user, go to step S200.
Otherwise, the system records the user's voiceprint information through chatting with the user, and registers the user as legitimate.

S200. The voiceprint recognition unit sends the voice to the voice intonation recognition unit, which extracts the voice intonation features to determine the user's current emotion state (that is, an emotion recognition result), and inputs the recognition result into the comprehensive emotion determination module. The module then sends the user's voice to a voice-to-text module for conversion into text.

S300. The text is then input into the semantic recognition unit for semantic emotion recognition, and the recognition result is transmitted to the comprehensive emotion determination unit.

S400. The comprehensive emotion determination unit determines the current emotion state of the user according to the degradation selection method, and then sends the emotion state to the intelligent device control unit.

S500. After the intelligent device control unit receives the user's emotion result, the smart home devices are automatically controlled according to the user's preset information, the current environmental information, and other information.
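For illustration, the S100-S500 flow can be wired together as follows, reusing the sketches given earlier in this description; the system object and its methods (feature extraction, speech-to-text, semantic emotion recognition, enrolment, command sending) are hypothetical stand-ins for the corresponding units, not an interface defined by the patent.

```python
def handle_utterance(audio, system):
    # S100: verify identity by voiceprint; enrol unknown speakers.
    features = system.extract_voice_features(audio)
    user = identify_user(features, system.enrolled)
    if user is None:
        system.enroll_new_user(features)
        return
    # S200: tone-based emotion recognition on the extracted features.
    tone_result = match_tone(features, system.cesd_refs)
    if tone_result is None:
        return  # no tone match: re-acquire the user's speech (S210)
    # S300: speech-to-text, then semantic emotion recognition.
    text = system.speech_to_text(audio)
    text_result = system.semantic_emotion(text)
    # S400: fuse the two results by the degradation rule.
    emotion = fuse(tone_result, text_result)
    if emotion is None:
        return  # contradictory results: wait for new voice input
    # S500: drive the devices according to the recognized emotion.
    control_devices(emotion, system.send_command)
```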
Wherein, as shown in FIG. 4, the specific process of the voice intonation recognition includes:

S210. analyzing the user's audio files and extracting the voice intonation features from the audio file.

S220. comparing the features with the data characteristics in the CESD voice database.

S230. determining whether they are the same as a certain emotion characteristic in the database; if so, go to step S240.
If not (that is, the features fail to match any emotional characteristic in the database), the user's speech features are re-acquired and the process goes back to step S210.

S240. obtaining the user's emotion recognition result.
Further, as shown in FIG. 5, the specific process of the text emotion recognition unit includes:

S310. matching the words in the text information with the words in the emotion dictionary database; if there is a commendatory word, go to S320; if there is a derogatory word, go to S330; if none exists, the sentence is determined as the neutral emotion (S350).

S320. determining the polarity value of the emotion word; if the polarity value is larger than 0.5, it is happiness; if the polarity value is less than 0.5, it is joy.

S330. determining the polarity value of the emotion word; if the polarity value is larger than -0.5, it is impatience; if the polarity value is less than -0.5, it is anger.

S340. when there is a negative prefix or a negative word before the emotion word, the emotion recognition result is determined to be the opposite emotion type (for example, happiness corresponds to anger, and joy corresponds to impatience).
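A minimal sketch of steps S310-S350, including the negation flip of S340; the negation word list and the token-based matching are assumptions for illustration (a real implementation would use Chinese word segmentation and the emotion dictionary described above).

```python
NEGATIONS = {"not", "no", "never", "hardly"}   # assumed negation list

OPPOSITE = {"happiness": "anger", "anger": "happiness",
            "joy": "impatience", "impatience": "joy"}

def sentence_emotion(tokens, classify_word):
    """Scan the tokens for the first emotion word (S310); classify_word
    returns its emotion level by polarity value (S320/S330) or None for
    non-emotion words; a preceding negation flips the result (S340)."""
    for i, token in enumerate(tokens):
        emotion = classify_word(token)
        if emotion is None:
            continue
        if i > 0 and tokens[i - 1] in NEGATIONS:
            return OPPOSITE.get(emotion, emotion)
        return emotion
    return "neutral"  # S350: no emotion word found
```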
Based on the above said embodiments, the present invention further provides a smart home control system based on emotion recognition. As shown in FIG. 6, the said smart home control system based on emotion recognition includes: a first recognition and acquisition module 10, applied to acquiring a voice information from a user and generating the first emotion recognition result after the speech tone emotion recognition of the said voice information; a second recognition and acquisition module 20, applied to converting the said voice information into text information and generating the second emotion recognition result after the semantic emotion recognition of the said text information; and a comprehensive emotion determination and control module 30, applied to generating the user emotion recognition result according to a preset determination method for emotion recognition result, based on the first emotion recognition result and the second emotion recognition result, and controlling each smart home device to perform the corresponding operation according to the said user's emotion recognition result.
In specific implementations, as shown in FIG. 6, the said first recognition and acquisition module 10 may include a voiceprint recognition unit 100, applied to acquiring the user's voice information, and a voice intonation emotion recognition unit 200, applied to performing the voice intonation emotion recognition of the said voice information to generate the first emotion recognition result.
The said second recognition and acquisition module 20 may include a voice and text conversion unit 300, applied to converting the said voice information into the text information, and a semantic emotion recognition unit 400, applied to performing the semantic emotion recognition of the said text information to generate the second emotion recognition result.
The said comprehensive emotion determination and control module 30 may include a comprehensive emotion determination unit 500, applied to generating the user's emotion recognition result according to a predetermined emotion recognition result determination method, based on the said first emotion recognition result and the said second emotion recognition result, and an intelligent device control unit 600, applied to controlling each smart home device to perform the corresponding operation according to the said user emotion recognition result.
The present invention provides a smart home control method based on emotion recognition and the system thereof. It automatically controls the smart home devices based on recognition of the user's emotion: by analyzing the user's voice, intonation and sentence content while the user chats with or instructs the equipment, it distinguishes the user's current mood among angry, anxious, neutral, pleasant and happy states, automatically controls the smart home devices accordingly, and improves the user's mood by changing the surrounding environmental conditions. It has a relatively high degree of intelligence, being able to tap the implicit information in the user's voice. In addition, it adopts an integrated method combining speech tone recognition and semantic emotion analysis to further improve the accuracy of the emotion recognition.
It should be understood that the application of the present invention is not limited to the above examples; ordinary technical personnel in this field may improve or change it according to the above descriptions, and all such improvements and transformations shall belong to the scope of protection of the appended claims of the present invention.

Claims (15)

What is claimed is:
    1. A smart home control method based on emotion recognition, wherein, the method comprises: acquiring a voice information from a user, before performing an emotion recognition for a speech tone on the voice information and generating a first emotion recognition result; converting the voice information into a text information, then performing an emotion recognition for a semantics of the text information before generating a second emotion recognition result; based on the first emotion recognition result and the second emotion recognition result, a user’s emotion recognition result is generated according to a preset determination method for emotion recognition result; also, based on the user’s emotion recognition result, each smart home device is controlled to perform a corresponding operation.
  2. The smart home control method based on emotion recognition as claimed in claim 1, wherein, the step of: acquiring a voice information from a user, before performing an emotion recognition for a speech tone on the voice information and generating a first emotion recognition result, comprises specifically: after obtaining a user’s voice information, the speech tones of the voice information are matched according to a Chinese emotional speech database for the detection of emotion variations, and the first emotion recognition result is then generated.
  3. The smart home control method based on emotion recognition according to claim 1, wherein, after the step of: based on the first emotion recognition result and the second emotion recognition result, a user’s emotion recognition result is generated according to a preset determination method for emotion recognition result; also, based on the user’s emotion recognition result, control each smart home device to perform the corresponding operation, it further comprises: based on a preset database of speech features, matching the voice features of the user’s voice information to determine a user’s identity.
  4. The smart home control method based on emotion recognition, wherein, the first emotion recognition result comprises five levels of emotion types including a high-level commendatory emotion, a low-level commendatory emotion, a neutral emotion, and a high-level derogatory emotion, as well as a low-level derogatory emotion; the emotion types included in the second emotion recognition result are the same as that included in the first emotion recognition result.
  5. The smart home control method based on emotion recognition according to claim 4, wherein, the method further comprises: when the first emotion recognition result is a commendatory emotion, while the second emotion recognition result is a derogatory emotion or when the first emotion recognition result is a derogatory emotion, while the second emotion recognition result is a commendatory emotion, recollecting the voice information of the current user; redoing the speech tone analysis and semantic emotion analysis for the current user’s voice information, and generating a new first emotion recognition result and a new second emotion recognition result.
  6. The smart home control method based on emotion recognition according to claim 4, wherein, the preset emotion recognition result determination method comprises specifically: when the first emotion recognition result and the second emotion recognition result are different levels of commendatory emotion, determining the current user emotion recognition result as a low level commendatory emotion; when the first emotion recognition result and the second emotion recognition result are different levels of derogatory emotion, determining the current user emotion recognition result as a low level derogatory emotion; when one of the first emotion recognition result and the second emotion recognition result is a neutral emotion, while the other is a derogatory or commendatory emotion, determining the current user emotion recognition result as the said commendatory or derogatory emotion.
  7. A smart home control method based on emotion recognition, wherein, the method comprises: acquiring a voice information from a user, before performing an emotion recognition for a speech tone of the voice information and generating a first emotion recognition result; converting the voice information into a text information, then performing an emotion recognition for a semantics of the text information before generating a second emotion recognition result; based on the first emotion recognition result and the second emotion recognition result, a user’s emotion recognition result is generated according to a preset determination method for emotion recognition result; also, based on the user’s emotion recognition result, each smart home device is controlled to perform a corresponding operation; after converting the voice information into a text information, the step of performing an emotion recognition for a semantics of the text information before generating a second emotion recognition result, comprises specifically: selecting a plurality of commendatory words acting as seeds and a plurality of derogatory words acting as seeds, before generating an emotion dictionary; calculating a similarity between the words in the text information and the commendatory-seed-words together with the derogatory-seed-words in the emotion dictionary, respectively; generating the second emotion recognition result through a preset emotion recognition method for semantics, according to the word similarity.
  8. The smart home control method based on emotion recognition as claimed in claim 7, wherein, the step of calculating a similarity between the words in the text information and the commendatory-seed-words together with the derogatory-seed-words in the emotion dictionary, respectively, comprises specifically: based on a calculation method for semantic similarity, calculating respectively the word similarity between the words in the text information and the commendatory-seed-words, as well as the word similarity between the words in the text information and the derogatory-seed-words.
  9. The smart home control method based on emotion recognition according to claim 8, wherein, the step of: generating the second emotion recognition result through a preset emotion recognition method for semantics, according to the word similarity, comprises specifically: calculating a word emotion tendency value through a word emotion tendency calculation formula:

QG(w) = \frac{\sum_{p=1}^{M} similarity(K_p, w)}{M} - \frac{\sum_{n=1}^{N} similarity(K_n, w)}{N}
wherein, w denotes a word in the text information, K_p represents the commendatory-seed-word, M denotes a number of the commendatory-seed-words, K_n represents the derogatory-seed-word, N denotes a number of the derogatory-seed-words, QG(w) indicates a word emotional tendency score; similarity(K_p, w) denotes a word similarity degree between the words and the commendatory-seed-words; similarity(K_n, w) denotes a word similarity degree between the words and the derogatory-seed-words; when the word emotional tendency score is larger than a preset threshold, the word in the text information will be determined having a commendatory emotion; when the word emotional tendency score is less than a preset threshold, the word in the text information will be determined having a derogatory emotion.
  10. The smart home control method based on emotion recognition according to claim 7, wherein, the step of: acquiring a voice information from a user, before performing an emotion recognition for a speech tone on the voice information and generating a first emotion recognition result, comprises specifically: after acquiring a voice information of the user, based on a Chinese emotional speech database for the detection of emotion variations, the voice intonation of the said voice information is matched and thereby the first emotion recognition result is generated.
  11. The smart home control method based on emotion recognition according to claim 7, wherein, after the step of: based on the first emotion recognition result and the second emotion recognition result, a user’s emotion recognition result is generated according to a preset determination method for emotion recognition result; also, based on the user’s emotion recognition result, control each smart home device to perform the corresponding operation, it further comprises: based on a preset database of speech features, matching the voice features of the user’s voice information to determine an identity of the user.
  12. The smart home control method based on emotion recognition according to claim 7, wherein, the first emotion recognition result comprises five levels of emotion types including a high-level commendatory emotion and a low-level commendatory emotion, a neutral emotion, as well as a high-level derogatory emotion and a low-level derogatory emotion; the emotion types included in the second emotion recognition result are the same as that included in the first emotion recognition result.
  13. The smart home control method based on emotion recognition according to claim 12, wherein, the method further comprises: when the first emotion recognition result is a commendatory emotion, while the second emotion recognition result is a derogatory emotion, or when the first emotion recognition result is a derogatory emotion, while the second emotion recognition result is a commendatory emotion, recollecting the voice information of the current user; redoing the speech tone analysis and semantic emotion analysis for the current user’s voice information, and generating a new first emotion recognition result and a new second emotion recognition result.
  14. The smart home control method based on emotion recognition according to claim 12, wherein, the preset emotion recognition result determination method comprises specifically: when the first emotion recognition result and the second emotion recognition result are different levels of commendatory emotion, determining the current user emotion recognition result as a low level commendatory emotion; when the first emotion recognition result and the second emotion recognition result are different levels of derogatory emotion, determining the current user emotion recognition result as a low level derogatory emotion; when one of the first emotion recognition result and the second emotion recognition result is a neutral emotion, and the other is a derogatory or commendatory emotion, determining the current user emotion recognition result as the said commendatory or derogatory emotion.
  15. A smart home control system based on emotion recognition, wherein, the control system comprises: a first recognition and acquisition module, applied to acquiring a voice information from a user, generating the first emotion recognition result after the speech tone emotion recognition to the voice information; a second recognition and acquisition module, applied to converting the voice information into text information; then generating the second emotion recognition result after the semantic emotion recognition to the text information; a comprehensive emotion determination and control module, applied to generating the user emotion recognition result according to a preset determination method for emotion recognition result, based on the first emotion recognition result and the second emotion recognition result, and controlling each smart home device to perform the corresponding operation according to the user’s emotion recognition result.
AU2016277548A 2015-11-18 2016-01-06 A smart home control method based on emotion recognition and the system thereof Abandoned AU2016277548A1 (en)

Applications Claiming Priority (3)

Application Number Priority Date Filing Date Title
CN201510799123.0A CN105334743B (en) 2015-11-18 2015-11-18 A kind of intelligent home furnishing control method and its system based on emotion recognition
CN2015107991230 2015-11-18
PCT/CN2016/070270 WO2017084197A1 (en) 2015-11-18 2016-01-06 Smart home control method and system based on emotion recognition

Publications (1)

Publication Number Publication Date
AU2016277548A1 true AU2016277548A1 (en) 2017-06-01

Family

ID=55285356

Family Applications (1)

Application Number Title Priority Date Filing Date
AU2016277548A Abandoned AU2016277548A1 (en) 2015-11-18 2016-01-06 A smart home control method based on emotion recognition and the system thereof

Country Status (4)

Country Link
US (1) US10013977B2 (en)
CN (1) CN105334743B (en)
AU (1) AU2016277548A1 (en)
WO (1) WO2017084197A1 (en)

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110890088A * 2019-10-12 2020-03-17 中国平安财产保险股份有限公司 Voice information feedback method and device, computer equipment and storage medium
CN110890088B * 2019-10-12 2022-07-15 中国平安财产保险股份有限公司 Voice information feedback method and device, computer equipment and storage medium
CN113113047A (en) * 2021-03-17 2021-07-13 北京大米科技有限公司 Audio processing method and device, readable storage medium and electronic equipment

Families Citing this family (81)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107293292A * 2016-03-31 2017-10-24 深圳光启合众科技有限公司 Cloud-based device and operation method thereof
CN107590503A (en) * 2016-07-07 2018-01-16 深圳狗尾草智能科技有限公司 A kind of robot affection data update method and system
CN106203344A (en) * 2016-07-12 2016-12-07 北京光年无限科技有限公司 A kind of Emotion identification method and system for intelligent robot
WO2018023515A1 (en) * 2016-08-04 2018-02-08 易晓阳 Gesture and emotion recognition home control system
WO2018023523A1 (en) * 2016-08-04 2018-02-08 易晓阳 Motion and emotion recognizing home control system
WO2018023513A1 (en) * 2016-08-04 2018-02-08 易晓阳 Home control method based on motion recognition
CN106251866A (en) * 2016-08-05 2016-12-21 易晓阳 A kind of Voice command music network playing device
CN106297783A (en) * 2016-08-05 2017-01-04 易晓阳 A kind of interactive voice identification intelligent terminal
CN106200396A (en) * 2016-08-05 2016-12-07 易晓阳 A kind of appliance control method based on Motion Recognition
CN106019977A (en) * 2016-08-05 2016-10-12 易晓阳 Gesture and emotion recognition home control system
CN106228989A (en) * 2016-08-05 2016-12-14 易晓阳 A kind of interactive voice identification control method
CN106125566A (en) * 2016-08-05 2016-11-16 易晓阳 A kind of household background music control system
CN106125565A (en) * 2016-08-05 2016-11-16 易晓阳 A kind of motion and emotion recognition house control system
CN106200395A (en) * 2016-08-05 2016-12-07 易晓阳 A kind of multidimensional identification appliance control method
WO2018027505A1 (en) * 2016-08-09 2018-02-15 曹鸿鹏 Lighting control system
WO2018027504A1 (en) * 2016-08-09 2018-02-15 曹鸿鹏 Lighting control method
WO2018027507A1 (en) * 2016-08-09 2018-02-15 曹鸿鹏 Emotion recognition-based lighting control system
CN106297826A (en) * 2016-08-18 2017-01-04 竹间智能科技(上海)有限公司 Speech emotional identification system and method
CN106354036B (en) * 2016-08-30 2019-04-30 广东美的制冷设备有限公司 Household electric appliance control method and device
CN106227054A (en) * 2016-08-30 2016-12-14 广东美的制冷设备有限公司 A kind of temperature-controlled process based on user feeling, system and household electrical appliances
DE102016216407A1 (en) * 2016-08-31 2018-03-01 BSH Hausgeräte GmbH Individual communication support
CN106373569B (en) * 2016-09-06 2019-12-20 北京地平线机器人技术研发有限公司 Voice interaction device and method
CN106503646B (en) * 2016-10-19 2020-07-10 竹间智能科技(上海)有限公司 Multi-mode emotion recognition system and method
CN106444452B (en) * 2016-10-31 2019-02-01 广东美的制冷设备有限公司 Household electric appliance control method and device
CN106557461B (en) * 2016-10-31 2019-03-12 百度在线网络技术(北京)有限公司 Semantic analyzing and processing method and device based on artificial intelligence
US10783883B2 (en) * 2016-11-03 2020-09-22 Google Llc Focus session at a voice interface device
CN106776539A (en) * 2016-11-09 2017-05-31 武汉泰迪智慧科技有限公司 A kind of various dimensions short text feature extracting method and system
CN106782615B (en) * 2016-12-20 2020-06-12 科大讯飞股份有限公司 Voice data emotion detection method, device and system
CN106992012A (en) * 2017-03-24 2017-07-28 联想(北京)有限公司 Method of speech processing and electronic equipment
CN106910514A (en) * 2017-04-30 2017-06-30 上海爱优威软件开发有限公司 Method of speech processing and system
CN107293309B (en) * 2017-05-19 2021-04-30 四川新网银行股份有限公司 Method for improving public opinion monitoring efficiency based on client emotion analysis
CN107358967A (en) * 2017-06-08 2017-11-17 广东科学技术职业学院 A kind of the elderly's speech-emotion recognition method based on WFST
JP7073640B2 (en) * 2017-06-23 2022-05-24 カシオ計算機株式会社 Electronic devices, emotion information acquisition systems, programs and emotion information acquisition methods
CN109254669B (en) * 2017-07-12 2022-05-10 腾讯科技(深圳)有限公司 Expression picture input method and device, electronic equipment and system
CN109429415B (en) * 2017-08-29 2020-09-15 美智光电科技有限公司 Illumination control method, device and system
CN109429416B (en) * 2017-08-29 2020-09-15 美智光电科技有限公司 Illumination control method, device and system for multi-user scene
JP7103769B2 (en) * 2017-09-05 2022-07-20 京セラ株式会社 Electronic devices, mobile terminals, communication systems, watching methods, and programs
CN107844762A (en) * 2017-10-25 2018-03-27 大连三增上学教育科技有限公司 Information processing method and system
CN108039181B (en) * 2017-11-02 2021-02-12 北京捷通华声科技股份有限公司 Method and device for analyzing emotion information of sound signal
US10783329B2 (en) * 2017-12-07 2020-09-22 Shanghai Xiaoi Robot Technology Co., Ltd. Method, device and computer readable storage medium for presenting emotion
CN108269570B (en) * 2018-01-17 2020-09-11 深圳聚点互动科技有限公司 Method, device, equipment and storage medium for voice control of background music host
CN108446931B (en) * 2018-03-21 2020-04-10 雅莹集团股份有限公司 Garment enterprise management system and fitting feedback method based on same
US10958466B2 (en) * 2018-05-03 2021-03-23 Plantronics, Inc. Environmental control systems utilizing user monitoring
CN108962255B (en) * 2018-06-29 2020-12-08 北京百度网讯科技有限公司 Emotion recognition method, emotion recognition device, server and storage medium for voice conversation
CN108563941A (en) * 2018-07-02 2018-09-21 信利光电股份有限公司 A kind of intelligent home equipment control method, intelligent sound box and intelligent domestic system
CN110728983B (en) * 2018-07-16 2024-04-30 科大讯飞股份有限公司 Information display method, device, equipment and readable storage medium
US11037573B2 (en) 2018-09-05 2021-06-15 Hitachi, Ltd. Management and execution of equipment maintenance
CN110896422A (en) * 2018-09-07 2020-03-20 青岛海信移动通信技术股份有限公司 Intelligent response method and device based on voice
KR102252195B1 (en) * 2018-09-14 2021-05-13 엘지전자 주식회사 Emotion Recognizer, Robot including the same and Server including the same
CN110970019A (en) * 2018-09-28 2020-04-07 珠海格力电器股份有限公司 Control method and device of intelligent home system
CN111199732B (en) * 2018-11-16 2022-11-15 深圳Tcl新技术有限公司 Emotion-based voice interaction method, storage medium and terminal equipment
JP2020091302A (en) * 2018-12-03 2020-06-11 本田技研工業株式会社 Emotion estimation device, emotion estimation method, and program
CN109753663B (en) * 2019-01-16 2023-12-29 中民乡邻投资控股有限公司 Customer emotion grading method and device
CN109784414A (en) * 2019-01-24 2019-05-21 出门问问信息科技有限公司 Customer anger detection method, device and electronic equipment in a kind of phone customer service
CN109767787B (en) * 2019-01-28 2023-03-10 腾讯科技(深圳)有限公司 Emotion recognition method, device and readable storage medium
CN109712625A (en) * 2019-02-18 2019-05-03 珠海格力电器股份有限公司 Smart machine control method based on gateway, control system, intelligent gateway
US10855483B1 (en) * 2019-03-21 2020-12-01 Amazon Technologies, Inc. Device-state quality analysis
CN110188361A (en) * 2019-06-10 2019-08-30 北京智合大方科技有限公司 Speech intention recognition methods and device in conjunction with text, voice and emotional characteristics
CN110444229A (en) * 2019-06-17 2019-11-12 深圳壹账通智能科技有限公司 Communication service method, device, computer equipment and storage medium based on speech recognition
CN110262665A (en) * 2019-06-26 2019-09-20 北京百度网讯科技有限公司 Method and apparatus for output information
CN111028827B (en) * 2019-12-10 2023-01-24 深圳追一科技有限公司 Interaction processing method, device, equipment and storage medium based on emotion recognition
CN111192585A (en) * 2019-12-24 2020-05-22 珠海格力电器股份有限公司 Music playing control system, control method and intelligent household appliance
CN111179903A (en) * 2019-12-30 2020-05-19 珠海格力电器股份有限公司 Voice recognition method and device, storage medium and electric appliance
CN113154783A (en) * 2020-01-22 2021-07-23 青岛海尔电冰箱有限公司 Refrigerator interaction control method, refrigerator and computer readable storage medium
CN113641106A (en) * 2020-04-27 2021-11-12 青岛海尔多媒体有限公司 Method and device for environment regulation and control and television
CN111710349B (en) * 2020-06-23 2023-07-04 长沙理工大学 Speech emotion recognition method, system, computer equipment and storage medium
CN112151034B (en) * 2020-10-14 2022-09-16 珠海格力电器股份有限公司 Voice control method and device of equipment, electronic equipment and storage medium
CN111968679B (en) * 2020-10-22 2021-01-29 深圳追一科技有限公司 Emotion recognition method and device, electronic equipment and storage medium
CN112463108B (en) * 2020-12-14 2023-03-31 美的集团股份有限公司 Voice interaction processing method and device, electronic equipment and storage medium
CN112633172B (en) * 2020-12-23 2023-11-14 平安银行股份有限公司 Communication optimization method, device, equipment and medium
CN113268667B (en) * 2021-05-28 2022-08-16 汕头大学 Chinese comment emotion guidance-based sequence recommendation method and system
CN113852524A (en) * 2021-07-16 2021-12-28 天翼智慧家庭科技有限公司 Intelligent household equipment control system and method based on emotional characteristic fusion
WO2023013927A1 (en) 2021-08-05 2023-02-09 Samsung Electronics Co., Ltd. Method and wearable device for enhancing quality of experience index for user in iot network
US11977358B2 (en) * 2021-08-17 2024-05-07 Robin H. Stewart Systems and methods for dynamic biometric control of IoT devices
CN113870902B (en) * 2021-10-27 2023-03-14 安康汇智趣玩具科技技术有限公司 Emotion recognition system, device and method for voice interaction plush toy
CN115001890B (en) * 2022-05-31 2023-10-31 四川虹美智能科技有限公司 Intelligent household appliance control method and device based on response-free
CN115063874B (en) * 2022-08-16 2023-01-06 深圳市海清视讯科技有限公司 Control method, device and equipment of intelligent household equipment and storage medium
CN115662440B (en) * 2022-12-27 2023-05-23 广州佰锐网络科技有限公司 Voiceprint feature recognition method and system based on machine learning
CN116909159A (en) * 2023-01-17 2023-10-20 广东维锐科技股份有限公司 Intelligent home control system and method based on mood index
CN115862675B (en) * 2023-02-10 2023-05-05 之江实验室 Emotion recognition method, device, equipment and storage medium
CN117198338B (en) * 2023-11-07 2024-01-26 中瑞科技术有限公司 Interphone voiceprint recognition method and system based on artificial intelligence

Family Cites Families (29)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP1282113B1 (en) * 2001-08-02 2005-01-12 Sony International (Europe) GmbH Method for detecting emotions from speech using speaker identification
JP4204839B2 (en) * 2002-10-04 2009-01-07 株式会社エイ・ジー・アイ Idea model device, spontaneous emotion model device, idea simulation method, spontaneous emotion simulation method, and program
DE60320414T2 (en) * 2003-11-12 2009-05-20 Sony Deutschland Gmbh Apparatus and method for the automatic extraction of important events in audio signals
JP4478939B2 (en) * 2004-09-30 2010-06-09 株式会社国際電気通信基礎技術研究所 Audio processing apparatus and computer program therefor
US8214214B2 (en) * 2004-12-03 2012-07-03 Phoenix Solutions, Inc. Emotion detection device and method for use in distributed systems
US8209182B2 (en) * 2005-11-30 2012-06-26 University Of Southern California Emotion recognition system
US7983910B2 (en) * 2006-03-03 2011-07-19 International Business Machines Corporation Communicating across voice and text channels with emotion preservation
JP4085130B2 (en) * 2006-06-23 2008-05-14 松下電器産業株式会社 Emotion recognition device
US8195460B2 (en) * 2008-06-17 2012-06-05 Voicesense Ltd. Speaker characterization through speech analysis
KR20100054946A (en) * 2008-11-17 2010-05-26 엘지전자 주식회사 Washing machine and using method for the same
CN101604204B (en) * 2009-07-09 2011-01-05 北京科技大学 Distributed cognitive technology for intelligent emotional robot
US8380520B2 (en) * 2009-07-30 2013-02-19 Industrial Technology Research Institute Food processor with recognition ability of emotion-related information and emotional signals
KR101299220B1 (en) * 2010-01-08 2013-08-22 한국전자통신연구원 Method for emotional communication between emotional signal sensing device and emotional service providing device
DE102010006927B4 (en) * 2010-02-04 2021-05-27 Sennheiser Electronic Gmbh & Co. Kg Headset and handset
US8903176B2 (en) * 2011-11-14 2014-12-02 Sensory Logic, Inc. Systems and methods using observed emotional data
CN102625005A (en) 2012-03-05 2012-08-01 广东天波信息技术股份有限公司 Call center system with function of real-timely monitoring service quality and implement method of call center system
CN102855872B * 2012-09-07 2015-08-05 深圳市信利康电子有限公司 Terminal and household appliance control method and system based on Internet voice interaction
CN103093755B * 2012-09-07 2016-05-11 深圳市信利康电子有限公司 Network household appliance control method and system based on Internet voice interaction terminal
CN102855874B (en) * 2012-09-07 2015-01-21 深圳市信利康电子有限公司 Method and system for controlling household appliance on basis of voice interaction of internet
CN102831892B (en) * 2012-09-07 2014-10-22 深圳市信利康电子有限公司 Toy control method and system based on internet voice interaction
US9020822B2 (en) * 2012-10-19 2015-04-28 Sony Computer Entertainment Inc. Emotion recognition using auditory attention cues extracted from users voice
US9047871B2 (en) * 2012-12-12 2015-06-02 At&T Intellectual Property I, L.P. Real—time emotion tracking system
TWI489451B (en) * 2012-12-13 2015-06-21 Univ Nat Chiao Tung Music playing system and method based on speech emotion recognition
CN103456314B (en) * 2013-09-03 2016-02-17 广州创维平面显示科技有限公司 A kind of emotion identification method and device
EP3049961A4 (en) 2013-09-25 2017-03-22 Intel Corporation Improving natural language interactions using emotional modulation
KR101531664B1 (en) * 2013-09-27 2015-06-25 고려대학교 산학협력단 Emotion recognition ability test system using multi-sensory information, emotion recognition training system using multi- sensory information
EP2874110A1 (en) * 2013-11-15 2015-05-20 Telefonica Digital España, S.L.U. A method and a system to obtain data from voice analysis in a communication and computer programs products thereof
US10127927B2 (en) * 2014-07-28 2018-11-13 Sony Interactive Entertainment Inc. Emotional speech processing
CN104409075B (en) * 2014-11-28 2018-09-04 深圳创维-Rgb电子有限公司 Audio recognition method and system

Also Published As

Publication number Publication date
CN105334743A (en) 2016-02-17
US10013977B2 (en) 2018-07-03
WO2017084197A1 (en) 2017-05-26
CN105334743B (en) 2018-10-26
US20170270922A1 (en) 2017-09-21

Similar Documents

Publication Publication Date Title
US10013977B2 (en) Smart home control method based on emotion recognition and the system thereof
CN107818798B (en) Customer service quality evaluation method, device, equipment and storage medium
CN108962255B (en) Emotion recognition method, emotion recognition device, server and storage medium for voice conversation
CN108399923B Speaker recognition method and device in multi-person speech
US10068588B2 (en) Real-time emotion recognition from audio signals
US11854550B2 (en) Determining input for speech processing engine
WO2018108080A1 (en) Voiceprint search-based information recommendation method and device
Aloufi et al. Emotionless: Privacy-preserving speech analysis for voice assistants
US20210366459A1 (en) Hotword-Aware Speech Synthesis
Kamaruddin et al. Cultural dependency analysis for understanding speech emotion
WO2016150001A1 (en) Speech recognition method, device and computer storage medium
US10699706B1 (en) Systems and methods for device communications
US9691389B2 (en) Spoken word generation method and system for speech recognition and computer readable medium thereof
CN110634472A (en) Voice recognition method, server and computer readable storage medium
US11462219B2 (en) Voice filtering other speakers from calls and audio messages
WO2023184942A1 (en) Voice interaction method and apparatus and electric appliance
Baird et al. Emotion recognition in public speaking scenarios utilising an lstm-rnn approach with attention
Park et al. Towards understanding speaker discrimination abilities in humans and machines for text-independent short utterances of different speech styles
US20220059080A1 (en) Realistic artificial intelligence-based voice assistant system using relationship setting
KR20170086233A (en) Method for incremental training of acoustic and language model using life speech and image logs
CN109074809B (en) Information processing apparatus, information processing method, and computer-readable storage medium
US10978069B1 (en) Word selection for natural language interface
Bojanić et al. Application of dimensional emotion model in automatic emotional speech recognition
Akagi et al. Emotional speech recognition and synthesis in multiple languages toward affective speech-to-speech translation system
JP7485858B2 (en) Speech individuation and association training using real-world noise

Legal Events

Date Code Title Description
MK5 Application lapsed section 142(2)(e) - patent request and compl. specification not accepted