WO2021134250A1 - Emotion management method and device, and computer-readable storage medium - Google Patents
Emotion management method and device, and computer-readable storage medium
- Publication number
- WO2021134250A1 (application PCT/CN2019/130021, CN2019130021W)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- emotion
- user
- result
- feature data
- emotional
- Prior art date
Links
- 230000008451 emotion Effects 0.000 title claims abstract description 427
- 238000007726 management method Methods 0.000 title claims abstract description 180
- 238000012545 processing Methods 0.000 claims abstract description 37
- 238000000034 method Methods 0.000 claims abstract description 36
- 230000002996 emotional effect Effects 0.000 claims description 120
- 230000006870 function Effects 0.000 abstract description 14
- 230000001815 facial effect Effects 0.000 description 25
- 230000008569 process Effects 0.000 description 21
- 230000008859 change Effects 0.000 description 11
- 206010027940 Mood altered Diseases 0.000 description 5
- 230000001133 acceleration Effects 0.000 description 5
- 238000004891 communication Methods 0.000 description 5
- 230000008921 facial expression Effects 0.000 description 5
- 230000036651 mood Effects 0.000 description 5
- 230000007510 mood change Effects 0.000 description 5
- 230000006399 behavior Effects 0.000 description 4
- 230000014509 gene expression Effects 0.000 description 4
- 239000000284 extract Substances 0.000 description 3
- 230000033001 locomotion Effects 0.000 description 3
- 208000020016 psychiatric disease Diseases 0.000 description 3
- 230000036760 body temperature Effects 0.000 description 2
- 210000004556 brain Anatomy 0.000 description 2
- 238000010586 diagram Methods 0.000 description 2
- 230000000694 effects Effects 0.000 description 2
- 238000005516 engineering process Methods 0.000 description 2
- 230000005484 gravity Effects 0.000 description 2
- 239000000463 material Substances 0.000 description 2
- 230000003340 mental effect Effects 0.000 description 2
- 230000001105 regulatory effect Effects 0.000 description 2
- 230000000717 retained effect Effects 0.000 description 2
- 238000012216 screening Methods 0.000 description 2
- 238000012546 transfer Methods 0.000 description 2
- 206010010144 Completed suicide Diseases 0.000 description 1
- 230000036772 blood pressure Effects 0.000 description 1
- 238000001514 detection method Methods 0.000 description 1
- 238000011161 development Methods 0.000 description 1
- 230000018109 developmental process Effects 0.000 description 1
- 238000003745 diagnosis Methods 0.000 description 1
- 230000008909 emotion recognition Effects 0.000 description 1
- 230000003287 optical effect Effects 0.000 description 1
- 230000008520 organization Effects 0.000 description 1
- 238000009527 percussion Methods 0.000 description 1
- 230000011218 segmentation Effects 0.000 description 1
- 239000004984 smart glass Substances 0.000 description 1
- 230000009466 transformation Effects 0.000 description 1
Classifications
-
- G—PHYSICS
- G16—INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR SPECIFIC APPLICATION FIELDS
- G16H—HEALTHCARE INFORMATICS, i.e. INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR THE HANDLING OR PROCESSING OF MEDICAL OR HEALTHCARE DATA
- G16H20/00—ICT specially adapted for therapies or health-improving plans, e.g. for handling prescriptions, for steering therapy or for monitoring patient compliance
- G16H20/70—ICT specially adapted for therapies or health-improving plans, e.g. for handling prescriptions, for steering therapy or for monitoring patient compliance relating to mental therapies, e.g. psychological therapy or autogenous training
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data
- G06V40/10—Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
- G06V40/16—Human faces, e.g. facial parts, sketches or expressions
- G06V40/172—Classification, e.g. identification
-
- G—PHYSICS
- G10—MUSICAL INSTRUMENTS; ACOUSTICS
- G10L—SPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
- G10L25/00—Speech or voice analysis techniques not restricted to a single one of groups G10L15/00 - G10L21/00
- G10L25/48—Speech or voice analysis techniques not restricted to a single one of groups G10L15/00 - G10L21/00 specially adapted for particular use
- G10L25/51—Speech or voice analysis techniques not restricted to a single one of groups G10L15/00 - G10L21/00 specially adapted for particular use for comparison or discrimination
- G10L25/63—Speech or voice analysis techniques not restricted to a single one of groups G10L15/00 - G10L21/00 specially adapted for particular use for comparison or discrimination for estimating an emotional state
Definitions
- This application relates to the field of communication technology, and in particular to emotional management methods, devices, and computer-readable storage media.
- Existing approaches to managing user emotions generally use a camera to capture images of the user's face, and then apply image recognition to those images to identify the user's facial information and thereby infer the user's emotions.
- However, the accuracy of facial expression recognition in this approach is low, which leads to inaccurate judgment of the user's emotions and a poor user experience.
- In addition, this approach to managing user emotions cannot effectively manage and regulate the user's emotions.
- The main purpose of this application is to propose an emotion management method, device, and computer-readable storage medium, aiming to solve the technical problem that changes in user emotion are difficult to detect and user emotions are difficult to manage and adjust.
- To achieve the above objective, this application provides an emotion management method that includes the following steps:
- acquiring voice feature data and physical feature data of a user; processing the voice feature data and the physical feature data to determine a user emotion result; and performing emotion management on the user according to the user emotion result and preset rules.
- In addition, the present application also provides a device, the device including: a memory, a processor, and an emotion management program stored in the memory and executable on the processor; when the emotion management program is executed by the processor, the steps of the emotion management method described above are implemented.
- In addition, the present application also provides a computer-readable storage medium having an emotion management program stored thereon; when the emotion management program is executed by a processor, the steps of the emotion management method described above are implemented.
- This application provides an emotion management method, device, and computer-readable storage medium that acquire the user's voice feature data and physical feature data, process the voice feature data and the physical feature data to determine the user emotion result, and perform emotion management on the user according to the user emotion result and preset rules.
- In this way, the present application can detect changes in the user's mood and can manage and adjust the user's mood.
- FIG. 1 is a schematic diagram of a terminal structure of a hardware operating environment involved in a solution of an embodiment of the present application;
- FIG. 2 is a schematic flowchart of a first embodiment of the emotion management method of this application;
- FIG. 3 is a schematic flowchart of a second embodiment of the emotion management method of this application;
- FIG. 4 is a schematic flowchart of a third embodiment of the emotion management method of this application.
- FIG. 5 is a schematic flowchart of a fourth embodiment of the emotion management method of this application.
- The main solution of the embodiments of this application is: acquire the user's voice feature data and physical feature data; process the voice feature data and the physical feature data to determine the user emotion result; and perform emotion management on the user according to the user emotion result and preset rules.
- Existing methods of managing user emotions generally use a camera to capture images of the user's face, and then apply image recognition to those images to identify the user's facial information and thereby infer the user's emotions.
- However, the accuracy of facial expression recognition in this method is low, which leads to inaccurate judgment of the user's emotions and a poor user experience.
- Such a method also cannot effectively manage and regulate the user's emotions.
- This application aims to solve the technical problem that changes in user emotion are difficult to detect and user emotions are difficult to manage and adjust.
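- For illustration only, the acquire, process, and manage flow described above can be sketched as a small pipeline. All class, function, and threshold names below are placeholders invented for this sketch and are not taken from the patent.

```python
from dataclasses import dataclass


@dataclass
class EmotionResult:
    label: str         # e.g. "calm", "angry", "sad"
    confidence: float  # 0.0 - 1.0


class EmotionManager:
    """Minimal sketch of the acquire -> process -> manage loop."""

    def acquire(self):
        # Placeholder: read the microphone and body sensors of the wearable.
        voice_features = {"volume": 0.8, "speech_rate": 4.2}
        body_features = {"heart_rate": 96, "skin_conductance": 0.7}
        return voice_features, body_features

    def process(self, voice_features, body_features) -> EmotionResult:
        # Placeholder rule; any classifier could be plugged in here.
        if voice_features["volume"] > 0.7 and body_features["heart_rate"] > 90:
            return EmotionResult("angry", 0.8)
        return EmotionResult("calm", 0.6)

    def manage(self, result: EmotionResult) -> None:
        # Apply a preset rule, e.g. play soothing music when anger is detected.
        if result.label == "angry":
            print("Playing soothing music and lowering the volume")


if __name__ == "__main__":
    manager = EmotionManager()
    voice, body = manager.acquire()
    manager.manage(manager.process(voice, body))
```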
- FIG. 1 is a schematic diagram of a terminal structure of a hardware operating environment involved in a solution of an embodiment of the present application.
- the terminal in the embodiment of the present application may be a PC, or a mobile terminal device with a display function, such as a smart phone or a tablet computer.
- the terminal may include: a processor 1001, such as a CPU, a network interface 1004, a user interface 1003, a memory 1005, and a communication bus 1002.
- the communication bus 1002 is used to implement connection and communication between these components.
- The user interface 1003 may include a display screen (Display) and an input unit such as a keyboard (Keyboard); optionally, the user interface 1003 may also include a standard wired interface and a wireless interface.
- the network interface 1004 may optionally include a standard wired interface and a wireless interface (such as a WI-FI interface).
- The memory 1005 may be a high-speed RAM memory, or a non-volatile memory such as a magnetic disk memory.
- the memory 1005 may also be a storage device independent of the aforementioned processor 1001.
- The terminal may also include a camera, RF (Radio Frequency) circuits, sensors, audio circuits, a Wi-Fi module, and the like.
- The sensors include, for example, light sensors, motion sensors, and other sensors.
- the light sensor may include an ambient light sensor and a proximity sensor.
- the ambient light sensor can adjust the brightness of the display screen according to the brightness of the ambient light
- The proximity sensor can turn off the display screen and/or the backlight when the mobile terminal is moved to the ear.
- As a kind of motion sensor, the gravity acceleration sensor can detect the magnitude of acceleration in all directions (usually three axes), and can detect the magnitude and direction of gravity when stationary.
- It can be used in applications that recognize the posture of the mobile terminal (such as switching between horizontal and vertical screens, related games, and magnetometer posture calibration) and in vibration-recognition functions (such as a pedometer or tap detection). Of course, the mobile terminal may also be equipped with other sensors such as a gyroscope, barometer, hygrometer, thermometer, or infrared sensor, which will not be described here.
- The terminal structure shown in FIG. 1 does not constitute a limitation on the terminal; the terminal may include more or fewer components than shown in the figure, combine certain components, or use a different arrangement of components.
- the memory 1005 as a computer storage medium may include an operating system, a network communication module, a user interface module, and an emotion management program.
- the network interface 1004 is mainly used to connect to the back-end server and communicate with the back-end server;
- the user interface 1003 is mainly used to connect to the client (user side) and communicate with the client;
- the processor 1001 can be used to call the emotion management program stored in the memory 1005 and perform the following operations:
- Acquire voice feature data and physical feature data of the user; process the voice feature data and the physical feature data to determine the user emotion result; and perform emotion management on the user according to the user emotion result and preset rules.
- processor 1001 may call the emotion management program stored in the memory 1005, and also perform the following operations:
- Emotion management is performed on the user according to the user emotion result and preset music rules.
- processor 1001 may call the emotion management program stored in the memory 1005, and also perform the following operations:
- processor 1001 may call the emotion management program stored in the memory 1005, and also perform the following operations:
- Emotion management is performed on the user according to the emotional intervention information.
- processor 1001 may call the emotion management program stored in the memory 1005, and also perform the following operations:
- the user's voice emotion result is verified according to the user's physical emotion result, and the user's emotion result is determined.
- processor 1001 may call the emotion management program stored in the memory 1005, and also perform the following operations:
- the step of performing emotion management on the user according to the user emotion result and preset rules includes:
- Emotion management of the user is performed according to the user emotional result, the interlocutor's emotional result and preset rules.
- processor 1001 may call the emotion management program stored in the memory 1005, and also perform the following operations:
- processor 1001 may call the emotion management program stored in the memory 1005, and also perform the following operations:
- Denoising processing is performed on the voice feature data and the body feature data.
- FIG. 2 is a schematic flowchart of the first embodiment of the emotion management method of this application.
- the emotion management method is applied to an emotion management device, and the emotion management method includes:
- Step S10 Acquire voice feature data and body feature data of the user
- In this embodiment, the emotion management device actively acquires the user's voice feature data and physical feature data at preset time intervals.
- The emotion management device may be a wearable device used by the user, such as smart glasses, a smart bracelet, or wireless earphones. The emotion management device may be provided with a microphone for acquiring the user's voice feature data; it may also be provided with a human-body sensor for acquiring the user's brain waves, skin conductivity, and heart rate; it may also be provided with an acceleration sensor for determining whether the user's body is in a weightless state; and it may also be provided with a temperature sensor for acquiring the user's body temperature. The emotion management device may also be a mobile terminal, or a device used by an emotion recognition agency to recognize user emotions. The voice feature data is the voice data of the user's speech collected through the microphone of the emotion management device or of another collection device.
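- As an illustration of the kind of physical feature data such a wearable might report, the sketch below defines a hypothetical record and a simple free-fall (weightlessness) check based on the accelerometer magnitude. The field names and the 0.3 g threshold are assumptions, not values taken from the patent.

```python
import math
from dataclasses import dataclass


@dataclass
class PhysicalFeatures:
    heart_rate_bpm: float
    skin_conductance: float
    body_temp_c: float
    accel_xyz: tuple  # acceleration in g along three axes


def is_weightless(sample: PhysicalFeatures, threshold_g: float = 0.3) -> bool:
    """A body in free fall reports near-zero total acceleration."""
    magnitude = math.sqrt(sum(a * a for a in sample.accel_xyz))
    return magnitude < threshold_g


sample = PhysicalFeatures(heart_rate_bpm=88, skin_conductance=0.6,
                          body_temp_c=36.9, accel_xyz=(0.05, 0.1, 0.12))
print(is_weightless(sample))  # True: magnitude is well below 0.3 g
```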
- After step S10 of acquiring the user's voice feature data and body feature data, the method may further include:
- Step a Perform denoising processing on the voice feature data and body feature data.
- the emotion management device calculates optimized parameters of the voice feature data and the physical feature data, and the optimized parameters include directivity parameters and gain parameters.
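- The patent does not specify a denoising algorithm, only that directivity and gain parameters are computed. As one plausible stand-in, the sketch below applies a simple gain normalization followed by a noise gate; it is illustrative only and not the claimed method.

```python
import numpy as np


def denoise(signal: np.ndarray, gate_ratio: float = 0.1) -> np.ndarray:
    """Normalize gain, then zero out samples below an estimated noise floor."""
    gain = 1.0 / (np.max(np.abs(signal)) + 1e-9)   # gain parameter (assumed form)
    normalized = signal * gain
    noise_floor = gate_ratio * np.max(np.abs(normalized))
    return np.where(np.abs(normalized) < noise_floor, 0.0, normalized)


noisy = np.sin(np.linspace(0, 6.28, 100)) + 0.05 * np.random.randn(100)
clean = denoise(noisy)
```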
- Step S20 processing the voice feature data and the physical feature data to determine the emotional result of the user
- In this embodiment, the emotion management device processes the user's voice feature data and physical feature data to obtain the user emotion result.
- the user emotion result is obtained after processing and calculating the user's voice feature data and the user's physical feature data.
- Step S30 Perform emotional management on the user according to the user emotional result and preset rules.
- In this embodiment, after the emotion management device obtains the user emotion result, it performs emotion management on the user according to the user emotion result and preset rules.
- Step S30 performing emotional management on the user according to the user emotional result and preset rules may include:
- Step b Perform emotional management on the user according to the user emotional result and preset music rules.
- In this embodiment, after the emotion management device obtains the user emotion result, it performs emotion management on the user according to the user emotion result and preset music rules.
- The preset music rule may be a rule that adjusts the user's emotions through music according to the obtained user emotion result. For example, when the user emotion result is anger, a soothing piece of music is played to the user, or the volume of the music being played is adjusted, so as to ease the user's emotions.
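- Such a preset music rule could be represented as a simple lookup from an emotion label to a playback action; the mapping and track names below are hypothetical examples, not rules defined by the patent.

```python
# Hypothetical preset music rules: emotion label -> playback action.
MUSIC_RULES = {
    "angry": {"track": "soothing_piano.mp3", "volume": 0.3},
    "sad": {"track": "uplifting_strings.mp3", "volume": 0.5},
    "calm": None,  # no intervention needed
}


def apply_music_rule(emotion_label: str) -> None:
    action = MUSIC_RULES.get(emotion_label)
    if action is None:
        return  # leave the user undisturbed
    # Placeholder playback call; a real device would drive its audio output here.
    print(f"Playing {action['track']} at volume {action['volume']}")


apply_music_rule("angry")
```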
- Step S30 performing emotional management on the user according to the user emotional result and preset rules may include:
- Step c Perform emotional management on the user according to the user emotional result and preset coaching rules.
- In this embodiment, after the emotion management device obtains the user emotion result, it performs emotion management on the user according to the user emotion result and preset coaching rules.
- The preset coaching rule may be that, based on the obtained user emotion result, the emotion management device recommends a coach role corresponding to the user emotion result and plays a passage of speech or a spoken story to the user, so as to guide and ease the user's emotions.
- Step S30 performing emotional management on the user according to the user emotional result and preset rules may include:
- Step d1 when the user emotion result is a preset emotion result, sending the user emotion result to an emotion management agency;
- Step d2 receiving emotional intervention information returned by the emotional management agency according to the user emotional result
- Step d3 Perform emotional management on the user according to the emotional intervention information.
- In this embodiment, when the user emotion result is a preset emotion result, the emotion management device sends the user emotion result to an emotion management agency so that the agency can evaluate it and provide emotion management advice. After the emotion management agency receives the user emotion result sent by the emotion management device, the agency processes and evaluates the result and produces emotion intervention information for the emotion management device to apply.
- The emotion management agency then sends the emotion intervention information to the emotion management device, and the emotion management device receives the emotion intervention information returned by the agency in response to the user emotion result. After receiving the emotion intervention information, the emotion management device performs emotion management on the user based on that information.
- The emotion intervention information may be information that stimulates the user's body to regulate the user's emotions when the user emotion result is serious, or it may be another form of intervention issued when the emotion result is serious.
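- The escalation flow of steps d1 to d3 can be pictured as follows. The endpoint URL, the set of serious emotion labels, and the message fields are invented for this sketch; the network call uses the standard `requests` library.

```python
import requests

AGENCY_URL = "https://example.com/emotion-agency/report"  # hypothetical endpoint
SERIOUS_EMOTIONS = {"severe_anger", "despair"}            # assumed preset results


def escalate_if_serious(user_id: str, emotion_label: str) -> None:
    if emotion_label not in SERIOUS_EMOTIONS:
        return
    # Step d1: send the user emotion result to the emotion management agency.
    reply = requests.post(
        AGENCY_URL,
        json={"user": user_id, "emotion": emotion_label},
        timeout=5,
    )
    # Step d2: receive the emotion intervention information returned by the agency.
    intervention = reply.json().get("intervention", "none")
    # Step d3: perform emotion management on the user according to that information.
    print(f"Applying intervention: {intervention}")
```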
- Through the above solution, this embodiment acquires the user's voice feature data and physical feature data; processes the voice feature data and the physical feature data to determine the user emotion result; and performs emotion management on the user according to the user emotion result and preset rules.
- As a result, changes in the user's mood can be discovered in time, and the user's mood can be managed and adjusted.
- FIG. 3 is a schematic flowchart of a second embodiment of the emotion management method of this application.
- the step S20 to process the voice feature data and the physical feature data to determine the emotional result of the user may include:
- Step S21 processing the voice feature data to obtain a voice emotion result of the user
- the emotion management device processes the voice feature data of the user to obtain the voice emotion result of the user.
- the user's voice emotion result is obtained after processing and calculating the user's voice feature data.
- Step e1 Recognizing keyword information and intonation information included in the voice feature data
- the emotion management device extracts keyword information and intonation information from the voice feature data, where the intonation information includes at least one of the volume, speech rate, pitch, and respective change trends of the voice data.
- A word segmentation database can be used to remove meaningless words from the semantic content and, at the same time, to extract keyword information that can indicate the user's emotion. For the recognized intonation, intonation information that meets preset conditions is selected; for example, volume that exceeds a maximum preset threshold or falls below a minimum preset threshold is selected as target intonation, and speech rate that exceeds a certain preset threshold is also used as intonation information.
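- This screening step can be sketched as removing stop words and flagging intonation frames whose values cross preset thresholds; the stop-word list and threshold values below are assumptions made for the sketch.

```python
STOP_WORDS = {"the", "a", "um", "uh", "like"}      # stand-in for the segmentation database
VOLUME_MAX, VOLUME_MIN, RATE_MAX = 0.9, 0.1, 5.0   # assumed preset thresholds


def extract_keywords(words):
    """Drop meaningless words, keep candidates that may indicate emotion."""
    return [w for w in words if w.lower() not in STOP_WORDS]


def filter_intonation(frames):
    """Keep frames whose volume or speech rate crosses a preset threshold."""
    return [f for f in frames
            if f["volume"] > VOLUME_MAX or f["volume"] < VOLUME_MIN
            or f["speech_rate"] > RATE_MAX]


words = ["um", "I", "hate", "this"]
frames = [{"volume": 0.95, "speech_rate": 4.0}, {"volume": 0.5, "speech_rate": 3.0}]
print(extract_keywords(words), filter_intonation(frames))
```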
- Step e2 Generate a voice emotion model according to the keyword information and the intonation information, and match the voice emotion model with the voice standard emotion model in the emotion model library to generate a user voice emotion result.
- the emotion management device generates a voice emotion model according to the keyword information and intonation information, and matches the voice emotion model with the voice standard emotion model in the emotion model library to generate a user voice emotion result.
- Step e2 generating a voice emotion model based on the keyword information and the intonation information, and matching the voice emotion model with the voice standard emotion model in the emotion model library to generate a user voice emotion result, which may include:
- Step e21 Determine voice feature points according to the keyword information and the intonation information
- the recognized keyword information and intonation information are further analyzed and screened, and keywords and intonations that can clearly indicate the user's emotions are determined as voice feature points.
- The voice feature points include keyword feature points and intonation feature points.
- the keyword information can be screened through the emotionally sensitive word database established in advance, and the filtered keyword information can be determined as the keyword feature points.
- The emotionally sensitive word database includes vocabulary that the user frequently says under various emotions. Since the intonation information is usually represented as a waveform, points with a pronounced change trend can be used as intonation feature points, such as a point where the speech rate suddenly increases.
- Step e22 Generate a voice emotion model according to the voice feature points, and calibrate the voice feature points in the voice emotion model;
- a voice emotion model is generated according to the determined voice feature points, so as to analyze user emotions according to the voice emotion model.
- The voice feature points are calibrated on the voice emotion model; the calibrated points may be the more prominent ones among the voice feature points determined above, thereby further screening the user's emotional characteristics and making them more distinct.
- Step e23 Match the voice emotion model with the voice standard emotion model in the emotion model library to adjust the calibrated voice feature points on the voice emotion model, and record the voice feature change data of the voice feature points ;
- In this embodiment, the emotion management device matches the voice emotion model with the voice standard emotion model in the emotion model library to fine-tune the calibrated voice feature points on the voice emotion model, and records the voice feature change data of the voice feature points.
- the voice standard emotion model can be established based on the user's daily voice data and the expression corresponding to the daily voice data.
- Step e24 Match the voice feature change data with the intonation feature data and the psychological behavior feature data in the emotion model library, and generate a user voice emotion result according to the matching result.
- In this embodiment, the user's emotion or emotion change data is output as the user voice emotion result.
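- The patent describes this matching against the emotion model library without giving a metric. One simple stand-in is a nearest-neighbour match over a small feature vector, as sketched below; the library contents and the features themselves are invented for illustration.

```python
import math

# Hypothetical emotion model library: label -> reference feature vector
# (volume change, speech-rate change, keyword sentiment score).
EMOTION_LIBRARY = {
    "angry": (0.8, 0.9, -0.7),
    "sad": (-0.4, -0.5, -0.6),
    "happy": (0.3, 0.2, 0.8),
}


def match_emotion(change_vector):
    """Return the library entry closest to the observed feature changes."""
    return min(EMOTION_LIBRARY,
               key=lambda label: math.dist(change_vector, EMOTION_LIBRARY[label]))


print(match_emotion((0.7, 0.8, -0.5)))  # -> "angry"
```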
- Step S22 processing the physical feature data to obtain a result of the user's physical emotions
- the emotion management device processes the physical feature data of the user to obtain the result of the user's physical emotion.
- the result of the user's physical emotion is obtained after processing and calculating the user's physical feature data.
- Step S23 verifying the user's voice emotion result according to the user's physical emotion result, and determining the user's emotion result;
- In this embodiment, the emotion management device compares the user's physical emotion result with the user's voice emotion result at the same time points. If the user's physical emotion result at a time point differs from the user's voice emotion result, the user's voice emotion result at that time point is deleted; if they are the same, the user's voice emotion result at that time point is retained. After the emotion management device has compared, one by one, all of the user's voice emotion results and physical emotion results within the preset time interval, the voice emotion results that failed the comparison have been deleted, and the retained voice emotion results are used to determine the user emotion result.
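- A minimal sketch of this time-aligned verification, assuming both result streams are dictionaries keyed by timestamp:

```python
def verify_voice_with_physical(voice_results: dict, physical_results: dict) -> dict:
    """Keep voice emotion results that match the physical emotion result
    at the same time point; drop the ones that disagree."""
    verified = {}
    for t, voice_emotion in voice_results.items():
        if physical_results.get(t) == voice_emotion:
            verified[t] = voice_emotion
    return verified


voice = {0: "calm", 1: "angry", 2: "angry"}
physical = {0: "calm", 1: "calm", 2: "angry"}
print(verify_voice_with_physical(voice, physical))  # {0: 'calm', 2: 'angry'}
```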
- Through the above solution, this embodiment acquires the user's voice feature data and physical feature data; processes the voice feature data to obtain the user's voice emotion result; processes the physical feature data to obtain the user's physical emotion result;
- verifies the user's voice emotion result against the user's physical emotion result to determine the user emotion result; and performs emotion management on the user according to the user emotion result and preset rules.
- FIG. 4 is a schematic flowchart of a third embodiment of the emotion management method of this application.
- Based on the embodiment shown in FIG. 2 above, the method may further include:
- Step S40 Obtain emotion-related data of the interlocutor
- In this embodiment, the emotion management device can acquire emotion-related data of the person talking with the user (the interlocutor). The emotion-related data may be the interlocutor's voice data, the interlocutor's face data, or the interlocutor's physical feature data. The emotion management device may be equipped with a camera for acquiring the interlocutor's face data. The interlocutor's voice data is the speech of the interlocutor collected through the microphone of the emotion management device or of another collection device, and the interlocutor's face data is collected through the camera of the emotion management device or of another collection device.
- Step S50 processing the emotion-related data to obtain the emotion result of the interlocutor
- the emotion management device processes the emotion-related data to obtain the emotion result of the interlocutor.
- the emotion result of the interlocutor is obtained after processing and calculating the emotion-related data of the interlocutor.
- Step S40 obtains the emotion-related data of the interlocutor, which may include:
- Step f1 Obtain the interlocutor's voice data and the interlocutor's face data;
- In this embodiment, after the emotion management device obtains the voice feature data and physical feature data of the user, it can also obtain the voice data and face data of the person talking with the user.
- Step S50 of processing the emotion-related data to determine the emotion result of the interlocutor may include:
- Step g1 processing the voice data of the interlocutor to obtain the result of the interlocutor's voice emotion
- the emotion management device processes the interlocutor's voice data of the interlocutor to obtain the result of the interlocutor's voice emotion.
- the voice emotion result of the interlocutor is obtained after processing and calculating the voice data of the interlocutor.
- Step g2 processing the face data of the conversation person to obtain the face emotion result of the conversation person;
- the emotion management device processes the face data of the conversation person to obtain the face emotion result of the conversation person.
- the face emotion result of the conversation person is obtained after processing and calculating the face data of the conversation person.
- Step g2 processes the face data of the conversation person to obtain the emotion result of the conversation person's face, which may include:
- Step g21 Recognizing the face image information of the conversation person included in the face data of the conversation person;
- In this embodiment, the emotion management device extracts the interlocutor's face image information from the interlocutor's face data. The face image information may be image information representing the interlocutor's expression, for example, an image showing the interlocutor's happiness, sadness, or anger. The emotion management device can remove, from the face image information, images that contain no facial expression and face images that are unclear because the interlocutor turned or moved rapidly.
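- One way to discard unusable frames of this kind is to keep only images in which a face is detected and the image is not motion-blurred. The sketch below uses OpenCV's Haar cascade detector and a Laplacian-variance sharpness check; the sharpness threshold is an arbitrary assumption.

```python
import cv2

_detector = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")


def is_usable_face_frame(image_bgr, sharpness_threshold: float = 100.0) -> bool:
    """Reject frames with no detectable face or with too much motion blur."""
    gray = cv2.cvtColor(image_bgr, cv2.COLOR_BGR2GRAY)
    faces = _detector.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    if len(faces) == 0:
        return False
    sharpness = cv2.Laplacian(gray, cv2.CV_64F).var()  # low variance = blurry
    return sharpness >= sharpness_threshold
```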
- Step g22 Generate a face emotion model of the conversation person according to the face image information of the conversation person, and match the face emotion model of the conversation person with the standard emotion model of the face in the emotion model library to generate a face emotion result of the conversation person.
- the emotion management device generates a dialogue face emotion model based on the face image information of the dialogue person, and matches the dialogue face emotion model with the standard emotion model of the face in the emotion model library to generate the dialogue face emotion result.
- Step g22 of generating a face emotion model of the dialogue person according to the face image information of the dialogue person, and matching it with the standard facial emotion model in the emotion model library to generate a face emotion result of the dialogue person, may include:
- Step g221 Determine the facial emotion feature points of the conversation person according to the face image information of the conversation person;
- In this embodiment, the emotion management device further analyzes and filters the recognized face image information of the interlocutor to determine the face images that clearly show the interlocutor's expression, that is, to determine the interlocutor's facial emotion feature points.
- Step g222 generating a facial emotion model of the dialogue person according to the facial emotion feature points of the dialogue person, and marking the facial emotion feature points of the dialogue person on the facial emotion model of the dialogue person;
- the emotion management device generates a facial emotion model of the dialogue person according to the determined facial emotion feature points of the dialogue person, so as to analyze the emotion of the dialogue person according to the facial emotion model of the dialogue person.
- The interlocutor's facial emotion feature points are calibrated on the interlocutor's facial emotion model; the calibrated points may be the more prominent ones among the facial emotion feature points determined above, thereby further screening the interlocutor's emotional characteristics and making them more distinct.
- Step g223 Match the facial emotion model of the interlocutor with the standard facial emotion model in the emotion model library to adjust the calibrated facial emotion feature points on the interlocutor's facial emotion model, and record the facial feature change data of the interlocutor's facial emotion feature points;
- In this embodiment, the emotion management device matches the interlocutor's facial emotion model with the standard facial emotion model in the emotion model library to fine-tune the calibrated facial emotion feature points, and records the facial feature change data of the interlocutor's facial emotion feature points.
- Step g224 matches the face feature change data of the conversation person with the facial expression feature data in the emotion model library and the psychological behavior feature data, and generates a face emotion result of the conversation person according to the matching result.
- In this embodiment, the emotion management device outputs the interlocutor's facial emotion result according to the result of matching the facial feature change data of the interlocutor's facial emotion feature points against the facial expression feature data and psychological behavior feature data in the emotion model library.
- Step g3 verifying the voice emotion result of the interlocutor according to the facial emotion result of the interlocutor, and determining the emotion result of the interlocutor.
- In this embodiment, the emotion management device compares the interlocutor's facial emotion result with the interlocutor's voice emotion result at the same time points. If the interlocutor's facial emotion result at a time point differs from the interlocutor's voice emotion result, the interlocutor's voice emotion result at that time point is deleted; if they are the same, it is retained. After all of the interlocutor's voice emotion results within the preset time interval have been compared one by one, the voice emotion results that failed the comparison have been deleted, and the retained results are used to determine the interlocutor's emotion result.
- step S30 performing emotional management on the user according to the user emotional result and preset rules may include:
- Step S31 Perform emotional management on the user according to the emotional result of the user, the emotional result of the interlocutor and preset rules.
- In this embodiment, after the emotion management device obtains the interlocutor's emotion result and the user emotion result, it performs emotion management on the user according to the user emotion result, the interlocutor's emotion result, and preset rules.
- Through the above solution, this embodiment acquires the user's voice feature data and physical feature data; acquires the interlocutor's emotion-related data; processes the emotion-related data to obtain the interlocutor's emotion result; processes the voice feature data and the physical feature data to determine the user emotion result; and performs emotion management on the user according to the user emotion result, the interlocutor's emotion result, and preset rules.
- In this way, changes in the user's mood can be discovered in time, and the user's mood can be managed and adjusted.
- FIG. 5 is a schematic flowchart of a fourth embodiment of the emotion management method of this application. Based on the embodiment shown in FIG. 2 above, after obtaining the user's voice feature data and body feature data in step S20, it may further include:
- Step S24 sending the voice feature data and the body feature data to a server
- Step S25 Receive a user emotional result returned by the server according to the voice feature data and the physical feature data.
- In this embodiment, the emotion management device can send the user's voice feature data and physical feature data to a cloud server for processing, so that the cloud server receives the voice feature data and the physical feature data.
- the cloud server processes the voice feature data and physical feature data.
- the cloud server obtains the user's emotional results based on the voice feature data and physical feature data, and the cloud server sends the obtained user emotional results to the emotional management device.
- the emotion management device receives the user emotion result returned by the cloud server according to the voice feature data and the physical feature data.
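- The offloaded processing in steps S24 and S25 amounts to uploading the two feature sets and downloading the computed emotion result; a minimal sketch, with a hypothetical endpoint URL and response format:

```python
import requests

SERVER_URL = "https://example.com/emotion/analyze"  # hypothetical cloud endpoint


def get_emotion_from_server(voice_features: dict, body_features: dict) -> str:
    # Step S24: send the voice feature data and the body feature data to the server.
    response = requests.post(
        SERVER_URL,
        json={"voice": voice_features, "body": body_features},
        timeout=10,
    )
    response.raise_for_status()
    # Step S25: receive the user emotion result returned by the server.
    return response.json()["emotion"]
```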
- Through the above solution, this embodiment acquires the user's voice feature data and physical feature data; sends the voice feature data and the physical feature data to the server; receives the user emotion result returned by the server according to the voice feature data and the physical feature data; and performs emotion management on the user according to the user emotion result and preset rules.
- In this way, changes in the user's mood can be discovered in time, and the user's mood can be managed and adjusted.
- the application also provides an emotion management device.
- The emotion management device of the present application includes: a memory, a processor, and an emotion management program stored in the memory and executable on the processor; when the emotion management program is executed by the processor, the steps of the emotion management method described above are implemented.
- the application also provides a computer-readable storage medium.
- An emotion management program is stored on the computer-readable storage medium of the present application, and when the emotion management program is executed by a processor, the steps of the above-mentioned emotion management method are realized.
- The technical solution of this application, in essence, or the part that contributes to the prior art, can be embodied in the form of a software product. The computer software product is stored in a storage medium as described above (such as a ROM/RAM, a magnetic disk, or an optical disk) and includes several instructions to make a terminal device (which may be a mobile phone, a computer, a server, an air conditioner, or a network device, etc.) execute the method described in each embodiment of the present application.
Landscapes
- Engineering & Computer Science (AREA)
- Health & Medical Sciences (AREA)
- General Health & Medical Sciences (AREA)
- Child & Adolescent Psychology (AREA)
- Psychiatry (AREA)
- Human Computer Interaction (AREA)
- Physics & Mathematics (AREA)
- Multimedia (AREA)
- Hospice & Palliative Care (AREA)
- General Physics & Mathematics (AREA)
- Medical Informatics (AREA)
- Theoretical Computer Science (AREA)
- Oral & Maxillofacial Surgery (AREA)
- Psychology (AREA)
- Social Psychology (AREA)
- Epidemiology (AREA)
- Developmental Disabilities (AREA)
- Primary Health Care (AREA)
- Public Health (AREA)
- Computational Linguistics (AREA)
- Signal Processing (AREA)
- Audiology, Speech & Language Pathology (AREA)
- Acoustics & Sound (AREA)
- User Interface Of Digital Computer (AREA)
Abstract
The present invention relates to an emotion management method, an emotion management device, and a computer-readable storage medium. The method comprises: acquiring voice feature data and physical feature data of a user (S10); processing the voice feature data and the physical feature data and determining a user emotion result (S20); and performing emotion management on the user according to the user emotion result and a preset rule (S30). The method can realize the function of discovering changes in the user's emotions, and can realize the functions of managing and adjusting the user's emotions.
Priority Applications (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201980003396.6A CN111149172B (zh) | 2019-12-30 | 2019-12-30 | Emotion management method, device, and computer-readable storage medium |
PCT/CN2019/130021 WO2021134250A1 (fr) | 2019-12-30 | 2019-12-30 | Emotion management method and device, and computer-readable storage medium |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
PCT/CN2019/130021 WO2021134250A1 (fr) | 2019-12-30 | 2019-12-30 | Emotion management method and device, and computer-readable storage medium |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2021134250A1 true WO2021134250A1 (fr) | 2021-07-08 |
Family
ID=70525128
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/CN2019/130021 WO2021134250A1 (fr) | 2019-12-30 | 2019-12-30 | Procédé et dispositif de gestion d'émotion et support de stockage lisible par ordinateur |
Country Status (2)
Country | Link |
---|---|
CN (1) | CN111149172B (fr) |
WO (1) | WO2021134250A1 (fr) |
Families Citing this family (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN111724880A (zh) * | 2020-06-09 | 2020-09-29 | 百度在线网络技术(北京)有限公司 | User emotion adjustment method, apparatus, device, and readable storage medium |
CN112398952A (zh) * | 2020-12-09 | 2021-02-23 | 英华达(上海)科技有限公司 | Electronic resource push method, system, device, and storage medium |
CN112464018A (zh) * | 2020-12-10 | 2021-03-09 | 山西慧虎健康科技有限公司 | Intelligent emotion recognition and adjustment method and system |
Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN104735234A (zh) * | 2013-12-21 | 2015-06-24 | 陕西荣基实业有限公司 | Telephone capable of measuring mood |
CN206946938U (zh) * | 2017-01-13 | 2018-01-30 | 深圳大森智能科技有限公司 | Active service system of an intelligent robot |
CN108305640A (zh) * | 2017-01-13 | 2018-07-20 | 深圳大森智能科技有限公司 | Active service method and device of an intelligent robot |
US20180300468A1 (en) * | 2016-08-15 | 2018-10-18 | Goertek Inc. | User registration method and device for smart robots |
CN109803572A (zh) * | 2016-07-27 | 2019-05-24 | 生物说股份有限公司 | System and method for measuring and managing physiological emotional state |
Family Cites Families (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US6943794B2 (en) * | 2000-06-13 | 2005-09-13 | Minolta Co., Ltd. | Communication system and communication method using animation and server as well as terminal device used therefor |
CN103829958B (zh) * | 2014-02-19 | 2016-11-09 | 广东小天才科技有限公司 | Method and device for monitoring a person's emotions |
CN206470693U (zh) * | 2017-01-24 | 2017-09-05 | 广州幻境科技有限公司 | Emotion recognition system based on a wearable device |
CN107343095B (zh) * | 2017-06-30 | 2020-10-09 | Oppo广东移动通信有限公司 | Call volume control method and device, storage medium, and terminal |
CN108742516B (zh) * | 2018-03-26 | 2021-03-26 | 浙江广厦建设职业技术学院 | Emotion measurement and adjustment system and method for a smart home |
-
2019
- 2019-12-30 CN CN201980003396.6A patent/CN111149172B/zh active Active
- 2019-12-30 WO PCT/CN2019/130021 patent/WO2021134250A1/fr active Application Filing
Patent Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN104735234A (zh) * | 2013-12-21 | 2015-06-24 | 陕西荣基实业有限公司 | Telephone capable of measuring mood |
CN109803572A (zh) * | 2016-07-27 | 2019-05-24 | 生物说股份有限公司 | System and method for measuring and managing physiological emotional state |
US20180300468A1 (en) * | 2016-08-15 | 2018-10-18 | Goertek Inc. | User registration method and device for smart robots |
CN206946938U (zh) * | 2017-01-13 | 2018-01-30 | 深圳大森智能科技有限公司 | Active service system of an intelligent robot |
CN108305640A (zh) * | 2017-01-13 | 2018-07-20 | 深圳大森智能科技有限公司 | Active service method and device of an intelligent robot |
Also Published As
Publication number | Publication date |
---|---|
CN111149172B (zh) | 2021-05-11 |
CN111149172A (zh) | 2020-05-12 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN110785735B (zh) | Device and method for voice command context | |
WO2020135194A1 (fr) | Voice interaction method based on emotion engine technology, smart terminal, and storage medium | |
CN111194465B (zh) | Audio activity tracking and summarization | |
US20180018300A1 (en) | System and method for visually presenting auditory information | |
US11330321B2 (en) | Method and device for adjusting video parameter based on voiceprint recognition and readable storage medium | |
CN110399837A (zh) | User emotion recognition method and device, and computer-readable storage medium | |
WO2021134250A1 (fr) | Emotion management method and device, and computer-readable storage medium | |
US10276151B2 (en) | Electronic apparatus and method for controlling the electronic apparatus | |
CN107097234A (zh) | Robot control system | |
US10757513B1 (en) | Adjustment method of hearing auxiliary device | |
CN112016367A (zh) | Emotion recognition system and method, and electronic device | |
JP2012059107A (ja) | Emotion estimation device, emotion estimation method, and program | |
WO2020215590A1 (fr) | Intelligent image-capture device and scene generation method therefor based on biometric recognition | |
RU2720359C1 (ru) | Method and equipment for recognizing emotions in speech | |
KR20200025532A (ko) | Emotion recognition system based on voice data and application method thereof | |
KR20180081922A (ko) | Method for responding to input speech of an electronic device and the electronic device | |
US20180240458A1 (en) | Wearable apparatus and method for vocabulary measurement and enrichment | |
JP2016062077A (ja) | Dialogue device, dialogue system, dialogue program, server, server control method, and server control program | |
JP2012230535A (ja) | Electronic device and control program for electronic device | |
TW202223804A (zh) | Electronic resource push method and system | |
KR20200092207A (ko) | Electronic device and method for providing a graphic object corresponding to emotion information using the same | |
WO2019235190A1 (fr) | Information processing device, information processing method, program, and conversation system | |
CN113764099A (zh) | Artificial-intelligence-based psychological state analysis method, apparatus, device, and medium | |
CN110111795B (zh) | Voice processing method and terminal device | |
CN112149599B (zh) | Expression tracking method and apparatus, storage medium, and electronic device | |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
121 | Ep: the epo has been informed by wipo that ep was designated in this application |
Ref document number: 19958150 Country of ref document: EP Kind code of ref document: A1 |
|
NENP | Non-entry into the national phase |
Ref country code: DE |
|
122 | Ep: pct application non-entry in european phase |
Ref document number: 19958150 Country of ref document: EP Kind code of ref document: A1 |