WO2021134250A1 - Emotion management method and device, and computer-readable storage medium - Google Patents

Emotion management method and device, and computer-readable storage medium Download PDF

Info

Publication number
WO2021134250A1
WO2021134250A1 (PCT/CN2019/130021)
Authority
WO
WIPO (PCT)
Prior art keywords
emotion
user
result
feature data
emotional
Prior art date
Application number
PCT/CN2019/130021
Other languages
French (fr)
Chinese (zh)
Inventor
肖岚
朱永胜
Original Assignee
深圳市易优斯科技有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 深圳市易优斯科技有限公司
Priority to PCT/CN2019/130021 (WO2021134250A1)
Priority to CN201980003396.6A (patent CN111149172B)
Publication of WO2021134250A1

Links

Classifications

    • G PHYSICS
    • G16 INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR SPECIFIC APPLICATION FIELDS
    • G16H HEALTHCARE INFORMATICS, i.e. INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR THE HANDLING OR PROCESSING OF MEDICAL OR HEALTHCARE DATA
    • G16H 20/00 ICT specially adapted for therapies or health-improving plans, e.g. for handling prescriptions, for steering therapy or for monitoring patient compliance
    • G16H 20/70 ICT specially adapted for therapies or health-improving plans, e.g. for handling prescriptions, for steering therapy or for monitoring patient compliance relating to mental therapies, e.g. psychological therapy or autogenous training
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V 40/10 Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V 40/16 Human faces, e.g. facial parts, sketches or expressions
    • G06V 40/172 Classification, e.g. identification
    • G PHYSICS
    • G10 MUSICAL INSTRUMENTS; ACOUSTICS
    • G10L SPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
    • G10L 25/00 Speech or voice analysis techniques not restricted to a single one of groups G10L15/00 - G10L21/00
    • G10L 25/48 Speech or voice analysis techniques not restricted to a single one of groups G10L15/00 - G10L21/00 specially adapted for particular use
    • G10L 25/51 Speech or voice analysis techniques not restricted to a single one of groups G10L15/00 - G10L21/00 specially adapted for particular use for comparison or discrimination
    • G10L 25/63 Speech or voice analysis techniques not restricted to a single one of groups G10L15/00 - G10L21/00 specially adapted for particular use for comparison or discrimination for estimating an emotional state

Definitions

  • This application relates to the field of communication technology, and in particular to emotion management methods, devices, and computer-readable storage media.
  • At present, user emotions are generally managed by using a camera to capture the user's face and then applying image recognition to the captured image to infer the user's expression, and thus the user's emotion.
  • However, the accuracy of such facial expression recognition is low, which leads to inaccurate judgments of the user's emotion and a poor user experience.
  • This way of managing user emotions also cannot actually manage or regulate them.
  • The main purpose of this application is to propose an emotion management method, device, and computer-readable storage medium, aiming to solve the technical problem that user emotions are hard to detect and hard to manage and adjust.
  • To that end, this application provides an emotion management method that includes the following steps:
  • acquiring the user's voice feature data and body feature data;
  • processing the voice feature data and the body feature data to determine a user emotion result;
  • performing emotion management on the user according to the user emotion result and preset rules.
  • The present application also provides a device that includes a memory, a processor, and an emotion management program stored in the memory and runnable on the processor; when the emotion management program is executed by the processor, the steps of the emotion management method described above are realized.
  • The present application also provides a computer-readable storage medium on which an emotion management program is stored; when the emotion management program is executed by a processor, the steps of the emotion management method described above are realized.
  • This application provides an emotion management method, system, and computer-readable storage medium that acquire the user's voice feature data and body feature data, process the voice feature data and the body feature data to determine a user emotion result, and perform emotion management on the user according to the user emotion result and preset rules.
  • In this way, the present application can detect changes in the user's emotions and can manage and adjust them.
  • FIG. 1 is a schematic diagram of the terminal structure of the hardware operating environment involved in the solutions of the embodiments of this application;
  • FIG. 2 is a schematic flowchart of the first embodiment of the emotion management method of this application;
  • FIG. 3 is a schematic flowchart of the second embodiment of the emotion management method of this application;
  • FIG. 4 is a schematic flowchart of the third embodiment of the emotion management method of this application;
  • FIG. 5 is a schematic flowchart of the fourth embodiment of the emotion management method of this application.
  • The main solution of the embodiments of this application is: acquire the user's voice feature data and body feature data; process the voice feature data and the body feature data to determine a user emotion result; and perform emotion management on the user according to the user emotion result and preset rules.
  • Existing approaches to managing user emotions generally use a camera to capture the user's face and then apply image recognition to the captured image to infer the user's expression, and thus the user's emotion.
  • The accuracy of such facial expression recognition is low, which leads to inaccurate judgments of the user's emotion and a poor user experience.
  • This way of managing user emotions also cannot actually manage or regulate them.
  • This application aims to solve the technical problem that user emotions are hard to detect and hard to manage and adjust.
  • FIG. 1 is a schematic diagram of a terminal structure of a hardware operating environment involved in a solution of an embodiment of the present application.
  • The terminal in the embodiments of this application may be a PC, or a mobile terminal device with a display function such as a smartphone or tablet computer.
  • the terminal may include: a processor 1001, such as a CPU, a network interface 1004, a user interface 1003, a memory 1005, and a communication bus 1002.
  • the communication bus 1002 is used to implement connection and communication between these components.
  • the user interface 1003 may include a display screen (Display) and an input unit such as a keyboard (Keyboard), and the optional user interface 1003 may also include a standard wired interface and a wireless interface.
  • the network interface 1004 may optionally include a standard wired interface and a wireless interface (such as a WI-FI interface).
  • the memory 1005 may be a high-speed RAM memory, or a stable memory (non-volatile memory), such as a magnetic disk memory.
  • the memory 1005 may also be a storage device independent of the aforementioned processor 1001.
  • Preferably, the terminal may also include a camera, RF (radio frequency) circuits, sensors, audio circuits, a WiFi module, and so on.
  • The sensors include light sensors, motion sensors, and others.
  • A light sensor may include an ambient light sensor and a proximity sensor.
  • The ambient light sensor can adjust the brightness of the display according to the ambient light;
  • the proximity sensor can turn off the display and/or backlight when the mobile terminal is moved to the ear.
  • As one kind of motion sensor, a gravity acceleration sensor can detect the magnitude of acceleration in each direction (usually three axes) and, when stationary, can detect the magnitude and direction of gravity.
  • It can be used in applications that recognize the posture of the mobile terminal (such as switching between landscape and portrait, related games, and magnetometer posture calibration) and in vibration-recognition functions (such as a pedometer or tap detection); of course, the mobile terminal can also be equipped with other sensors such as a gyroscope, barometer, hygrometer, thermometer, and infrared sensor, which will not be described further here.
  • Those skilled in the art can understand that the terminal structure shown in FIG. 1 does not constitute a limitation on the terminal, which may include more or fewer components than shown, combine some components, or arrange the components differently.
  • the memory 1005 as a computer storage medium may include an operating system, a network communication module, a user interface module, and an emotion management program.
  • the network interface 1004 is mainly used to connect to the back-end server and communicate with the back-end server;
  • the user interface 1003 is mainly used to connect to the client (user side) and communicate with the client;
  • the processor 1001 can be used to call the emotion management program stored in the memory 1005 and perform the following operations:
  • Acquire voice feature data and body feature data of the user; process the voice feature data and the body feature data to determine a user emotion result; perform emotion management on the user according to the user emotion result and preset rules.
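For orientation, the following Python sketch shows one way this acquire/infer/manage cycle could be wired together. It is a minimal toy, not the claimed implementation; every name in it (acquire_features, infer_emotion, apply_rules) and every threshold is a hypothetical stand-in.

```python
import random
import time

POLL_INTERVAL_S = 5  # within the 1 s to 1 min sampling range the description suggests

def acquire_features():
    """Stand-in for the device sensors: returns (voice, body) feature dicts."""
    voice = {"volume": random.uniform(0, 1), "speech_rate": random.uniform(0, 1)}
    body = {"heart_rate": random.randint(55, 120), "skin_conductance": random.uniform(0, 1)}
    return voice, body

def infer_emotion(voice, body):
    """Toy stand-in for step S20: combine voice and body cues into one label."""
    if body["heart_rate"] > 100 and voice["volume"] > 0.8:
        return "angry"
    return "calm"

def apply_rules(emotion):
    """Toy stand-in for step S30: act on the result per preset rules."""
    if emotion == "angry":
        print("playing soothing music")

def emotion_management_loop(cycles=3):
    for _ in range(cycles):
        voice, body = acquire_features()
        apply_rules(infer_emotion(voice, body))
        time.sleep(POLL_INTERVAL_S)
```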
  • Further, the processor 1001 may call the emotion management program stored in the memory 1005 and also perform the following operation:
  • Perform emotion management on the user according to the user emotion result and preset music rules.
  • Further, the processor 1001 may call the emotion management program stored in the memory 1005 and also perform the following operation:
  • Perform emotion management on the user according to the user emotion result and preset coaching rules.
  • Further, the processor 1001 may call the emotion management program stored in the memory 1005 and also perform the following operations:
  • When the user emotion result is a preset emotion result, send the user emotion result to an emotion management agency;
  • receive the emotion intervention information returned by the emotion management agency according to the user emotion result;
  • perform emotion management on the user according to the emotion intervention information.
  • Further, the processor 1001 may call the emotion management program stored in the memory 1005 and also perform the following operations:
  • Process the voice feature data to obtain a user voice emotion result;
  • process the body feature data to obtain a user body emotion result;
  • verify the user voice emotion result against the user body emotion result to determine the user emotion result.
  • Further, the processor 1001 may call the emotion management program stored in the memory 1005 and also perform the following operations:
  • Obtain emotion-related data of the interlocutor;
  • process the emotion-related data to obtain an interlocutor emotion result;
  • the step of performing emotion management on the user according to the user emotion result and preset rules then includes:
  • performing emotion management on the user according to the user emotion result, the interlocutor emotion result, and preset rules.
  • Further, the processor 1001 may call the emotion management program stored in the memory 1005 and also perform the following operations:
  • Send the voice feature data and the body feature data to a server;
  • receive the user emotion result returned by the server according to the voice feature data and the body feature data.
  • Further, the processor 1001 may call the emotion management program stored in the memory 1005 and also perform the following operation:
  • Perform denoising processing on the voice feature data and the body feature data.
  • FIG. 2 is a schematic flowchart of the first embodiment of the emotion management method of this application.
  • In this embodiment of the application, the emotion management method is applied to an emotion management device, and the emotion management method includes:
  • Step S10: acquire the user's voice feature data and body feature data.
  • In this embodiment, in order to reduce the impact of the user's emotional fluctuations on others, or to give the user appropriate emotional adjustment when fluctuations occur, the emotion management device actively acquires the user's voice feature data and body feature data at preset time intervals.
  • The emotion management device may be a wearable device used by the user, such as smart glasses, a smart bracelet, or wireless earphones. It may be provided with a microphone for acquiring the user's voice feature data; it may also be provided with a human body sensor for obtaining the user's brain waves, skin conductivity, and heart rate, an acceleration sensor for detecting whether the user's body is in a weightless state, and a temperature sensor for obtaining the user's body temperature. The emotion management device may also be a mobile terminal, or a device used by an emotion recognition agency to recognize user emotions. The voice feature data is the speech data collected by the microphone of the emotion management device or of another collection device; the body feature data is the data collected by the human body sensor, acceleration sensor, and/or temperature sensor of the emotion management device or of another collection device, and may include the user's brain waves, skin conductivity, heart rate, body temperature, blood pressure, and so on. The preset time interval is set to facilitate timely judgment and detection of the user's emotions; it can be set between 1 s and 1 min, for example to 3 s, 4 s, 5 s, 6 s, or 10 s.
  • After step S10 acquires the user's voice feature data and body feature data, the method may include:
  • Step a: perform denoising processing on the voice feature data and body feature data.
  • In this embodiment, the emotion management device calculates optimization parameters for the voice feature data and the body feature data; the optimization parameters include directivity parameters and gain parameters.
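The description does not spell out the denoising algorithm beyond naming directivity and gain parameters. As a rough illustration of the gain side only, here is a minimal spectral-gate sketch; the noise_floor value and the spectral-gate approach itself are assumptions, not the claimed method.

```python
import numpy as np

def denoise(signal: np.ndarray, noise_floor: float = 0.02) -> np.ndarray:
    """Minimal gain-based denoising sketch: attenuate spectral bins whose
    magnitude is near an assumed noise floor."""
    spectrum = np.fft.rfft(signal)
    magnitude = np.abs(spectrum)
    # Per-bin gain in [0, 1]: close to 0 near the noise floor, close to 1 above it.
    gain = np.clip((magnitude - noise_floor) / np.maximum(magnitude, 1e-12), 0.0, 1.0)
    return np.fft.irfft(gain * spectrum, n=len(signal))
```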
  • Step S20: process the voice feature data and the body feature data to determine a user emotion result.
  • In this embodiment, after obtaining the user's voice feature data and body feature data, the emotion management device processes them to obtain the user emotion result.
  • The user emotion result is obtained by processing and computing over the user's voice feature data and body feature data.
  • Step S30: perform emotion management on the user according to the user emotion result and preset rules.
  • In this embodiment, after the emotion management device obtains the user emotion result, it performs emotion management on the user according to the user emotion result and preset rules.
  • Step S30, performing emotion management on the user according to the user emotion result and preset rules, may include:
  • Step b: perform emotion management on the user according to the user emotion result and preset music rules.
  • In this embodiment, after the emotion management device obtains the user emotion result, it performs emotion management on the user according to the user emotion result and preset music rules.
  • The preset music rules adjust the user's emotions through music according to the obtained user emotion result; for example, when the user emotion result is angry, a beautiful piece of music is played to the user, or the volume of the music being played is adjusted, in order to ease the user's emotions.
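A preset music rule can be read as a simple lookup from emotion result to playback actions. The sketch below is a hedged illustration: the DemoPlayer class, the rule table entries, and the track names are all hypothetical stand-ins mirroring the angry-gets-calming-music example above.

```python
class DemoPlayer:
    """Hypothetical stand-in for the audio output of the wearable or phone."""
    def play(self, track):
        print(f"playing {track}")
    def set_volume(self, level):
        print(f"volume set to {level}")

# Preset music rules keyed by emotion result (all entries are placeholders).
MUSIC_RULES = {
    "angry": [("play", "calming_track.mp3"), ("set_volume", 0.3)],
    "sad": [("play", "upbeat_track.mp3")],
    "happy": [],  # no intervention needed
}

def manage_with_music(player, emotion_result):
    """Apply every action the preset rule lists for this emotion result."""
    for action, arg in MUSIC_RULES.get(emotion_result, []):
        getattr(player, action)(arg)

manage_with_music(DemoPlayer(), "angry")
```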
  • Step S30, performing emotion management on the user according to the user emotion result and preset rules, may also include:
  • Step c: perform emotion management on the user according to the user emotion result and preset coaching rules.
  • In this embodiment, after the emotion management device obtains the user emotion result, it performs emotion management on the user according to the user emotion result and preset coaching rules.
  • Under the preset coaching rules, the emotion management device recommends a coach role corresponding to the obtained user emotion result and plays a spoken passage or story to the user, in order to divert the user's attention and ease the user's emotions.
  • Step S30, performing emotion management on the user according to the user emotion result and preset rules, may also include:
  • Step d1: when the user emotion result is a preset emotion result, send the user emotion result to an emotion management agency;
  • Step d2: receive the emotion intervention information returned by the emotion management agency according to the user emotion result;
  • Step d3: perform emotion management on the user according to the emotion intervention information.
  • In this embodiment, the emotion management device sends the user emotion result to the emotion management agency so that the agency can judge it and provide advice on emotion management; after the agency receives the user emotion result sent by the emotion management device, it processes and judges the result and produces emotion intervention information for the emotion management device to act on.
  • The emotion management agency then sends the emotion intervention information back to the emotion management device; after receiving it, the emotion management device manages the user's emotions based on the intervention information.
  • The emotion intervention information may be information that stimulates the user's body to regulate the user's emotions when the user emotion result is serious.
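Steps d1 to d3 amount to a threshold-triggered escalation to a remote agency. A minimal sketch follows, assuming a hypothetical JSON endpoint and a hypothetical set of "preset" serious results; none of these names come from the patent.

```python
import json
import urllib.request

SERIOUS_RESULTS = {"severe_anxiety", "extreme_anger"}  # hypothetical preset emotion results
AGENCY_URL = "https://agency.example.com/emotion"      # placeholder endpoint

def escalate_if_serious(emotion_result):
    """Sketch of steps d1-d3: forward serious results, return intervention info."""
    if emotion_result not in SERIOUS_RESULTS:
        return None  # no escalation needed
    payload = json.dumps({"emotion_result": emotion_result}).encode()
    req = urllib.request.Request(
        AGENCY_URL, data=payload, headers={"Content-Type": "application/json"}
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)  # emotion intervention information from the agency
```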
  • Through the above solution, this embodiment acquires the user's voice feature data and body feature data, processes the voice feature data and the body feature data to determine a user emotion result, and performs emotion management on the user according to the user emotion result and preset rules.
  • As a result, changes in the user's emotions can be discovered in time, and the user's emotions can be managed and adjusted.
  • FIG. 3 is a schematic flowchart of a second embodiment of the emotion management method of this application.
  • Step S20, processing the voice feature data and the body feature data to determine the user emotion result, may include:
  • Step S21: process the voice feature data to obtain a user voice emotion result.
  • In this embodiment, the emotion management device processes the user's voice feature data to obtain the user voice emotion result.
  • The user voice emotion result is obtained by processing and computing over the user's voice feature data.
  • Step e1: recognize the keyword information and intonation information contained in the voice feature data.
  • In this embodiment, the emotion management device extracts keyword information and intonation information from the voice feature data, where the intonation information includes at least one of the volume, speech rate, pitch, and their respective change trends in the voice data.
  • A word-segmentation database can be used to remove meaningless words from the semantic content and extract keyword information that can indicate the user's emotion; for the recognized intonation, the intonation information that meets preset conditions is selected. For example, volume that exceeds a maximum preset threshold or falls below a minimum preset threshold is selected as target intonation, and speech rate that exceeds a certain preset threshold is also used as intonation information.
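The volume and speech-rate screening just described is essentially threshold filtering. Here is a minimal sketch, with hypothetical threshold values and an assumed per-frame dict shape produced by an upstream analyzer.

```python
# Hypothetical presets: volume outside [VOLUME_MIN, VOLUME_MAX] or speech
# rate above RATE_MAX is kept as target intonation information.
VOLUME_MAX, VOLUME_MIN, RATE_MAX = 0.8, 0.1, 5.0  # rate in syllables/second

def extract_intonation_features(frames):
    """Keep only frames whose volume or speech rate crosses the presets.

    `frames` is assumed to be an iterable of dicts with "volume" and
    "speech_rate" keys.
    """
    return [
        f for f in frames
        if f["volume"] > VOLUME_MAX
        or f["volume"] < VOLUME_MIN
        or f["speech_rate"] > RATE_MAX
    ]
```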
  • Step e2: generate a voice emotion model from the keyword information and the intonation information, and match the voice emotion model against the standard voice emotion models in the emotion model library to generate the user voice emotion result.
  • In this embodiment, the emotion management device generates a voice emotion model from the keyword information and intonation information and matches it against the standard voice emotion models in the emotion model library to generate the user voice emotion result.
  • Step e2, generating a voice emotion model from the keyword information and the intonation information and matching it against the standard voice emotion models in the emotion model library to generate the user voice emotion result, may include:
  • Step e21: determine voice feature points according to the keyword information and the intonation information.
  • In this embodiment, the recognized keyword information and intonation information are further analyzed and screened, and the keywords and intonations that clearly indicate the user's emotions are determined as voice feature points.
  • The voice feature points include keyword feature points and intonation feature points.
  • The keyword information can be screened through a pre-established database of emotionally sensitive words, and the filtered keyword information is determined as the keyword feature points.
  • The emotionally sensitive word database contains vocabulary that the user frequently says under various emotions. Since intonation information is usually displayed as a waveform, points with a pronounced change trend, such as a point where the speech rate suddenly increases, can be used as intonation feature points.
  • Step e22: generate a voice emotion model from the voice feature points, and calibrate the voice feature points on the voice emotion model.
  • In this embodiment, a voice emotion model is generated from the determined voice feature points so that the user's emotions can be analyzed according to the voice emotion model.
  • The voice feature points are calibrated on the voice emotion model; the calibrated points can be the more prominent of the voice feature points determined above, which further screens the user's emotional characteristics and makes them more distinct.
  • Step e23: match the voice emotion model against the standard voice emotion models in the emotion model library to adjust the calibrated voice feature points on the voice emotion model, and record the voice feature change data of the voice feature points.
  • In this embodiment, the emotion management device matches the voice emotion model against the standard voice emotion models in the emotion model library, fine-tunes the calibrated voice feature points on the voice emotion model, and records the voice feature change data of the voice feature points.
  • The standard voice emotion models can be established from the user's daily voice data and the expressions corresponding to that data.
  • Step e24: match the voice feature change data against the intonation feature data and the psychological behavior feature data in the emotion model library, and generate the user voice emotion result according to the matching result.
  • In this embodiment, the user's emotion or emotion change data is output according to the matching result.
  • Step S22: process the body feature data to obtain a user body emotion result.
  • In this embodiment, the emotion management device processes the user's body feature data to obtain the user body emotion result.
  • The user body emotion result is obtained by processing and computing over the user's body feature data.
  • Step S23: verify the user voice emotion result against the user body emotion result, and determine the user emotion result.
  • In this embodiment, the emotion management device compares the user body emotion result and the user voice emotion result at each time point. If they differ at a time point, the user voice emotion result at that time point is deleted; if they are the same at a time point, the user voice emotion result at that time point is retained. After the emotion management device has compared, one by one, all the user voice emotion results within the preset time interval against the corresponding body feature time information, the user voice emotion results whose comparisons differ are deleted, and the user emotion result is determined from the retained user voice emotion results.
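Step S23's verification can be pictured as keeping only the time points where the two result streams agree. A minimal sketch, under the assumption that both results are sampled as {time_point: label} dicts on the same clock:

```python
def verify_voice_with_body(voice_results, body_results):
    """Keep a voice emotion result only when the body emotion result at the
    same time point agrees with it; disagreeing points are deleted."""
    return {
        t: emotion
        for t, emotion in voice_results.items()
        if body_results.get(t) == emotion
    }

# Example: only the 10 s sample survives verification.
voice = {0: "angry", 5: "calm", 10: "sad"}
body = {0: "calm", 5: "angry", 10: "sad"}
assert verify_voice_with_body(voice, body) == {10: "sad"}
```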
  • Through the above solution, this embodiment acquires the user's voice feature data and body feature data, processes the voice feature data to obtain a user voice emotion result, and processes the body feature data to obtain a user body emotion result;
  • the user voice emotion result is then verified against the user body emotion result to determine the user emotion result, and emotion management is performed on the user according to the user emotion result and preset rules.
  • FIG. 4 is a schematic flowchart of a third embodiment of the emotion management method of this application.
  • Based on the embodiment shown in FIG. 2 above, the method may further include:
  • Step S40: obtain emotion-related data of the interlocutor.
  • In this embodiment, the emotion management device can obtain emotion-related data about the person conversing with the user. The emotion-related data may be the interlocutor's voice data, the interlocutor's face data, or the interlocutor's body feature data. The emotion management device can be equipped with a camera for obtaining the interlocutor's face data; the interlocutor's voice data is collected through the microphone of the emotion management device or of another collection device, and the interlocutor's face data is collected through the camera of the emotion management device or of another collection device.
  • Step S50: process the emotion-related data to obtain an interlocutor emotion result.
  • In this embodiment, the emotion management device processes the emotion-related data to obtain the interlocutor emotion result.
  • The interlocutor emotion result is obtained by processing and computing over the interlocutor's emotion-related data.
  • Step S40, obtaining the emotion-related data of the interlocutor, may include:
  • Step f1: obtain the interlocutor's voice data and the interlocutor's face data.
  • In this embodiment, after the emotion management device obtains the user's voice feature data and body feature data, it can obtain the voice data and face data of the person conversing with the user.
  • Step S50, processing the emotion-related data to determine the interlocutor emotion result, may include:
  • Step g1: process the interlocutor's voice data to obtain an interlocutor voice emotion result.
  • In this embodiment, the emotion management device processes the interlocutor's voice data to obtain the interlocutor voice emotion result.
  • The interlocutor voice emotion result is obtained by processing and computing over the interlocutor's voice data.
  • Step g2: process the interlocutor's face data to obtain an interlocutor face emotion result.
  • In this embodiment, the emotion management device processes the interlocutor's face data to obtain the interlocutor face emotion result.
  • The interlocutor face emotion result is obtained by processing and computing over the interlocutor's face data.
  • Step g2, processing the interlocutor's face data to obtain the interlocutor face emotion result, may include:
  • Step g21: recognize the interlocutor face image information contained in the interlocutor's face data.
  • In this embodiment, the emotion management device extracts the interlocutor's face image information from the interlocutor's face data. The face image information may be image information representing the interlocutor's expression, for example an image showing that the interlocutor is happy, sad, or angry. The emotion management device can remove images in the face image information that contain no facial expression, or images in which the interlocutor's expression is unclear because of the interlocutor's rapid rotation or movement.
  • Step g22: generate an interlocutor face emotion model from the interlocutor face image information, and match the interlocutor face emotion model against the standard face emotion models in the emotion model library to generate the interlocutor face emotion result.
  • In this embodiment, the emotion management device generates an interlocutor face emotion model from the interlocutor's face image information and matches it against the standard face emotion models in the emotion model library to generate the interlocutor face emotion result.
  • Step g22 may include:
  • Step g221: determine the interlocutor face emotion feature points according to the interlocutor face image information.
  • In this embodiment, the emotion management device further analyzes and screens the recognized interlocutor face image information to determine the face images that clearly show the interlocutor's expression, that is, to determine the interlocutor face emotion feature points.
  • Step g222: generate an interlocutor face emotion model from the interlocutor face emotion feature points, and calibrate the interlocutor face emotion feature points on the model.
  • In this embodiment, the emotion management device generates an interlocutor face emotion model from the determined face emotion feature points so that the interlocutor's emotions can be analyzed according to the model.
  • The interlocutor face emotion feature points are calibrated on the interlocutor face emotion model; the calibrated points can be the more prominent of the feature points determined above, which further screens the interlocutor's emotional characteristics and makes them more distinct.
  • Step g223: match the interlocutor face emotion model against the standard face emotion models in the emotion model library to adjust the calibrated face emotion feature points on the model, and record the face feature change data of the interlocutor face emotion feature points.
  • In this embodiment, the emotion management device matches the interlocutor face emotion model against the standard face emotion models in the emotion model library, fine-tunes the calibrated face emotion feature points, and records the face feature change data of the feature points.
  • Step g224: match the interlocutor face feature change data against the expression feature data and the psychological behavior feature data in the emotion model library, and generate the interlocutor face emotion result according to the matching result.
  • In this embodiment, the emotion management device outputs the interlocutor face emotion result according to the matching of the face feature change data against the expression feature data and psychological behavior feature data in the emotion model library.
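The matching in step g224 is not pinned to a particular similarity measure. As one plausible reading, the sketch below does nearest-model matching by cosine similarity against a hypothetical library of standard models; the three-dimensional vectors are placeholders for real feature-change data.

```python
import numpy as np

# Hypothetical standard emotion models: emotion label -> feature vector.
FACE_STANDARD_MODELS = {
    "happy": np.array([0.9, 0.1, 0.2]),
    "angry": np.array([0.1, 0.9, 0.7]),
    "sad": np.array([0.2, 0.3, 0.9]),
}

def match_face_emotion(feature_change: np.ndarray) -> str:
    """Return the label of the standard model closest to the observed
    face feature change data (cosine similarity is an assumption)."""
    def cosine(a, b):
        return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))
    return max(FACE_STANDARD_MODELS,
               key=lambda k: cosine(feature_change, FACE_STANDARD_MODELS[k]))
```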
  • Step g3: verify the interlocutor voice emotion result against the interlocutor face emotion result, and determine the interlocutor emotion result.
  • In this embodiment, the emotion management device compares the interlocutor face emotion result and the interlocutor voice emotion result at each time point. If they differ at a time point, the interlocutor voice emotion result at that time point is deleted; if they are the same at a time point, the interlocutor voice emotion result at that time point is retained. After all interlocutor voice emotion results within the preset time interval have been compared one by one, the results whose comparisons differ are deleted, and the interlocutor emotion result is determined from the retained results.
  • Step S30, performing emotion management on the user according to the user emotion result and preset rules, may then include:
  • Step S31: perform emotion management on the user according to the user emotion result, the interlocutor emotion result, and preset rules.
  • In this embodiment, after the emotion management device obtains the interlocutor emotion result and the user emotion result, it performs emotion management on the user according to the user emotion result, the interlocutor emotion result, and preset rules.
  • Through the above solution, this embodiment acquires the user's voice feature data and body feature data, obtains the interlocutor's emotion-related data, processes the emotion-related data to obtain the interlocutor emotion result, processes the voice feature data and the body feature data to determine the user emotion result, and performs emotion management on the user according to the user emotion result, the interlocutor emotion result, and preset rules.
  • As a result, changes in the user's emotions can be discovered in time, and the user's emotions can be managed and adjusted.
  • FIG. 5 is a schematic flowchart of the fourth embodiment of the emotion management method of this application. Based on the embodiment shown in FIG. 2 above, after the user's voice feature data and body feature data are obtained, the method may further include:
  • Step S24: send the voice feature data and the body feature data to a server.
  • Step S25: receive the user emotion result returned by the server according to the voice feature data and the body feature data.
  • In this embodiment, the emotion management device can send the user's voice feature data and body feature data to a cloud server for processing, so that the cloud server receives the voice feature data and body feature data.
  • The cloud server processes the voice feature data and body feature data.
  • The cloud server obtains the user emotion result from the voice feature data and body feature data and sends the obtained user emotion result to the emotion management device.
  • The emotion management device receives the user emotion result returned by the cloud server according to the voice feature data and the body feature data.
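Steps S24 and S25 describe a plain upload-and-response exchange with the cloud server. A minimal sketch, assuming a hypothetical endpoint and JSON request/response shape:

```python
import json
import urllib.request

SERVER_URL = "https://cloud.example.com/emotion/analyze"  # placeholder endpoint

def request_emotion_result(voice_features: dict, body_features: dict) -> str:
    """Upload both feature sets (step S24) and return the user emotion
    result the server sends back (step S25)."""
    payload = json.dumps({"voice": voice_features, "body": body_features}).encode()
    req = urllib.request.Request(
        SERVER_URL, data=payload, headers={"Content-Type": "application/json"}
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)["emotion_result"]
```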
  • Through the above solution, this embodiment acquires the user's voice feature data and body feature data, sends the voice feature data and the body feature data to the server, receives the user emotion result the server returns according to them, and performs emotion management on the user according to the user emotion result and preset rules.
  • As a result, changes in the user's emotions can be discovered in time, and the user's emotions can be managed and adjusted.
  • This application also provides an emotion management device.
  • The emotion management device of this application includes a memory, a processor, and an emotion management program stored in the memory and runnable on the processor; when the emotion management program is executed by the processor, the steps of the emotion management method described above are realized.
  • This application also provides a computer-readable storage medium.
  • An emotion management program is stored on the computer-readable storage medium of this application; when the emotion management program is executed by a processor, the steps of the emotion management method described above are realized.
  • The technical solution of this application, in essence or in the part that contributes to the existing technology, can be embodied in the form of a software product. The computer software product is stored in a storage medium as described above (such as ROM/RAM, a magnetic disk, or an optical disk) and includes several instructions that cause a terminal device (which can be a mobile phone, a computer, a server, an air conditioner, or a network device) to execute the methods described in the embodiments of this application.

Landscapes

  • Engineering & Computer Science (AREA)
  • Health & Medical Sciences (AREA)
  • General Health & Medical Sciences (AREA)
  • Child & Adolescent Psychology (AREA)
  • Psychiatry (AREA)
  • Human Computer Interaction (AREA)
  • Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Hospice & Palliative Care (AREA)
  • General Physics & Mathematics (AREA)
  • Medical Informatics (AREA)
  • Theoretical Computer Science (AREA)
  • Oral & Maxillofacial Surgery (AREA)
  • Psychology (AREA)
  • Social Psychology (AREA)
  • Epidemiology (AREA)
  • Developmental Disabilities (AREA)
  • Primary Health Care (AREA)
  • Public Health (AREA)
  • Computational Linguistics (AREA)
  • Signal Processing (AREA)
  • Audiology, Speech & Language Pathology (AREA)
  • Acoustics & Sound (AREA)
  • User Interface Of Digital Computer (AREA)

Abstract

An emotion management method, an emotion management device and a computer-readable storage medium. The method comprises: acquiring voice feature data and body feature data of a user (S10); processing the voice feature data and the body feature data, and determining a user emotion result (S20); and performing emotion management on the user according to the user emotion result and a preset rule (S30). The method can realize the function of discovering user emotion changes, and can realize the functions of managing and adjusting user emotions.

Description

Emotion management method, device, and computer-readable storage medium
Technical field
This application relates to the field of communication technology, and in particular to emotion management methods, devices, and computer-readable storage media.
Background art
With the development of society, people's material lives have become more and more abundant. However, people's sense of well-being has not kept improving with material satisfaction; on the contrary, the various pressures brought by social competition have introduced more and more negative emotions into our lives. According to incomplete statistics, about 200 million Chinese people have mental illnesses of varying degrees, and a conservative estimate is that 30 million Chinese people suffer from depression. These illnesses arise mainly because people do not pay attention to managing their own emotions; the effects accumulate over time and eventually lead to relatively serious mental illness, and even to very bad consequences such as suicide.
At present, user emotions are generally managed by using a camera to capture the user's face and then applying image recognition to the captured image to infer the user's expression, and thus the user's emotion. The accuracy of such facial expression recognition is low, which leads to inaccurate judgments of the user's emotion and a poor user experience, and this way of managing user emotions cannot manage or regulate them.
Technical solution
The main purpose of this application is to propose an emotion management method, device, and computer-readable storage medium, aiming to solve the technical problem that user emotions are hard to detect and hard to manage and adjust.
To achieve the above objective, this application provides an emotion management method that includes the following steps:
acquiring the user's voice feature data and body feature data;
processing the voice feature data and the body feature data to determine a user emotion result;
performing emotion management on the user according to the user emotion result and preset rules.
In addition, to achieve the above objective, this application also provides a device that includes a memory, a processor, and an emotion management program stored in the memory and runnable on the processor; when the emotion management program is executed by the processor, the steps of the emotion management method described above are realized.
In addition, to achieve the above objective, this application also provides a computer-readable storage medium on which an emotion management program is stored; when the emotion management program is executed by a processor, the steps of the emotion management method described above are realized.
This application provides an emotion management method, system, and computer-readable storage medium that acquire the user's voice feature data and body feature data, process the voice feature data and the body feature data to determine a user emotion result, and perform emotion management on the user according to the user emotion result and preset rules. In this way, this application can detect changes in the user's emotions and can manage and adjust them.
Description of the drawings
FIG. 1 is a schematic diagram of the terminal structure of the hardware operating environment involved in the solutions of the embodiments of this application;
FIG. 2 is a schematic flowchart of the first embodiment of the emotion management method of this application;
FIG. 3 is a schematic flowchart of the second embodiment of the emotion management method of this application;
FIG. 4 is a schematic flowchart of the third embodiment of the emotion management method of this application;
FIG. 5 is a schematic flowchart of the fourth embodiment of the emotion management method of this application.
The realization, functional characteristics, and advantages of the purpose of this application will be further described with reference to the accompanying drawings in conjunction with the embodiments.
Embodiments of the present invention
It should be understood that the specific embodiments described here are only used to explain the application and are not intended to limit it.
The main solution of the embodiments of this application is: acquire the user's voice feature data and body feature data; process the voice feature data and the body feature data to determine a user emotion result; and perform emotion management on the user according to the user emotion result and preset rules.
Existing approaches to managing user emotions generally use a camera to capture the user's face and then apply image recognition to the captured image to infer the user's expression, and thus the user's emotion. The accuracy of such facial expression recognition is low, which leads to inaccurate judgments of the user's emotion and a poor user experience, and this way of managing user emotions cannot manage or regulate them.
This application aims to solve the technical problem that user emotions are hard to detect and hard to manage and adjust.
As shown in FIG. 1, FIG. 1 is a schematic diagram of the terminal structure of the hardware operating environment involved in the solutions of the embodiments of this application.
The terminal in the embodiments of this application may be a PC, or a mobile terminal device with a display function such as a smartphone or tablet computer.
As shown in FIG. 1, the terminal may include a processor 1001 (for example, a CPU), a network interface 1004, a user interface 1003, a memory 1005, and a communication bus 1002. The communication bus 1002 implements connection and communication between these components. The user interface 1003 may include a display (Display) and an input unit such as a keyboard (Keyboard), and optionally may also include standard wired and wireless interfaces. The network interface 1004 may optionally include standard wired and wireless interfaces (such as a WI-FI interface). The memory 1005 may be a high-speed RAM memory or a stable non-volatile memory such as a magnetic disk memory; optionally, it may also be a storage device independent of the processor 1001.
Preferably, the terminal may also include a camera, RF (radio frequency) circuits, sensors, audio circuits, a WiFi module, and so on. The sensors include light sensors, motion sensors, and others. Specifically, a light sensor may include an ambient light sensor and a proximity sensor: the ambient light sensor can adjust the brightness of the display according to the ambient light, and the proximity sensor can turn off the display and/or backlight when the mobile terminal is moved to the ear. As one kind of motion sensor, a gravity acceleration sensor can detect the magnitude of acceleration in each direction (usually three axes) and, when stationary, the magnitude and direction of gravity; it can be used in applications that recognize the posture of the mobile terminal (such as switching between landscape and portrait, related games, and magnetometer posture calibration) and in vibration-recognition functions (such as a pedometer or tap detection). Of course, the mobile terminal can also be equipped with other sensors such as a gyroscope, barometer, hygrometer, thermometer, and infrared sensor, which will not be described further here.
Those skilled in the art can understand that the terminal structure shown in FIG. 1 does not constitute a limitation on the terminal, which may include more or fewer components than shown, combine some components, or arrange the components differently.
As shown in FIG. 1, the memory 1005, as a computer storage medium, may include an operating system, a network communication module, a user interface module, and an emotion management program.
In the terminal shown in FIG. 1, the network interface 1004 is mainly used to connect to, and exchange data with, a back-end server; the user interface 1003 is mainly used to connect to, and exchange data with, a client (user side); and the processor 1001 can be used to call the emotion management program stored in the memory 1005 and perform the following operations:
acquire the user's voice feature data and body feature data; process the voice feature data and the body feature data to determine a user emotion result; and perform emotion management on the user according to the user emotion result and preset rules.
Further, the processor 1001 may call the emotion management program stored in the memory 1005 and also perform the following operation:
perform emotion management on the user according to the user emotion result and preset music rules.
Further, the processor 1001 may call the emotion management program stored in the memory 1005 and also perform the following operation:
perform emotion management on the user according to the user emotion result and preset coaching rules.
Further, the processor 1001 may call the emotion management program stored in the memory 1005 and also perform the following operations:
when the user emotion result is a preset emotion result, send the user emotion result to an emotion management agency;
receive the emotion intervention information returned by the emotion management agency according to the user emotion result;
perform emotion management on the user according to the emotion intervention information.
Further, the processor 1001 may call the emotion management program stored in the memory 1005 and also perform the following operations:
process the voice feature data to obtain a user voice emotion result;
process the body feature data to obtain a user body emotion result;
verify the user voice emotion result against the user body emotion result to determine the user emotion result.
Further, the processor 1001 may call the emotion management program stored in the memory 1005 and also perform the following operations:
obtain emotion-related data of the interlocutor;
process the emotion-related data to obtain an interlocutor emotion result;
the step of performing emotion management on the user according to the user emotion result and preset rules then includes:
performing emotion management on the user according to the user emotion result, the interlocutor emotion result, and preset rules.
Further, the processor 1001 may call the emotion management program stored in the memory 1005 and also perform the following operations:
send the voice feature data and the body feature data to a server;
receive the user emotion result returned by the server according to the voice feature data and the body feature data.
Further, the processor 1001 may call the emotion management program stored in the memory 1005 and also perform the following operation:
perform denoising processing on the voice feature data and the body feature data.
基于上述硬件结构,提出本申请情绪管理方法实施例。Based on the above hardware structure, an embodiment of the emotion management method of the present application is proposed.
本申请情绪管理方法。The emotional management method of this application.
参照图2,图2为本申请情绪管理方法第一实施例的流程示意图。Referring to FIG. 2, FIG. 2 is a schematic flowchart of the first embodiment of the emotion management method of this application.
本申请实施例中,该情绪管理方法应用于情绪管理设备,所述情绪管理方法包括:In the embodiment of the present application, the emotion management method is applied to an emotion management device, and the emotion management method includes:
步骤S10,获取用户的语音特征数据和身体特征数据;Step S10: Acquire voice feature data and body feature data of the user;
在本实施例中,为了减小用户因为情绪波动给人带来的影响或者是当用户出现情绪波动的时候给给予用户适当的情绪调节,情绪管理设备主动在预设时间间隔获取用户的语音特征数据和身体特征数据。其中,情绪管理设备可以是用户使用的穿戴设备,如智能眼镜、智能手环或无线耳机等;情绪管理设备可以设置有用于获取用户语音特征数据的麦克风;情绪管理设备还可以设置有人体传感器,其中人体传感器用于获取用户的脑电波、皮肤电导率、心率;情绪管理设备还可以设置有用于获取用户身体是否处于失重状态的加速度传感器;情绪管理设备还可以设置有用于获取用户的体温的温度传感器;情绪管理设备也可以移动终端;情绪管理设备还可以是移动终端;情绪管理设备还可以是情绪识别机构用于识别用户情绪的设备;其中,语音特征数据是通过情绪管理设备的麦克风或者其它采集设备的麦克风采集的用户说话的语音数据;其中,身体特征数据可以是通过情绪管理设备的人体传感器、加速度传感器和/或温度传感器采集的用户说话的身体特征的数据;身体特征数据还可以是通过其它采集设备的人体传感器、加速度传感器和/或温度传感器采集的用户说话的身体特征的数据;身体特征数据可以包括:用户的脑电波、用户的皮肤电导率、用户的心率数据、用户的体温数据、用户的血压数据等。其中,预设时间间隔为便于及时判断和检测用户情绪而设置的,可以设置为1s-1min,具体可以设置为3s、4s、5s、6s、10s等。In this embodiment, in order to reduce the influence of the user due to emotional fluctuations or to give the user appropriate emotional adjustment when the user experiences emotional fluctuations, the emotion management device actively acquires the user's voice characteristics at preset time intervals Data and physical characteristics data. Among them, the emotion management device may be a wearable device used by the user, such as smart glasses, smart bracelet or wireless earphones, etc.; the emotion management device may be provided with a microphone for acquiring user voice characteristic data; the emotion management device may also be provided with a human sensor, The human body sensor is used to obtain the user's brain waves, skin conductivity, and heart rate; the emotion management device may also be provided with an acceleration sensor for obtaining whether the user's body is in a weightless state; the emotion management device may also be set with a temperature for obtaining the user's body temperature Sensor; Emotion management equipment can also be a mobile terminal; Emotion management equipment can also be a mobile terminal; Emotion management equipment can also be a device used by an emotion recognition agency to recognize user emotions; Among them, the voice feature data is through the microphone of the emotion management device or other The voice data of the user's speech collected by the microphone of the collection device; wherein the physical feature data can be the data of the physical feature of the user's speech collected by the human body sensor, acceleration sensor and/or temperature sensor of the emotion management device; the physical feature data can also be Data of the user’s speaking body characteristics collected by the human body sensor, acceleration sensor and/or temperature sensor of other collection equipment; the body characteristic data may include: the user’s brain waves, the user’s skin conductivity, the user’s heart rate data, and the user’s body temperature Data, user's blood pressure data, etc. Among them, the preset time interval is set to facilitate timely judgment and detection of user emotions, and can be set to 1s-1min, and specifically can be set to 3s, 4s, 5s, 6s, 10s, etc.
After step S10 of acquiring the user's voice feature data and body feature data, the method may include:
Step a: perform denoising on the voice feature data and the body feature data.
In this embodiment, the emotion management device computes optimization parameters for the voice feature data and the body feature data; the optimization parameters include a directivity parameter and a gain parameter.
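The application names only directivity and gain parameters and does not specify the denoising algorithm itself; the sketch below stands in with a simple gain correction followed by a moving-average smoother, as one plausible reading.

```python
import numpy as np

def denoise(signal, gain=1.0, window=5):
    """Gain correction followed by a moving-average smoother.

    The gain argument corresponds to the gain parameter named above;
    the smoother is an illustrative stand-in for the unspecified
    denoising algorithm.
    """
    x = np.asarray(signal, dtype=float) * gain
    kernel = np.ones(window) / window
    return np.convolve(x, kernel, mode="same")
```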
Step S20: process the voice feature data and the body feature data to determine a user emotion result.
In this embodiment, after obtaining the user's voice feature data and body feature data, the emotion management device processes them to obtain the user emotion result, which is derived by processing and computing over the user's voice feature data and body feature data.
Step S30: perform emotion management on the user according to the user emotion result and preset rules.
In this embodiment, after the emotion management device obtains the user emotion result, it performs emotion management on the user according to the user emotion result and the preset rules.
For example, when the user emotion result is happy, no emotion management is performed; when the result is angry, the device plays a pleasant piece of music or tells the user a fable to divert the user's attention; when the result is sad, the device tells the user a joke.
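A minimal sketch of this rule table, assuming hypothetical `player` and `narrator` interfaces for the playback and storytelling actions:

```python
def manage_emotion(result, player, narrator):
    """Dispatch on the user emotion result per the examples above."""
    if result == "happy":
        return                              # no intervention needed
    if result == "angry":
        player.play("soothing_track.mp3")   # or narrator.tell_fable()
    elif result == "sad":
        narrator.tell_joke()
```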
Step S30 of performing emotion management on the user according to the user emotion result and preset rules may include:
Step b: perform emotion management on the user according to the user emotion result and preset music rules.
In this embodiment, after obtaining the user emotion result, the emotion management device performs emotion management on the user according to that result and the preset music rules. A preset music rule adjusts the user's emotion through music based on the obtained result; for example, when the result is angry, the rule plays a pleasant piece of music or adjusts the volume of the music currently playing, so as to ease the user's emotion.
Step S30 of performing emotion management on the user according to the user emotion result and preset rules may alternatively include:
Step c: perform emotion management on the user according to the user emotion result and preset coaching rules.
In this embodiment, after obtaining the user emotion result, the emotion management device performs emotion management on the user according to that result and the preset coaching rules. Under a preset coaching rule, the device may, based on the obtained result, recommend a coach role corresponding to that result and broadcast a passage of speech or a spoken story to the user, so as to divert the user's attention and ease the user's emotion.
Step S30 of performing emotion management on the user according to the user emotion result and preset rules may also include:
Step d1: when the user emotion result is a preset emotion result, send the user emotion result to an emotion management agency;
Step d2: receive emotion intervention information returned by the emotion management agency according to the user emotion result;
Step d3: perform emotion management on the user according to the emotion intervention information.
In this embodiment, when the user emotion result is extreme, or is one the emotion management device cannot handle itself, the device sends the user emotion result to an emotion management agency so that the agency can judge it and offer emotion management advice. After receiving the result, the agency processes and judges it and produces emotion intervention information to be carried out through the device; the agency sends this information back, and after receiving it the device manages and adjusts the user's emotion according to it. The emotion intervention information may be information for regulating the user's emotion by stimulating the user's body when the result is relatively serious; in serious cases it may also recommend that the device contact a corresponding emotion treatment institution and suggest to the user's family that the user be transferred there for treatment.
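A rough sketch of this escalation round trip, assuming a hypothetical agency endpoint and JSON payloads; the URL, the set of preset emotion results, and the response format are all invented for illustration:

```python
import json
import urllib.request

AGENCY_URL = "https://agency.example.com/intervene"  # placeholder endpoint
EXTREME_RESULTS = {"rage", "despair"}                # assumed preset results

def escalate_if_needed(result, apply_intervention):
    """Forward an extreme result to the agency and act on its reply."""
    if result not in EXTREME_RESULTS:
        return
    req = urllib.request.Request(
        AGENCY_URL,
        data=json.dumps({"emotion": result}).encode(),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        intervention = json.load(resp)  # emotion intervention information
    apply_intervention(intervention)
```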
Through the above solution, this embodiment acquires the user's voice feature data and body feature data, processes them to determine the user emotion result, and performs emotion management on the user according to that result and preset rules. The functions of detecting changes in the user's emotions in time and of managing and adjusting those emotions are thereby realized.
Further, referring to FIG. 3, FIG. 3 is a schematic flowchart of the second embodiment of the emotion management method of the present application. Based on the embodiment shown in FIG. 2, step S20 of processing the voice feature data and the body feature data to determine the user emotion result may include:
Step S21: process the voice feature data to obtain a user voice emotion result.
In this embodiment, after obtaining the user's voice feature data, the emotion management device processes it to obtain the user voice emotion result, which is derived by processing and computing over the voice feature data.
Step e1: recognize keyword information and intonation information contained in the voice feature data.
In this embodiment, the emotion management device extracts keyword information and intonation information from the voice feature data, where the intonation information includes at least one of the volume, speech rate, and pitch of the voice data and their respective trends. For example, a word-segmentation lexicon may be used to remove meaningless words from the semantic content while extracting keywords that can indicate the user's emotion; of the recognized intonation, only the parts that satisfy preset conditions are kept as intonation information, for instance segments whose volume exceeds a maximum preset threshold or falls below a minimum preset threshold, or whose speech rate exceeds a certain preset threshold.
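A minimal sketch of the intonation filtering described above, assuming per-frame volume and speech-rate measurements and illustrative threshold values (the application does not fix concrete thresholds):

```python
MAX_VOLUME_DB = 75.0  # assumed thresholds; the application leaves them open
MIN_VOLUME_DB = 30.0
MAX_RATE_WPS = 4.0    # words per second

def extract_intonation(frames):
    """Keep frames whose volume or speech rate breaches a preset threshold."""
    return [
        f for f in frames
        if f["volume_db"] > MAX_VOLUME_DB
        or f["volume_db"] < MIN_VOLUME_DB
        or f["rate_wps"] > MAX_RATE_WPS
    ]
```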
Step e2: generate a voice emotion model according to the keyword information and the intonation information, and match the voice emotion model against the standard voice emotion models in an emotion model library to generate the user voice emotion result.
In this embodiment, the emotion management device generates the voice emotion model from the keyword information and the intonation information and matches it against the standard voice emotion models in the emotion model library to generate the user voice emotion result.
Step e2 of generating the voice emotion model and matching it against the standard voice emotion models in the emotion model library to generate the user voice emotion result may include:
Step e21: determine voice feature points according to the keyword information and the intonation information.
In this embodiment, the recognized keyword information and intonation information are further analyzed and screened, and the keywords and intonations that clearly indicate the user's emotion are taken as voice feature points, which comprise keyword feature points and intonation feature points. For example, the keyword information may be screened through a pre-built emotion-sensitive word lexicon, with the surviving keywords taken as keyword feature points; the lexicon contains words the user commonly says under different emotions. Since intonation information is usually presented as a waveform, points with a marked change in trend, such as a point where the speech rate suddenly increases, can be taken as intonation feature points.
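The following sketch shows one way to realize both screenings, assuming a pre-built emotion-sensitive word lexicon and a per-sample speech-rate series; the jump threshold is an assumption:

```python
def keyword_feature_points(keywords, sensitive_lexicon):
    """Keep only keywords found in the emotion-sensitive word lexicon."""
    return [k for k in keywords if k in sensitive_lexicon]

def intonation_feature_points(rate_series, jump=1.5):
    """Mark indices where the speech rate jumps sharply between samples."""
    return [
        i for i in range(1, len(rate_series))
        if rate_series[i] - rate_series[i - 1] > jump
    ]
```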
Step e22: generate a voice emotion model according to the voice feature points, and calibrate the voice feature points on the voice emotion model.
In this embodiment, the voice emotion model is generated from the determined voice feature points so that the user's emotion can be analyzed from the model. The voice feature points are then calibrated on the model; the calibrated points may be the more salient subset of the voice feature points determined in step e21, which further screens the user's emotional features and makes them more pronounced.
Step e23: match the voice emotion model against the standard voice emotion models in the emotion model library to adjust the calibrated voice feature points on the voice emotion model, and record the voice feature change data of those feature points.
In this embodiment, the emotion management device matches the voice emotion model against the standard voice emotion models in the emotion model library, fine-tuning the calibrated voice feature points on the model, and records the voice feature change data of those points. A standard voice emotion model may be built from the user's everyday speech data and the expressions corresponding to that data.
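A simple nearest-neighbor matcher illustrates one plausible form of this matching step; representing the model as a feature vector and each standard model as a per-emotion reference vector is an assumption, since the application does not define the models' internal structure:

```python
import numpy as np

def match_emotion(model_vector, standard_models):
    """Return the nearest standard emotion and the residual against it.

    standard_models maps an emotion label to a reference feature vector;
    the residual plays the role of the voice feature change data.
    """
    if not standard_models:
        return None, None
    x = np.asarray(model_vector, dtype=float)
    best = min(
        standard_models,
        key=lambda label: np.linalg.norm(x - np.asarray(standard_models[label])),
    )
    change = x - np.asarray(standard_models[best])
    return best, change
```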
Step e24: match the voice feature change data against the intonation feature data and psychological-behavior feature data in the emotion model library, and generate the user voice emotion result from the matching result.
In this embodiment, the user's emotion or emotion change data is output according to the result of matching the voice feature change data of the voice feature points against the intonation feature data and psychological-behavior feature data in the emotion model library.
Step S22: process the body feature data to obtain a user body emotion result.
In this embodiment, after obtaining the user's body feature data, the emotion management device processes it to obtain the user body emotion result, which is derived by processing and computing over the body feature data.
Step S23: verify the user voice emotion result against the user body emotion result to determine the user emotion result.
In this embodiment, the emotion management device compares the user body emotion result and the user voice emotion result at each common time point. If the two differ at a time point, the user voice emotion result for that time point is deleted; if they agree, it is retained. Once all the user voice emotion results within the preset time interval have been compared one by one, the device keeps the voice emotion results that survived the comparison and determines the user emotion result from them.
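A compact sketch of this per-time-point verification, assuming both result streams are dictionaries keyed by timestamp; the majority vote at the end is an added assumption about how a single result is chosen from the survivors:

```python
from collections import Counter

def verify(voice_results, body_results):
    """Keep a voice emotion result only where the body emotion result at
    the same time point agrees, then settle on one user emotion result."""
    kept = {
        t: emotion
        for t, emotion in voice_results.items()
        if body_results.get(t) == emotion
    }
    if not kept:
        return None
    # Majority vote over the surviving time points (an added assumption).
    return Counter(kept.values()).most_common(1)[0][0]
```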
Through the above solution, this embodiment acquires the user's voice feature data and body feature data; processes the voice feature data to obtain the user voice emotion result; processes the body feature data to obtain the user body emotion result; verifies the voice result against the body result to determine the user emotion result; and performs emotion management on the user according to that result and preset rules. The functions of detecting changes in the user's emotions in time and of managing and adjusting those emotions are thereby realized.
Further, referring to FIG. 4, FIG. 4 is a schematic flowchart of the third embodiment of the emotion management method of the present application. To manage the user's emotions more accurately, based on the embodiment shown in FIG. 3, the method may include, after step S10 of acquiring the user's voice feature data and body feature data:
Step S40: acquire emotion-related data of the interlocutor.
In this embodiment, after acquiring the user's voice feature data and body feature data, the emotion management device may acquire emotion-related data of the person conversing with the user. The emotion-related data may be the interlocutor's voice data, the interlocutor's face data, or the interlocutor's body feature data. The emotion management device may be provided with a camera for acquiring the interlocutor's face data; the interlocutor's voice data is speech data collected by the microphone of the emotion management device or of another collection device, and the interlocutor's face data is collected by the camera of the emotion management device or of another collection device.
Step S50: process the emotion-related data to obtain an interlocutor emotion result.
In this embodiment, after obtaining the interlocutor's emotion-related data, the emotion management device processes it to obtain the interlocutor emotion result, which is derived by processing and computing over the interlocutor's emotion-related data.
Step S40 of acquiring the interlocutor's emotion-related data may include:
Step f1: acquire the interlocutor's voice data and the interlocutor's face data.
In this embodiment, after acquiring the user's voice feature data and body feature data, the emotion management device may acquire the voice data and face data of the person conversing with the user.
Step S50 of processing the emotion-related data to obtain the interlocutor emotion result may include:
Step g1: process the interlocutor's voice data to obtain an interlocutor voice emotion result.
In this embodiment, after obtaining the interlocutor's voice data, the emotion management device processes it to obtain the interlocutor voice emotion result, which is derived by processing and computing over the interlocutor's voice data.
Step g2: process the interlocutor's face data to obtain an interlocutor face emotion result.
In this embodiment, after obtaining the interlocutor's face data, the emotion management device processes it to obtain the interlocutor face emotion result, which is derived by processing and computing over the interlocutor's face data.
Step g2 of processing the interlocutor's face data to obtain the interlocutor face emotion result may include:
Step g21: recognize the interlocutor face image information contained in the interlocutor's face data.
In this embodiment, the emotion management device extracts interlocutor face image information from the interlocutor's face data. The interlocutor face image information is image information capable of showing the interlocutor's expression, for example images showing that the interlocutor is happy, sad, or angry; the device may discard images in which no facial expression of the interlocutor appears, as well as face images blurred by the interlocutor turning or moving quickly.
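One plausible implementation of this frame screening combines a face detector with a Laplacian-variance blur check (a common sharpness heuristic); the detector, the threshold value, and the use of OpenCV are choices made for the example, not techniques named by the application:

```python
import cv2

BLUR_THRESHOLD = 100.0  # assumed sharpness cut-off; tune per camera

def usable_face_frames(frames, detector):
    """Discard frames with no detectable face or with motion blur."""
    kept = []
    for frame in frames:
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        faces = detector.detectMultiScale(gray)  # e.g. a Haar cascade
        sharp = cv2.Laplacian(gray, cv2.CV_64F).var() > BLUR_THRESHOLD
        if len(faces) > 0 and sharp:
            kept.append(frame)
    return kept
```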
Step g22: generate an interlocutor face emotion model according to the interlocutor face image information, and match the interlocutor face emotion model against the standard face emotion models in the emotion model library to generate the interlocutor face emotion result.
In this embodiment, the emotion management device generates the interlocutor face emotion model from the interlocutor face image information and matches it against the standard face emotion models in the emotion model library to generate the interlocutor face emotion result.
Step g22 of generating the interlocutor face emotion model and matching it against the standard face emotion models in the emotion model library to generate the interlocutor face emotion result may include:
Step g221: determine interlocutor face emotion feature points according to the interlocutor face image information.
In this embodiment, the emotion management device further analyzes and screens the recognized interlocutor face image information and determines the face images that clearly show the interlocutor's expression, thereby determining the interlocutor's face emotion feature points.
Step g222: generate an interlocutor face emotion model according to the interlocutor face emotion feature points, and calibrate the interlocutor face emotion feature points on the interlocutor face emotion model.
In this embodiment, the emotion management device generates the interlocutor face emotion model from the determined face emotion feature points so that the interlocutor's emotion can be analyzed from the model. The face emotion feature points are calibrated on the model; the calibrated points may be the more salient subset of the face emotion feature points determined in step g221, which further screens the emotional features and makes the interlocutor's emotional features more pronounced.
Step g223: match the interlocutor face emotion model against the standard face emotion models in the emotion model library to adjust the calibrated interlocutor face emotion feature points on the model, and record the face feature change data of those feature points.
In this embodiment, the emotion management device matches the interlocutor face emotion model against the standard face emotion models in the emotion model library to adjust the calibrated face emotion feature points on the model, and records the face feature change data of those points.
Step g224: match the interlocutor face feature change data against the expression feature data and psychological-behavior feature data in the emotion model library, and generate the interlocutor face emotion result from the matching result.
In this embodiment, the emotion management device outputs the interlocutor face emotion result according to the result of matching the face feature change data of the interlocutor face feature points against the expression feature data and psychological-behavior feature data in the emotion model library.
Step g3: verify the interlocutor voice emotion result against the interlocutor face emotion result to determine the interlocutor emotion result.
In this embodiment, the emotion management device compares the interlocutor face emotion result and the interlocutor voice emotion result at each common time point. If the two differ at a time point, the interlocutor voice emotion result for that time point is deleted; if they agree, it is retained. Once all the interlocutor voice emotion results within the preset time interval have been compared one by one, the device keeps the voice emotion results that survived the comparison and determines the interlocutor emotion result from them.
Step S30 of performing emotion management on the user according to the user emotion result and preset rules may include:
Step S31: perform emotion management on the user according to the user emotion result, the interlocutor emotion result, and the preset rules.
In this embodiment, after obtaining the interlocutor emotion result and the user emotion result, the emotion management device performs emotion management on the user according to the user emotion result, the interlocutor emotion result, and the preset rules.
Through the above solution, this embodiment acquires the user's voice feature data and body feature data; acquires the interlocutor's emotion-related data; processes that data to obtain the interlocutor emotion result; processes the voice and body feature data to determine the user emotion result; and performs emotion management on the user according to the user emotion result, the interlocutor emotion result, and preset rules. The functions of detecting changes in the user's emotions in time and of managing and adjusting those emotions are thereby realized.
Further, referring to FIG. 5, FIG. 5 is a schematic flowchart of the fourth embodiment of the emotion management method of the present application. Based on the embodiment shown in FIG. 2, step S20 of processing the user's voice feature data and body feature data may further include:
Step S24: send the voice feature data and the body feature data to a server.
Step S25: receive the user emotion result returned by the server according to the voice feature data and the body feature data.
In this embodiment, after obtaining the user's voice feature data and body feature data, the emotion management device may send them to a cloud server for processing. After receiving the voice feature data and body feature data, the cloud server processes them, derives the user emotion result from them, and sends that result back; the emotion management device then receives the user emotion result returned by the cloud server.
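A minimal sketch of this client-server exchange over HTTP with a JSON body; the endpoint URL and response schema are invented for illustration, since the application does not specify the transport:

```python
import json
import urllib.request

SERVER_URL = "https://cloud.example.com/emotion"  # placeholder endpoint

def classify_remotely(voice_features, body_features):
    """Ship both feature sets to the server and return its emotion result."""
    payload = json.dumps({"voice": voice_features, "body": body_features})
    req = urllib.request.Request(
        SERVER_URL,
        data=payload.encode(),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)["emotion"]
```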
Through the above solution, this embodiment acquires the user's voice feature data and body feature data; sends them to the server; receives the user emotion result the server returns from that data; and performs emotion management on the user according to that result and preset rules. The functions of detecting changes in the user's emotions in time and of managing and adjusting those emotions are thereby realized.
The present application further provides an emotion management device.
The emotion management device of the present application includes a memory, a processor, and an emotion management program stored in the memory and executable on the processor; when executed by the processor, the emotion management program implements the steps of the emotion management method described above.
For the method implemented when the emotion management program running on the processor is executed, reference may be made to the embodiments of the emotion management method of the present application, which are not repeated here.
The present application further provides a computer-readable storage medium.
The computer-readable storage medium of the present application stores an emotion management program; when executed by a processor, the emotion management program implements the steps of the emotion management method described above.
For the method implemented when the emotion management program running on the processor is executed, reference may be made to the embodiments of the emotion management method of the present application, which are not repeated here.
It should be noted that, as used herein, the terms "include", "comprise", and any other variants thereof are intended to cover non-exclusive inclusion, so that a process, method, article, or system that includes a series of elements includes not only those elements but also other elements not explicitly listed, or elements inherent to such a process, method, article, or system. Absent further limitation, an element defined by the phrase "including a ..." does not exclude the existence of other identical elements in the process, method, article, or system that includes it.
The serial numbers of the above embodiments of the present application are for description only and do not indicate the relative merits of the embodiments.
From the above description of the embodiments, those skilled in the art will clearly understand that the methods of the above embodiments can be implemented by software plus a necessary general-purpose hardware platform, and of course also by hardware, though in many cases the former is the better implementation. Based on this understanding, the technical solution of the present application, in essence or in the part contributing over the prior art, can be embodied in the form of a software product stored in a storage medium as described above (such as a ROM/RAM, a magnetic disk, or an optical disc), including several instructions for causing a terminal device (which may be a mobile phone, a computer, a server, an air conditioner, a network device, or the like) to execute the methods described in the embodiments of the present application.
The above are only preferred embodiments of the present application and do not thereby limit its patent scope; any equivalent structural or process transformation made using the contents of the specification and drawings of the present application, applied directly or indirectly in other related technical fields, is likewise included within the scope of patent protection of the present application.

Claims (20)

  1. An emotion management method, wherein the emotion management method comprises the following steps:
    acquiring voice feature data and body feature data of a user;
    processing the voice feature data and the body feature data to determine a user emotion result;
    performing emotion management on the user according to the user emotion result and preset rules.
  2. The emotion management method according to claim 1, wherein the step of performing emotion management on the user according to the user emotion result and preset rules comprises:
    performing emotion management on the user according to the user emotion result and preset music rules.
  3. The emotion management method according to claim 1, wherein the step of performing emotion management on the user according to the user emotion result and preset rules comprises:
    performing emotion management on the user according to the user emotion result and preset coaching rules.
  4. The emotion management method according to claim 1, wherein the step of performing emotion management on the user according to the user emotion result and preset rules comprises:
    when the user emotion result is a preset emotion result, sending the user emotion result to an emotion management agency;
    receiving emotion intervention information returned by the emotion management agency according to the user emotion result;
    performing emotion management on the user according to the emotion intervention information.
  5. The emotion management method according to claim 1, wherein the step of processing the voice feature data and the body feature data to determine the user emotion result comprises:
    processing the voice feature data to obtain a user voice emotion result;
    processing the body feature data to obtain a user body emotion result;
    verifying the user voice emotion result against the user body emotion result to determine the user emotion result.
  6. The emotion management method according to claim 5, wherein after the step of acquiring the voice feature data and body feature data of the user, the method comprises:
    acquiring emotion-related data of an interlocutor;
    processing the emotion-related data to obtain an interlocutor emotion result;
    and wherein the step of performing emotion management on the user according to the user emotion result and preset rules comprises:
    performing emotion management on the user according to the user emotion result, the interlocutor emotion result, and the preset rules.
  7. The emotion management method according to claim 1, wherein the step of processing the voice feature data and the body feature data to determine the user emotion result comprises:
    sending the voice feature data and the body feature data to a server;
    receiving the user emotion result returned by the server according to the voice feature data and the body feature data.
  8. The emotion management method according to claim 1, wherein after the step of acquiring the voice feature data and body feature data of the user, the method comprises:
    performing denoising on the voice feature data and the body feature data.
  9. An emotion management device, wherein the emotion management device comprises a memory, a processor, and an emotion management program stored in the memory and running on the processor, the emotion management program, when executed by the processor, implementing the following steps:
    acquiring voice feature data and body feature data of a user;
    processing the voice feature data and the body feature data to determine a user emotion result;
    performing emotion management on the user according to the user emotion result and preset rules.
  10. The emotion management device according to claim 9, wherein the emotion management program, when executed by the processor, implements the following step:
    performing emotion management on the user according to the user emotion result and preset music rules.
  11. The emotion management device according to claim 9, wherein the emotion management program, when executed by the processor, implements the following step:
    performing emotion management on the user according to the user emotion result and preset coaching rules.
  12. The emotion management device according to claim 9, wherein the emotion management program, when executed by the processor, implements the following steps:
    when the user emotion result is a preset emotion result, sending the user emotion result to an emotion management agency;
    receiving emotion intervention information returned by the emotion management agency according to the user emotion result;
    performing emotion management on the user according to the emotion intervention information.
  13. The emotion management device according to claim 9, wherein the emotion management program, when executed by the processor, implements the following steps:
    processing the voice feature data to obtain a user voice emotion result;
    processing the body feature data to obtain a user body emotion result;
    verifying the user voice emotion result against the user body emotion result to determine the user emotion result.
  14. The emotion management device according to claim 13, wherein the emotion management program, when executed by the processor, implements the following steps:
    acquiring emotion-related data of an interlocutor;
    processing the emotion-related data to obtain an interlocutor emotion result;
    and wherein the step of performing emotion management on the user according to the user emotion result and preset rules comprises:
    performing emotion management on the user according to the user emotion result, the interlocutor emotion result, and the preset rules.
  15. The emotion management device according to claim 9, wherein the emotion management program, when executed by the processor, implements the following steps:
    sending the voice feature data and the body feature data to a server;
    receiving the user emotion result returned by the server according to the voice feature data and the body feature data.
  16. The emotion management device according to claim 9, wherein the emotion management program, when executed by the processor, implements the following step:
    performing denoising on the voice feature data and the body feature data.
  17. A computer-readable storage medium, wherein an emotion management program is stored on the computer-readable storage medium, the emotion management program, when executed by a processor, implementing the following steps:
    acquiring voice feature data and body feature data of a user;
    processing the voice feature data and the body feature data to determine a user emotion result;
    performing emotion management on the user according to the user emotion result and preset rules.
  18. The computer-readable storage medium according to claim 17, wherein the emotion management program, when executed by the processor, implements the following step:
    performing emotion management on the user according to the user emotion result and preset music rules.
  19. The computer-readable storage medium according to claim 17, wherein the emotion management program, when executed by the processor, implements the following steps:
    processing the voice feature data to obtain a user voice emotion result;
    processing the body feature data to obtain a user body emotion result;
    verifying the user voice emotion result against the user body emotion result to determine the user emotion result.
  20. The computer-readable storage medium according to claim 19, wherein the emotion management program, when executed by the processor, implements the following steps:
    acquiring emotion-related data of an interlocutor;
    processing the emotion-related data to obtain an interlocutor emotion result;
    and wherein the step of performing emotion management on the user according to the user emotion result and preset rules comprises:
    performing emotion management on the user according to the user emotion result, the interlocutor emotion result, and the preset rules.
Priority Applications (2)

Application Number | Priority Date | Filing Date | Title
PCT/CN2019/130021 (WO2021134250A1) | 2019-12-30 | 2019-12-30 | Emotion management method and device, and computer-readable storage medium
CN201980003396.6A (CN111149172B) | 2019-12-30 | 2019-12-30 | Emotion management method, device and computer-readable storage medium

Applications Claiming Priority (1)

Application Number | Priority Date | Filing Date | Title
PCT/CN2019/130021 | 2019-12-30 | 2019-12-30 | Emotion management method and device, and computer-readable storage medium

Publications (1)

Publication Number | Publication Date
WO2021134250A1 | 2021-07-08

Family

ID=70525128

Family Applications (1)

Application Number | Title | Priority Date | Filing Date
PCT/CN2019/130021 | Emotion management method and device, and computer-readable storage medium | 2019-12-30 | 2019-12-30

Country Status (2)

Country | Document
CN | CN111149172B (en)
WO | WO2021134250A1 (en)

Families Citing this family (3)

* Cited by examiner, † Cited by third party

Publication number | Priority date | Publication date | Assignee | Title
CN111724880A (en) * | 2020-06-09 | 2020-09-29 | 百度在线网络技术(北京)有限公司 | User emotion adjusting method, device, equipment and readable storage medium
CN112398952A (en) * | 2020-12-09 | 2021-02-23 | 英华达(上海)科技有限公司 | Electronic resource pushing method, system, equipment and storage medium
CN112464018A (en) * | 2020-12-10 | 2021-03-09 | 山西慧虎健康科技有限公司 | Intelligent emotion recognition and adjustment method and system

Citations (5)

* Cited by examiner, † Cited by third party

Publication number | Priority date | Publication date | Assignee | Title
CN104735234A (en) * | 2013-12-21 | 2015-06-24 | 陕西荣基实业有限公司 | Telephone capable of measuring mood
CN206946938U (en) * | 2017-01-13 | 2018-01-30 | 深圳大森智能科技有限公司 | Intelligent robot active service system
CN108305640A (en) * | 2017-01-13 | 2018-07-20 | 深圳大森智能科技有限公司 | Intelligent robot active service method and device
US20180300468A1 (en) * | 2016-08-15 | 2018-10-18 | Goertek Inc. | User registration method and device for smart robots
CN109803572A (en) * | 2016-07-27 | 2019-05-24 | 生物说股份有限公司 | System and method for measuring and managing physiologic emotional state

Family Cites Families (5)

* Cited by examiner, † Cited by third party

Publication number | Priority date | Publication date | Assignee | Title
US6943794B2 (en) * | 2000-06-13 | 2005-09-13 | Minolta Co., Ltd. | Communication system and communication method using animation and server as well as terminal device used therefor
CN103829958B (en) * | 2014-02-19 | 2016-11-09 | 广东小天才科技有限公司 | Method and device for monitoring a person's mood
CN206470693U (en) * | 2017-01-24 | 2017-09-05 | 广州幻境科技有限公司 | Emotion recognition system based on a wearable device
CN107343095B (en) * | 2017-06-30 | 2020-10-09 | Oppo广东移动通信有限公司 | Call volume control method and device, storage medium and terminal
CN108742516B (en) * | 2018-03-26 | 2021-03-26 | 浙江广厦建设职业技术学院 | Emotion measuring and adjusting system and method for smart home


Also Published As

Publication number | Publication date
CN111149172B (en) | 2021-05-11
CN111149172A (en) | 2020-05-12


Legal Events

Code | Title | Details
121 | Ep: the epo has been informed by wipo that ep was designated in this application | Ref document number: 19958150; Country of ref document: EP; Kind code of ref document: A1
NENP | Non-entry into the national phase | Ref country code: DE
122 | Ep: pct application non-entry in european phase | Ref document number: 19958150; Country of ref document: EP; Kind code of ref document: A1