CN111430006A - Emotion adjusting method and device, computer equipment and storage medium

Emotion adjusting method and device, computer equipment and storage medium

Info

Publication number
CN111430006A
CN111430006A (application number CN202010189363.XA)
Authority
CN
China
Prior art keywords
emotion
value
user
current
music
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202010189363.XA
Other languages
Chinese (zh)
Other versions
CN111430006B (en)
Inventor
王星超
孙正隆
林天麟
徐扬生
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shenzhen Institute of Artificial Intelligence and Robotics
Chinese University of Hong Kong CUHK
Original Assignee
Shenzhen Institute of Artificial Intelligence and Robotics
Chinese University of Hong Kong CUHK
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shenzhen Institute of Artificial Intelligence and Robotics, Chinese University of Hong Kong CUHK filed Critical Shenzhen Institute of Artificial Intelligence and Robotics
Priority to CN202010189363.XA
Publication of CN111430006A
Application granted
Publication of CN111430006B
Legal status: Active
Anticipated expiration

Classifications

    • G PHYSICS
    • G16 INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR SPECIFIC APPLICATION FIELDS
    • G16H HEALTHCARE INFORMATICS, i.e. INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR THE HANDLING OR PROCESSING OF MEDICAL OR HEALTHCARE DATA
    • G16H 20/00 ICT specially adapted for therapies or health-improving plans, e.g. for handling prescriptions, for steering therapy or for monitoring patient compliance
    • G16H 20/70 ICT specially adapted for therapies or health-improving plans, e.g. for handling prescriptions, for steering therapy or for monitoring patient compliance relating to mental therapies, e.g. psychological therapy or autogenous training
    • A HUMAN NECESSITIES
    • A61 MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B 5/00 Measuring for diagnostic purposes; Identification of persons
    • A61B 5/16 Devices for psychotechnics; Testing reaction times; Devices for evaluating the psychological state
    • A61B 5/165 Evaluating the state of mind, e.g. depression, anxiety
    • A HUMAN NECESSITIES
    • A61 MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61M DEVICES FOR INTRODUCING MEDIA INTO, OR ONTO, THE BODY; DEVICES FOR TRANSDUCING BODY MEDIA OR FOR TAKING MEDIA FROM THE BODY; DEVICES FOR PRODUCING OR ENDING SLEEP OR STUPOR
    • A61M 21/00 Other devices or methods to cause a change in the state of consciousness; Devices for producing or ending sleep by mechanical, optical, or acoustical means, e.g. for hypnosis
    • A61M 21/02 Other devices or methods to cause a change in the state of consciousness; Devices for producing or ending sleep by mechanical, optical, or acoustical means, e.g. for hypnosis for inducing sleep or relaxation, e.g. by direct nerve stimulation, hypnosis, analgesia
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V 40/10 Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V 40/16 Human faces, e.g. facial parts, sketches or expressions
    • G06V 40/174 Facial expression recognition
    • G PHYSICS
    • G11 INFORMATION STORAGE
    • G11B INFORMATION STORAGE BASED ON RELATIVE MOVEMENT BETWEEN RECORD CARRIER AND TRANSDUCER
    • G11B 20/00 Signal processing not specific to the method of recording or reproducing; Circuits therefor
    • G11B 20/10 Digital recording or reproducing
    • G11B 20/10527 Audio or video recording; Data buffering arrangements
    • A HUMAN NECESSITIES
    • A61 MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61M DEVICES FOR INTRODUCING MEDIA INTO, OR ONTO, THE BODY; DEVICES FOR TRANSDUCING BODY MEDIA OR FOR TAKING MEDIA FROM THE BODY; DEVICES FOR PRODUCING OR ENDING SLEEP OR STUPOR
    • A61M 21/00 Other devices or methods to cause a change in the state of consciousness; Devices for producing or ending sleep by mechanical, optical, or acoustical means, e.g. for hypnosis
    • A61M 2021/0005 Other devices or methods to cause a change in the state of consciousness; Devices for producing or ending sleep by mechanical, optical, or acoustical means, e.g. for hypnosis by the use of a particular sense, or stimulus
    • A61M 2021/0027 Other devices or methods to cause a change in the state of consciousness; Devices for producing or ending sleep by mechanical, optical, or acoustical means, e.g. for hypnosis by the use of a particular sense, or stimulus by the hearing sense
    • G PHYSICS
    • G11 INFORMATION STORAGE
    • G11B INFORMATION STORAGE BASED ON RELATIVE MOVEMENT BETWEEN RECORD CARRIER AND TRANSDUCER
    • G11B 20/00 Signal processing not specific to the method of recording or reproducing; Circuits therefor
    • G11B 20/10 Digital recording or reproducing
    • G11B 20/10527 Audio or video recording; Data buffering arrangements
    • G11B 2020/10537 Audio or video recording
    • G11B 2020/10546 Audio or video recording specifically adapted for audio data

Landscapes

  • Health & Medical Sciences (AREA)
  • Engineering & Computer Science (AREA)
  • General Health & Medical Sciences (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Psychology (AREA)
  • Physics & Mathematics (AREA)
  • Public Health (AREA)
  • Psychiatry (AREA)
  • Child & Adolescent Psychology (AREA)
  • Anesthesiology (AREA)
  • Developmental Disabilities (AREA)
  • Social Psychology (AREA)
  • Veterinary Medicine (AREA)
  • Biomedical Technology (AREA)
  • Heart & Thoracic Surgery (AREA)
  • Multimedia (AREA)
  • Medical Informatics (AREA)
  • Animal Behavior & Ethology (AREA)
  • Hospice & Palliative Care (AREA)
  • Theoretical Computer Science (AREA)
  • Signal Processing (AREA)
  • Hematology (AREA)
  • Acoustics & Sound (AREA)
  • Pain & Pain Management (AREA)
  • Educational Technology (AREA)
  • General Physics & Mathematics (AREA)
  • Human Computer Interaction (AREA)
  • Biophysics (AREA)
  • Pathology (AREA)
  • Oral & Maxillofacial Surgery (AREA)
  • Molecular Biology (AREA)
  • Surgery (AREA)
  • Epidemiology (AREA)
  • Primary Health Care (AREA)
  • Measurement Of The Respiration, Hearing Ability, Form, And Blood Characteristics Of Living Organisms (AREA)

Abstract

The application relates to an emotion adjusting method and device, a computer device, and a storage medium. The method comprises the following steps: acquiring a current emotion value of a user and an expected emotion value of the user, the current emotion value being calculated by an emotion perception system from physiological data of the user; determining music selection parameters according to the expected emotion value and the current emotion value, and selecting corresponding target music from a preset music library according to the music selection parameters; playing the selected target music, and acquiring an output emotion value of the user calculated by the emotion perception system based on the played target music; and, when the difference between the output emotion value and the expected emotion value is greater than a preset threshold, taking the output emotion value as the next current emotion value and returning to the step of acquiring the current emotion value of the user and the expected emotion value of the user, repeating until the difference between the output emotion value and the expected emotion value is less than or equal to the preset threshold. By adopting the method, emotion adjusting efficiency can be improved.

Description

Emotion adjusting method and device, computer equipment and storage medium
Technical Field
The present application relates to the field of computer technologies, and in particular, to an emotion adjusting method and apparatus, a computer device, and a storage medium.
Background
With the development of computer technology, deep learning has emerged. Deep learning learns the intrinsic regularities and representation levels of sample data, and the information obtained during learning greatly helps the interpretation of data such as text, images and sound. Its ultimate goal is to give machines a human-like ability to analyze and learn, and to recognize data such as text, images and sound. Deep learning has entered daily life; for example, it has been used to build robot-assisted emotion regulation platforms based on affective computing. Currently, robot-assisted emotion regulation based on affective computing generally computes emotion separately from physiological data such as facial expressions and voice, and then adjusts the user's emotion based on the computed result.
However, in current emotion adjusting methods, emotion feedback is inaccurate and the user's emotion cannot be adjusted dynamically as it changes in real time, so emotion adjusting efficiency is low.
Disclosure of Invention
In view of the above, it is necessary to provide an emotion adjusting method, apparatus, computer device and storage medium capable of improving emotion adjusting efficiency.
A method of emotion modulation, the method comprising:
acquiring a current emotion value of a user and an expected emotion value of the user; the current emotion value is obtained by calculating physiological data of the user through an emotion perception system;
determining music selection parameters according to the expected emotion values and the current emotion values, and selecting corresponding target music from a preset music library according to the music selection parameters;
playing the selected target music, and acquiring an output emotion value of the user calculated by the emotion perception system based on the played target music;
and when the difference between the output emotion value and the expected emotion value is greater than a preset threshold value, taking the output emotion value as the next current emotion value, returning to the step of acquiring the current emotion value of the user and the expected emotion value of the user, and repeatedly executing the steps until the difference between the output emotion value and the expected emotion value is less than or equal to the preset threshold value.
An emotion adjusting apparatus, the apparatus comprising:
the acquisition module is used for acquiring the current emotion value of the user and the expected emotion value of the user; the current emotion value is obtained by calculating physiological data of the user through an emotion perception system;
the determining module is used for determining music selection parameters according to the expected emotion values and the current emotion values and selecting corresponding target music from a preset music library according to the music selection parameters;
the playing module is used for playing the selected target music and acquiring an output emotion value of the user, which is calculated by the emotion perception system based on the played target music;
and the returning module is used for taking the output emotion value as the next current emotion value when the difference between the output emotion value and the expected emotion value is greater than a preset threshold value, returning to the step of acquiring the current emotion value of the user and the expected emotion value of the user and repeatedly executing the steps until the difference between the output emotion value and the expected emotion value is less than or equal to the preset threshold value.
A computer device comprising a memory, a processor and a computer program stored on the memory and executable on the processor, the processor implementing the following steps when executing the computer program:
acquiring a current emotion value of a user and an expected emotion value of the user; the current emotion value is obtained by calculating physiological data of the user through an emotion perception system;
determining music selection parameters according to the expected emotion values and the current emotion values, and selecting corresponding target music from a preset music library according to the music selection parameters;
playing the selected target music, and acquiring an output emotion value of the user calculated by the emotion perception system based on the played target music;
and when the difference between the output emotion value and the expected emotion value is greater than a preset threshold value, taking the output emotion value as the next current emotion value, returning to the step of acquiring the current emotion value of the user and the expected emotion value of the user, and repeatedly executing the steps until the difference between the output emotion value and the expected emotion value is less than or equal to the preset threshold value.
A computer-readable storage medium, on which a computer program is stored which, when executed by a processor, carries out the steps of:
acquiring a current emotion value of a user and an expected emotion value of the user; the current emotion value is obtained by calculating physiological data of the user through an emotion perception system;
determining music selection parameters according to the expected emotion values and the current emotion values, and selecting corresponding target music from a preset music library according to the music selection parameters;
playing the selected target music, and acquiring an output emotion value of the user calculated by the emotion perception system based on the played target music;
and when the difference between the output emotion value and the expected emotion value is greater than a preset threshold value, taking the output emotion value as the next current emotion value, returning to the step of acquiring the current emotion value of the user and the expected emotion value of the user, and repeatedly executing the steps until the difference between the output emotion value and the expected emotion value is less than or equal to the preset threshold value.
According to the emotion adjusting method, device, computer equipment and storage medium, the physiological data of the user are acquired through the emotion sensing system, and the current emotion value of the user is calculated according to the physiological data. And taking the current emotion value of the user and the expected emotion value of the user as the input of an emotion adjusting system, calculating a music selection parameter by the emotion adjusting system according to the current emotion value and the expected emotion value of the user, and selecting corresponding target music according to the music selection parameter. When the user appreciates the target music, the emotion changes, the emotion sensing system monitors emotion changes of the user in real time, and an output emotion value of the user after the user appreciates the target music is calculated. In order to effectively adjust the emotion of the user based on the emotion feedback of the user so that the user can obtain the expected emotion value, the emotion adjusting system takes the output emotion value as the current emotion value of the next cycle, and the music selection parameters are calculated again and the target music is selected until the user obtains the expected emotion value. Therefore, the emotion calculation accuracy is improved, and the emotion adjusting efficiency is improved.
Drawings
FIG. 1 is a diagram of an application scenario of the emotion adjusting method in one embodiment;
FIG. 2 is a flow chart of a method of emotion modulation in an embodiment;
FIG. 3 is a schematic diagram of a process for calculating a current emotion value according to one embodiment;
FIG. 4 is a flow diagram of an emotion adjusting platform architecture in one embodiment;
FIG. 5 is a schematic diagram of a system framework for emotion regulation in one embodiment;
FIG. 6 is a block diagram showing the construction of an emotion adjusting apparatus in one embodiment;
FIG. 7 is a block diagram showing the construction of an emotion adjusting apparatus in another embodiment;
FIG. 8 is a diagram illustrating an internal structure of a computer device according to an embodiment.
Detailed Description
In order to make the objects, technical solutions and advantages of the present application more apparent, the present application is described in further detail below with reference to the accompanying drawings and embodiments. It should be understood that the specific embodiments described herein are merely illustrative of the present application and are not intended to limit the present application.
The emotion adjusting method provided by the application can be applied to the application environment shown in fig. 1. The application environment includes an emotion sensing system 102 and an emotion adjusting system 104. The emotion sensing system 102 includes a camera 1021, a pulse oximeter 1022, an EEG (Electroencephalogram) headset 1023, an emotion sensing system server 1024, and may also include a terminal, and the emotion adjusting system 104 may include a robot 1041 and an emotion adjusting system server 1042, and may also include a terminal. Emotion perception system 102 communicates with emotion modulation system 104 over a network. The terminal can be, but is not limited to, various personal computers, notebook computers, smart phones, tablet computers and portable wearable devices, and the server can be implemented by an independent server or a server cluster formed by a plurality of servers.
The emotion regulating system 104 acquires the current emotion value of the user and the expected emotion value of the user; the current emotion value is calculated from the physiological data of the user by the emotion perception system 102. The emotion adjusting system 104 determines music selection parameters according to the expected emotion values and the current emotion values, and selects corresponding target music from a preset music library according to the music selection parameters. The emotion adjusting system 104 plays the selected target music and acquires the output emotion value of the user calculated by the emotion perception system 102 based on the played target music. When the difference between the output emotion value and the expected emotion value is greater than the preset threshold, the emotion adjusting system 104 takes the output emotion value as the next current emotion value, returns to the step of acquiring the current emotion value of the user and the expected emotion value of the user, and repeats the steps until the difference between the output emotion value and the expected emotion value is less than or equal to the preset threshold.
In one embodiment, as shown in FIG. 2, an emotion adjusting method is provided, which is exemplified by the application of the method to the emotion adjusting system 104 in FIG. 1, and comprises the following steps:
s202, acquiring a current emotion value of a user and an expected emotion value of the user; the current emotion value is obtained by calculating physiological data of the user through an emotion perception system.
Emotion is the user's affective expression; it can be computed from various physiological data of the user to obtain a quantified emotion value, i.e. a value obtained by quantifying the emotion of a living body. The emotion perception system is a system that perceives the user's emotion in real time and calculates the emotion value. Specifically, the emotion perception system can collect physiological data of the user and calculate the user's current emotion value from the collected data. The emotion adjusting system can acquire the current emotion value of the user from the emotion perception system, and can also acquire the expected emotion value of the user.
In one embodiment, the physiological data includes at least one of facial images, joint pose data, brain wave data, and heart rate oximetry data of the user.
The face image is an image obtained by extracting the user's facial features from video captured by a camera; the facial features specifically include skin color, face contour, face texture, face structure and the like. The joint posture data are obtained by tracking and extracting the user's skeletal joints and hands from the camera video. The electroencephalogram (brain wave) data are the electrical waves generated by the user's brain in various emotional states, collected by an EEG acquisition device. On an electroencephalogram, the brain produces four typical wave types: when a person is tense or stressed, the brain generates β waves; when the body is relaxed while the mind stays active and inspired, α waves appear; when a person feels drowsy, the electroencephalogram shifts to θ waves; and when a person falls asleep, δ waves dominate. The heart rate blood oxygen data are the pulse rate produced by the human pulse and the oxygen content in the blood. Because the face image, joint posture data, electroencephalogram data and heart rate blood oxygen data are all physiological states that change with a person's emotion, emotion values can be calculated from these physiological data.
Specifically, the emotion perception system may include camera devices, an EEG headset, a fingertip pulse oximeter, and the like. Video of the user is captured by the cameras, and the face image and joint posture data are obtained from the captured video. For example, two RGB (Red, Green, Blue color mode) cameras may capture user video for facial feature extraction and gaze estimation to obtain the face image, while two further cameras detect skeletal joints and track the hands, respectively, to acquire the joint posture data. The user's electroencephalogram data can be acquired through a 14-channel EEG headset at a sampling rate of 128 Hz, where the electrodes occupy 14 channels (AF3, AF4, F7, F8, F3, F4, FC5, FC6, T7, T8, P7, P8, O1, O2) referenced to the average of two reference channels located at P3 and P4. A fingertip pulse oximeter then measures the user's heart rate and blood oxygen; reflectance pulse oximetry is a common alternative for measuring pulse oxygen saturation. In this way the emotion perception system can acquire multiple kinds of physiological data about the user, and the emotion value calculated from these data is more accurate.
In one embodiment, the deep-learning-related features extracted from the camera video, such as facial landmark points, human skeleton points and basic emotions, can be processed with open-source tools such as OpenFace (open-source face recognition), OpenPose (open-source human pose recognition) and OpenSMILE (open-source emotion recognition); the particular open-source tool is not limited in the examples of the present application. For the electroencephalogram data, the power spectral densities of a preset number of frequency bands (e.g. five: δ (1-3 Hz), θ (4-7 Hz), α (8-13 Hz), β (14-30 Hz) and γ (31-50 Hz)) can be extracted on the 14 channels through a short-time Fourier transform. Further, band-pass filtering can be performed with EEGLAB (a denoising tool), after which the processed electroencephalogram data are converted to the frequency domain and the deep-learning-related EEG features, such as the power over the five frequency bands, are extracted. Likewise, the heart rate and blood oxygen signals can be converted into a spectrogram from which the corresponding heart rate blood oxygen features are extracted.
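As a concrete illustration of the band-power extraction just described, the following is a minimal sketch assuming raw EEG arrives as a NumPy array of shape (channels, samples) at 128 Hz. The text mentions a short-time Fourier transform; this sketch uses Welch's method as a close stand-in, and all function names are illustrative rather than from the patent.

```python
import numpy as np
from scipy.signal import welch

# Frequency bands named in the text: delta, theta, alpha, beta, gamma.
BANDS = {"delta": (1, 3), "theta": (4, 7), "alpha": (8, 13),
         "beta": (14, 30), "gamma": (31, 50)}
FS = 128  # sampling rate of the 14-channel EEG headset (Hz)

def band_powers(eeg: np.ndarray) -> np.ndarray:
    """eeg: (channels, samples). Returns (channels, 5) band-power features."""
    freqs, psd = welch(eeg, fs=FS, nperseg=FS * 2, axis=-1)
    feats = []
    for lo, hi in BANDS.values():
        mask = (freqs >= lo) & (freqs <= hi)
        # Integrate the PSD over each band to obtain that band's power.
        feats.append(np.trapz(psd[:, mask], freqs[mask], axis=-1))
    return np.stack(feats, axis=-1)  # (14, 5) for the headset above
```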
In one embodiment, as shown in fig. 3, the emotion perception system may be divided into two parts, a feature extraction part and a regression model. The emotion perception system preprocesses the acquired face image, joint posture data, brain wave data and heart rate blood oxygen data to extract face landmark points, body skeleton points, brain wave power spectral density and a heart rate blood oxygen spectrogram. These are then used as the input of a pre-trained regression model, which performs regression on the input data to finally obtain the current emotion value, comprising a current arousal value and a current pleasure value.
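The patent does not disclose the regression architecture, so the following sketch stands in with a scikit-learn MLP regressor mapping the fused features to an (arousal, pleasure) pair; the file names and feature layout are hypothetical.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

# Hypothetical fused features: face landmarks + skeleton points + EEG band
# powers + heart-rate/SpO2 spectrogram features, concatenated per sample.
X_train = np.load("fused_features.npy")   # shape (n_samples, n_features)
y_train = np.load("emotion_labels.npy")   # shape (n_samples, 2): arousal, pleasure

# An MLP is one plausible stand-in for the pre-trained regression model.
model = MLPRegressor(hidden_layer_sizes=(128, 64), max_iter=500)
model.fit(X_train, y_train)

arousal, pleasure = model.predict(X_train[:1])[0]   # current emotion value
```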
S204, determining music selection parameters according to the expected emotion value and the current emotion value, and selecting corresponding target music from a preset music library according to the music selection parameters.
Specifically, the emotion adjusting system can include a selection strategy that calculates the corresponding music parameters from the expected emotion value and the current emotion value. The emotion adjusting system is also provided with a preset music library in which each piece of music is stored in association with its music selection parameters; the music selection parameters govern the tempo, rhythm, notes, loudness, pitch, music style and the like of the corresponding music. The emotion adjusting system can then select the corresponding target music from the preset music library according to the music selection parameters.
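How the library lookup works is not specified beyond "stored in association with music selection parameters"; a nearest-parameter match is one plausible reading, sketched below with hypothetical track names and parameter vectors.

```python
import numpy as np

# Hypothetical preset library: each track stored with the parameter vector
# it is associated with (e.g. tempo, rhythm, loudness, pitch, mode).
MUSIC_LIBRARY = {
    "track_a.mid": np.array([0.20, 0.15, 16.0, 0.60, 8.0]),
    "track_b.mid": np.array([0.10, 0.40, 24.0, 0.30, 6.0]),
}

def select_target_music(params: np.ndarray) -> str:
    """Return the track whose stored parameters best match `params`."""
    return min(MUSIC_LIBRARY,
               key=lambda track: np.linalg.norm(MUSIC_LIBRARY[track] - params))
```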
S206, playing the selected target music, and acquiring the output emotion value of the user calculated by the emotion perception system based on the played target music.
Specifically, the emotion adjusting system can select corresponding target music from the music library according to the music selection parameters, and the robot system in the emotion adjusting system plays the target music. After the user who is receiving emotion adjustment appreciates music, the emotion can change, the emotion sensing system can collect emotion change data of the user in real time, and an output emotion value of the user is calculated.
In one embodiment, the step of calculating the output emotion value of the user is similar to the step of calculating the current emotion value of the user, that is, physiological data of the user after the user enjoys music is collected through the emotion perception system, and the collected physiological data is calculated to obtain the output emotion value of the user.
S208, when the difference between the output emotion value and the expected emotion value is greater than the preset threshold, taking the output emotion value as the next current emotion value, returning to the step of acquiring the current emotion value of the user and the expected emotion value of the user, and repeating until the difference between the output emotion value and the expected emotion value is less than or equal to the preset threshold.
Specifically, the difference between the output emotion value and the expected emotion value is the key for judging whether emotion adjustment is successful or not, the emotion adjustment system can set a preset threshold for judging whether emotion adjustment is successful or not, and when the difference between the output emotion value and the expected emotion value is larger than the preset threshold, it is indicated that the user does not obtain the expected emotion value. And the emotion adjusting system takes the output emotion value as the next current emotion value, returns to the step of acquiring the current emotion value of the user and the expected emotion value of the user and repeats the steps until the difference between the output emotion value and the expected emotion value is less than or equal to a preset threshold value, which indicates that the user has obtained the expected emotion value and the emotion adjusting is successful.
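Putting steps S202-S208 together, the closed loop might look like the following sketch, which simplifies the emotion value to a scalar (the patent uses an arousal/pleasure pair) and treats the perception system and music selection as injected callables with illustrative names.

```python
def regulate_emotion(desired, perceive, choose_params, select_music, play,
                     threshold=0.1):
    """Closed-loop sketch of steps S202-S208; all callables are injected."""
    current = perceive()                            # S202: current emotion
    while True:
        params = choose_params(desired, current)    # S204: selection params
        track = select_music(params)                # S204: target music
        play(track)                                 # S206: play to the user
        output = perceive()                         # S206: output emotion
        if abs(output - desired) <= threshold:      # S208: within threshold?
            return output                           # adjustment succeeded
        current = output                            # S208: feed back and loop
```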
In one embodiment, as shown in fig. 4, camera 1 and camera 4 may capture video for facial feature extraction and gaze estimation, camera 2 may be used to detect skeletal joints, and camera 3 may enable hand tracking. The EEG headset may acquire the user's brain wave data and the fingertip pulse oximeter may acquire the user's heart rate blood oxygen data. And sending the acquired physiological data of the users to an emotion perception system server. The emotion perception system server can perform feature extraction and fusion calculation on the physiological data of the users, and finally obtains the current emotion value of the users. The user interacts with the robot system, the emotion adjusting system server can acquire the current emotion value and the expected emotion value of the user, and selects corresponding adjusting music according to the current emotion value and the expected emotion value, so that emotion adjustment is performed on the user.
In one embodiment, as shown in FIG. 5, the emotion adjusting system includes a selection strategy section, a music selection section and a music playing section. After the user starts emotion adjustment, the emotion perception system calculates the user's current emotion value. The selection strategy in the emotion adjusting system obtains the current emotion value and the expected emotion value of the user and calculates the music selection parameters from them, so that corresponding music is selected from the music library according to the music selection parameters and played through the robot in the emotion adjusting system. After the user listens to the music, the emotion changes, and the emotion perception system collects the user's physiological data in real time and calculates an updated current emotion value. When the difference between the user's output emotion value and expected emotion value is within a preset range, the emotion adjustment is complete.
In one embodiment, the music is replaced whenever the output emotion value of the user differs significantly from the desired emotion value of the user, until the difference falls to within the preset threshold, at which point the emotion adjustment can be considered complete. Alternatively, the emotion perception system may continue to monitor after the difference becomes less than the preset threshold, and only if the difference stays small for a period of time is the emotion adjustment considered complete.
In the emotion adjusting method, the physiological data of the user is acquired through the emotion sensing system, and the current emotion value of the user is calculated according to the physiological data. And taking the current emotion value of the user and the expected emotion value of the user as the input of an emotion adjusting system, calculating a music selection parameter by the emotion adjusting system according to the current emotion value and the expected emotion value of the user, and selecting corresponding target music according to the music selection parameter. When the user appreciates the target music, the emotion changes, the emotion sensing system monitors emotion changes of the user in real time, and an output emotion value of the user after the user appreciates the target music is calculated. In order to effectively adjust the emotion of the user based on the emotion feedback of the user so that the user can obtain the expected emotion value, the emotion adjusting system takes the output emotion value as the current emotion value of the next cycle, and the music selection parameters are calculated again and the target music is selected until the user obtains the expected emotion value. Therefore, the emotion calculation accuracy is improved, and the emotion adjusting efficiency is improved.
In one embodiment, the emotion adjusting method further comprises: when the difference between the output emotion value and the expected emotion value is smaller than or equal to a preset threshold value, continuously acquiring the physiological data of the user through the emotion sensing system, and updating the output emotion value based on the continuously acquired physiological data until the difference between the output emotion value output by the emotion sensing system in a preset time period and the expected emotion value is smaller than or equal to the preset threshold value, so that the emotion adjustment of the user is completed.
Specifically, emotion adjustment is performed in a closed loop, and an adjustment strategy can be further adjusted according to emotion feedback of the user, so that the user can obtain an expected emotion value. The emotion adjusting system can compare the output emotion value of the user with the expected emotion value of the user, when the difference between the output emotion value and the expected emotion value is smaller than or equal to a preset threshold value, the physiological data of the user can be continuously acquired through the emotion sensing system, and the output emotion value is updated based on the continuously acquired physiological data, so that the emotion value of the user is prevented from being greatly fluctuated to influence the emotion adjusting effect. When the difference between the output emotion value output by the emotion sensing system in the preset time period and the expected emotion value is smaller than or equal to the preset threshold value, the emotion sensing system stops acquiring the physiological data of the user, and the emotion adjustment of the user is completed at the moment.
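A sketch of this stability check follows; the hold duration standing in for the "preset time period" and the polling interval are assumed values.

```python
import time

def adjustment_complete(perceive, desired, threshold=0.1,
                        hold_s=60.0, poll_s=1.0):
    """Return once |output - desired| has stayed within the threshold for
    hold_s seconds; hold_s stands in for the 'preset time period'."""
    stable_since = None
    while True:
        if abs(perceive() - desired) <= threshold:
            if stable_since is None:
                stable_since = time.monotonic()
            elif time.monotonic() - stable_since >= hold_s:
                return True              # stable long enough: adjustment done
        else:
            stable_since = None          # fluctuated: restart the hold window
        time.sleep(poll_s)
```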
In the above embodiment, by determining the difference between the output emotion value of the user and the expected emotion value, when the difference is small, the output emotion value is continuously updated, and when the small difference can be maintained for a period of time, it is indicated that the emotion adjusted by the user is relatively stable, and the emotion adjustment is completed, so that the emotion adjustment is more stable and accurate.
In one embodiment, the step of calculating the current emotion value of the user specifically includes: performing feature extraction processing on at least one of a face image, joint posture data, electroencephalogram data and heart rate blood oxygen data of a user through an emotion sensing system to obtain a corresponding feature extraction result; calculating to obtain the current emotion value of the user according to the feature extraction result through a pre-trained regression model in the emotion perception system; and acquiring the current emotion value of the user from the emotion perception system.
Specifically, the emotion perception system comprises a multi-modal emotion computing system and a pre-trained regression model. The multi-modal emotion computing system can acquire at least one of the user's face image, joint posture data, electroencephalogram data and heart rate blood oxygen data, and perform data processing, i.e. feature extraction, on the acquired data in a preprocessing layer to obtain the corresponding feature extraction result. Fusion calculation is then carried out on the feature extraction result by the regression model in the emotion perception system to obtain the user's current emotion value. The emotion adjusting system can then acquire the current emotion value of the user from the emotion perception system.
In the embodiment, at least one of the face image, the joint posture data, the electroencephalogram data and the heart rate blood oxygen data of the user is subjected to fusion calculation to obtain the current emotion value of the user, so that the physiological data of the user is enriched, the physiological data of the user is prevented from being single, and the calculation of the current emotion value of the user is more accurate.
In one embodiment, the current emotion value includes a current arousal value and a current pleasure value, the desired emotion value includes a desired arousal value and a desired pleasure value, and the music selection parameters include a music structure parameter and an emotion error parameter. The step of determining the music selection parameters according to the expected emotion value and the current emotion value specifically comprises: determining the music structure parameter according to the desired arousal value and the desired pleasure value in the desired emotion value; acquiring the previous emotion value from the last iteration, the previous emotion value comprising a previous arousal value and a previous pleasure value; and determining the emotion error parameter according to the difference between the current arousal value and the previous arousal value and the difference between the current pleasure value and the previous pleasure value.
Here, the arousal value measures the degree of activation of the user's mood, and the pleasure value measures the degree of pleasantness of the user's mood. The music structure parameters describe how the melodic properties of the music itself relate to the desired arousal value and the desired pleasure value; for example, the music structure parameters may be tempo, rhythm, notes, loudness, pitch, music style and the like.
Specifically, the music structure parameter is functionally related to the user's expected emotion value, and the selection strategy in the emotion adjusting system can calculate the music structure parameter from the desired arousal value and the desired pleasure value in the expected emotion value. The emotion error parameter relates the previous emotion value to the current emotion value: the selection strategy can fetch the previous arousal value and previous pleasure value from the emotion value calculated in the last cycle, and compute the emotion error parameter from the difference between the current arousal value and the previous arousal value and the difference between the current pleasure value and the previous pleasure value.
In one embodiment, the music structure parameter may be expressed as:
Tempo: note = 0.3 - aro_d * 0.15    (1)
Rhythm: p(note = 1) = aro_d / 5    (2)
Loudness: note = Unif{10, 20 * aro_d + 12}    (3)
Pitch: p(C3) = 0.8 * val_d    (4)
Mode = 10 - (2 * val_d)    (5)
where aro_d denotes the desired arousal, val_d denotes the desired pleasure, Tempo denotes the tempo, note denotes a note, Loudness denotes the loudness, Pitch denotes the pitch, C3 denotes a bass tone progression, p(C3) denotes the probability of a bass occurring, Unif{a, b} denotes a uniform random draw between a and b, and Mode denotes the music style. In one embodiment, equation (1) indicates that the tempo is functionally related to arousal. Equation (2) sets the number of notes the machine plays in a bar; this parameter controls rhythm coarseness, since more notes lead to a more complex rhythm. The velocity (loudness) of each note is set uniformly at random within a loudness range, which is in turn determined by the relative loudness parameter of the subsequent tones. Equation (3) indicates that the relative loudness is functionally related to arousal. Equation (4) indicates that the pitch is functionally related to the degree of pleasure; specifically, the corresponding pitch can be selected according to the probability of C3 occurring. Equation (5) indicates that the music style is likewise related to the degree of pleasure.
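Translated directly into code, equations (1)-(5) might read as below; the assumption that aro_d and val_d lie in [0, 1] and the reading of Unif{a, b} as a uniform draw between a and b are inferences, not stated in the patent.

```python
import random

def music_structure_params(aro_d: float, val_d: float) -> dict:
    """Equations (1)-(5); aro_d = desired arousal, val_d = desired pleasure,
    both assumed to lie in [0, 1]."""
    return {
        "tempo_note": 0.3 - aro_d * 0.15,                 # (1) tempo
        "p_note": aro_d / 5,                              # (2) rhythm: p(note=1)
        "loudness": random.uniform(10, 20 * aro_d + 12),  # (3) Unif{10, ...}
        "p_c3": 0.8 * val_d,                              # (4) bass probability
        "mode": 10 - 2 * val_d,                           # (5) music style
    }
```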
In the embodiment, the emotion of the user can be adjusted in real time according to the emotion feedback of the user by calculating the music structure parameter and the emotion error parameter and selecting the next played music based on the music structure parameter and the emotion error parameter, so that the emotion adjusting efficiency is further improved.
In one embodiment, the step of selecting corresponding target music from a preset music library according to the music selection parameter specifically includes: acquiring previous music selection parameters and current music selection parameters of the current time, and comparing the previous music selection parameters with the current music selection parameters; when the difference between the previous music selection parameter and the current music selection parameter is larger than a preset threshold value, directly selecting corresponding target music from a preset music library according to the current music selection parameter; and when the difference between the previous music selection parameter and the current music selection parameter is less than or equal to a preset threshold value, the current music selection parameter is changed through the noise parameter, and the corresponding target music is selected from a preset music library according to the changed current music selection parameter.
Specifically, when the user has not yet obtained the desired emotion value, the emotion adjusting system and the emotion perception system continue to run in a loop until the user obtains it. In each cycle, the selection strategy in the emotion adjusting system calculates the corresponding music selection parameters. The emotion adjusting system can obtain the previous music selection parameters calculated in the last cycle and the current music selection parameters calculated in this cycle, and compare them against a preset difference threshold. When the difference between the previous and current music selection parameters is greater than the preset threshold, the corresponding target music is selected directly from the preset music library according to the current music selection parameters. When the difference is less than or equal to the preset threshold, then in order to avoid the music parameters calculated in each cycle being identical or similar, the selection strategy can obtain a noise parameter, alter the current music selection parameters with it, and select the corresponding target music from the preset music library according to the altered parameters.
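A minimal sketch of this noise perturbation, assuming the selection parameters are NumPy vectors and that the threshold and noise scale shown are illustrative values:

```python
import numpy as np

def next_selection_params(prev, curr, threshold=0.05, noise_scale=0.1,
                          rng=None):
    """Perturb the current parameters with noise when they are too close to
    last cycle's, so the same or similar music is not replayed."""
    rng = rng or np.random.default_rng()
    if np.linalg.norm(curr - prev) > threshold:
        return curr                      # sufficiently different: keep as-is
    return curr + rng.normal(0.0, noise_scale, size=curr.shape)
```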
In the above embodiment, by comparing the previous music selection parameter with the current music selection parameter, when the previous music selection parameter is similar to the current music selection parameter, the noise parameter is added to change the current music parameter, so that it is ensured that the adjustment music played each time is different, the curiosity of the user is maintained, and fatigue caused by similar music is avoided.
It should be understood that although the various steps of fig. 2 are shown in order, the steps are not necessarily performed in order. The steps are not performed in the exact order shown and described, and may be performed in other orders, unless explicitly stated otherwise. Moreover, at least a portion of the steps in fig. 2 may include multiple sub-steps or multiple stages, which are not necessarily performed at the same time, but may be performed at different times, and the order of performing the sub-steps or stages is not necessarily sequential, but may be performed alternately or alternately with other steps or at least a portion of the sub-steps or stages of other steps.
In one embodiment, as shown in FIG. 6, there is provided an emotion adjusting apparatus 600, including: an obtaining module 601, a determining module 602, a playing module 603 and a returning module 604, wherein:
an obtaining module 601, configured to obtain a current emotion value of a user and an expected emotion value of the user; the current emotion value is obtained by calculating physiological data of the user through an emotion perception system.
The determining module 602 is configured to determine a music selection parameter according to the expected emotion value and the current emotion value, and select corresponding target music from a preset music library according to the music selection parameter.
And a playing module 603, configured to play the selected target music, and obtain an output emotion value of the user calculated by the emotion sensing system based on the played target music.
And a returning module 604, configured to, when a difference between the output emotion value and the expected emotion value is greater than a preset threshold, take the output emotion value as a next current emotion value, return to the step of acquiring the current emotion value of the user and the expected emotion value of the user, and repeat the step until the difference between the output emotion value and the expected emotion value is less than or equal to the preset threshold.
In one embodiment, the obtaining module 601 is further configured to perform feature extraction processing on at least one of a face image, joint posture data, electroencephalogram data, and heart rate blood oxygen data of a user through an emotion sensing system to obtain a corresponding feature extraction result; calculating to obtain the current emotion value of the user according to the feature extraction result through a pre-trained regression model in the emotion perception system; and acquiring the current emotion value of the user from the emotion perception system.
In one embodiment, the determining module 602 is further configured to determine the music structure parameter according to the desired arousal value and the desired pleasure value in the desired emotion values; acquire the previous emotion value from the last iteration, the previous emotion value comprising a previous arousal value and a previous pleasure value; and determine the emotion error parameter according to the difference between the current arousal value and the previous arousal value and the difference between the current pleasure value and the previous pleasure value.
In one embodiment, the determining module 602 is further configured to obtain a previous music selection parameter of the last time and a current music selection parameter of the current time, and compare the previous music selection parameter with the current music selection parameter; when the difference between the previous music selection parameter and the current music selection parameter is larger than a preset threshold value, directly selecting corresponding target music from a preset music library according to the current music selection parameter; and when the difference between the previous music selection parameter and the current music selection parameter is less than or equal to a preset threshold value, the current music selection parameter is changed through the noise parameter, and the corresponding target music is selected from a preset music library according to the changed current music selection parameter.
Referring to FIG. 7, in one embodiment, emotion adjusting apparatus 600 further comprises an acquisition module 605, wherein:
and the acquiring module 605 is configured to, when the difference between the output emotion value and the expected emotion value is smaller than or equal to a preset threshold, continue to acquire the physiological data of the user through the emotion sensing system, and update the output emotion value based on the continuously acquired physiological data until the difference between the output emotion value output by the emotion sensing system within a preset time period and the expected emotion value is smaller than or equal to the preset threshold, so as to complete emotion adjustment of the user.
The emotion adjusting device acquires the physiological data of the user through the emotion sensing system and calculates the current emotion value of the user according to the physiological data. And taking the current emotion value of the user and the expected emotion value of the user as the input of an emotion adjusting system, calculating a music selection parameter by the emotion adjusting system according to the current emotion value and the expected emotion value of the user, and selecting corresponding target music according to the music selection parameter. When the user appreciates the target music, the emotion changes, the emotion sensing system monitors emotion changes of the user in real time, and an output emotion value of the user after the user appreciates the target music is calculated. In order to effectively adjust the emotion of the user based on the emotion feedback of the user so that the user can obtain the expected emotion value, the emotion adjusting system takes the output emotion value as the current emotion value of the next cycle, and the music selection parameters are calculated again and the target music is selected until the user obtains the expected emotion value. Therefore, the emotion calculation accuracy is improved, and the emotion adjusting efficiency is improved.
For specific limitations of the emotion adjusting device, reference may be made to the above limitations of the emotion adjusting method, which are not described herein again. The modules in the emotion adjusting device can be wholly or partially realized by software, hardware and a combination thereof. The modules can be embedded in a hardware form or independent from a processor in the computer device, and can also be stored in a memory in the computer device in a software form, so that the processor can call and execute operations corresponding to the modules.
In one embodiment, a computer device is provided, which may be a server, and its internal structure diagram may be as shown in fig. 8. The computer device includes a processor, a memory, a network interface, and a database connected by a system bus. Wherein the processor of the computer device is configured to provide computing and control capabilities. The memory of the computer device comprises a nonvolatile storage medium and an internal memory. The non-volatile storage medium stores an operating system, a computer program, and a database. The internal memory provides an environment for the operation of an operating system and computer programs in the non-volatile storage medium. The database of the computer device is used for storing affective conditioning data. The network interface of the computer device is used for communicating with an external terminal through a network connection. The computer program is executed by a processor to implement a method of emotion adjustment.
Those skilled in the art will appreciate that the architecture shown in fig. 8 is merely a block diagram of some of the structures associated with the disclosed aspects and is not intended to limit the computing devices to which the disclosed aspects apply, as particular computing devices may include more or less components than those shown, or may combine certain components, or have a different arrangement of components.
In one embodiment, a computer device is provided, comprising a memory and a processor, the memory storing a computer program which, when executed by the processor, causes the processor to perform the steps of the method of emotion modulation described above. The steps of the emotion adjusting method herein may be the steps in the emotion adjusting method of each of the above embodiments.
In one embodiment, a computer readable storage medium is provided, storing a computer program which, when executed by a processor, causes the processor to perform the steps of the emotion adjusting method described above. The steps of the emotion adjusting method herein may be the steps in the emotion adjusting method of each of the above embodiments.
It will be understood by those of ordinary skill in the art that all or part of the processes of the methods of the embodiments described above may be implemented by a computer program stored on a non-volatile computer-readable storage medium; when executed, the program may include the processes of the embodiments of the methods described above. Any reference to memory, storage, a database, or other medium used in the embodiments provided herein may include non-volatile and/or volatile memory.
The technical features of the above embodiments can be arbitrarily combined, and for the sake of brevity, all possible combinations of the technical features in the above embodiments are not described, but should be considered as the scope of the present specification as long as there is no contradiction between the combinations of the technical features.
The above-mentioned embodiments only express several embodiments of the present application, and the description thereof is more specific and detailed, but not construed as limiting the scope of the invention. It should be noted that, for a person skilled in the art, several variations and modifications can be made without departing from the concept of the present application, which falls within the scope of protection of the present application. Therefore, the protection scope of the present patent shall be subject to the appended claims.

Claims (10)

1. A method of emotion modulation, the method comprising:
acquiring a current emotion value of a user and an expected emotion value of the user; the current emotion value is obtained by calculating physiological data of the user through an emotion perception system;
determining music selection parameters according to the expected emotion values and the current emotion values, and selecting corresponding target music from a preset music library according to the music selection parameters;
playing the selected target music, and acquiring an output emotion value of the user calculated by the emotion perception system based on the played target music;
and when the difference between the output emotion value and the expected emotion value is greater than a preset threshold value, taking the output emotion value as the next current emotion value, returning to the step of acquiring the current emotion value of the user and the expected emotion value of the user, and repeatedly executing the steps until the difference between the output emotion value and the expected emotion value is less than or equal to the preset threshold value.
2. The method of claim 1, further comprising:
when the difference between the output emotion value and the expected emotion value is smaller than or equal to a preset threshold value, continuously acquiring the physiological data of the user through the emotion sensing system, and updating the output emotion value based on the continuously acquired physiological data until the difference between the output emotion value output by the emotion sensing system in a preset time period and the expected emotion value is smaller than or equal to the preset threshold value, so as to finish emotion adjustment of the user.
3. The method of claim 1, wherein the physiological data comprises at least one of a facial image, joint posture data, electroencephalogram data, and heart rate and blood oxygen data of the user.
4. The method of claim 3, wherein calculating the current emotion value of the user comprises:
performing feature extraction on at least one of a facial image, joint posture data, electroencephalogram data, and heart rate and blood oxygen data of the user through the emotion perception system to obtain a corresponding feature extraction result;
calculating the current emotion value of the user from the feature extraction result through a pre-trained regression model in the emotion perception system;
and acquiring the current emotion value of the user from the emotion perception system.
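For illustration only: a rough shape for the pipeline of claim 4. The summary-statistic and spectral features, and the ridge regressor fitted on random numbers so the sketch executes, are stand-ins; the claim requires only that extracted features feed a pre-trained regression model.

    import numpy as np
    from sklearn.linear_model import Ridge

    def heart_rate_features(hr_samples):
        hr = np.asarray(hr_samples, dtype=float)
        return [hr.mean(), hr.std()]                 # simple summary statistics

    def eeg_features(eeg_samples):
        spectrum = np.abs(np.fft.rfft(eeg_samples))  # crude spectral energies
        return [float(spectrum[:10].sum()), float(spectrum[10:].sum())]

    # A deployed system would load a model trained on labelled physiological
    # data; a toy regressor is fitted here purely so the example runs.
    rng = np.random.default_rng(0)
    model = Ridge().fit(rng.random((100, 4)), rng.random(100))

    def current_emotion_value(hr_samples, eeg_samples):
        features = heart_rate_features(hr_samples) + eeg_features(eeg_samples)
        return float(model.predict([features])[0])   # current emotion value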
5. The method of claim 1, wherein the current emotion value comprises a current arousal value and a current pleasure value, the expected emotion value comprises an expected arousal value and an expected pleasure value, and the music selection parameters comprise a music structure parameter and an emotion error parameter, and wherein determining the music selection parameters according to the expected emotion value and the current emotion value comprises:
determining the music structure parameter according to the expected arousal value and the expected pleasure value in the expected emotion value;
acquiring the previous emotion value from the last iteration, the previous emotion value comprising a previous arousal value and a previous pleasure value;
and determining the emotion error parameter according to the difference between the current arousal value and the previous arousal value and the difference between the current pleasure value and the previous pleasure value.
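For illustration only: claim 5 works in a two-dimensional arousal/pleasure (valence) space. The per-dimension differences below follow the claim; the tempo-and-mode mapping for the music structure parameter is an invented example of what such a parameter could encode.

    from dataclasses import dataclass

    @dataclass
    class Emotion:
        arousal: float    # arousal (activation) dimension
        pleasure: float   # pleasure (valence) dimension

    def music_structure_parameter(expected: Emotion) -> dict:
        # Invented mapping: higher expected arousal -> faster tempo,
        # non-negative expected pleasure -> major mode.
        return {"tempo_bpm": 60 + 80 * (expected.arousal + 1) / 2,
                "mode": "major" if expected.pleasure >= 0 else "minor"}

    def emotion_error_parameter(current: Emotion, previous: Emotion):
        # Per claim 5: differences between current and previous values,
        # taken separately on each dimension.
        return (current.arousal - previous.arousal,
                current.pleasure - previous.pleasure)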
6. The method according to any one of claims 1 to 5, wherein selecting the corresponding target music from the preset music library according to the music selection parameters comprises:
acquiring the previous music selection parameters and the current music selection parameters, and comparing the previous music selection parameters with the current music selection parameters;
when the difference between the previous music selection parameters and the current music selection parameters is greater than a preset threshold, selecting the corresponding target music from the preset music library directly according to the current music selection parameters;
and when the difference between the previous music selection parameters and the current music selection parameters is less than or equal to the preset threshold, perturbing the current music selection parameters with a noise parameter and selecting the corresponding target music from the preset music library according to the perturbed current music selection parameters.
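For illustration only: claim 6 perturbs near-identical selection parameters so that the library does not keep returning the same track. The Euclidean distance and Gaussian noise below are assumptions; the claim requires only a threshold comparison and a noise parameter.

    import math
    import random

    def next_selection_parameters(previous, current, threshold=0.05, noise_std=0.1):
        """Use the current parameters directly when they differ enough from
        the previous ones; otherwise perturb them with noise before selecting
        target music from the preset library."""
        if math.dist(previous, current) > threshold:
            return list(current)                 # changed enough: use as-is
        return [p + random.gauss(0.0, noise_std) for p in current]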
7. An emotion adjusting apparatus, comprising:
an acquisition module, configured to acquire a current emotion value of a user and an expected emotion value of the user, the current emotion value being calculated by an emotion perception system from physiological data of the user;
a determining module, configured to determine music selection parameters according to the expected emotion value and the current emotion value, and to select corresponding target music from a preset music library according to the music selection parameters;
a playing module, configured to play the selected target music and to acquire an output emotion value of the user calculated by the emotion perception system based on the played target music;
and a returning module, configured to, when the difference between the output emotion value and the expected emotion value is greater than a preset threshold, take the output emotion value as the next current emotion value, return to the step of acquiring the current emotion value and the expected emotion value of the user, and repeat the steps until the difference between the output emotion value and the expected emotion value is less than or equal to the preset threshold.
8. The apparatus of claim 7, further comprising:
a collection module, configured to, when the difference between the output emotion value and the expected emotion value is less than or equal to the preset threshold, continue to acquire the physiological data of the user through the emotion perception system and update the output emotion value based on the continuously acquired physiological data, until the difference between each output emotion value produced by the emotion perception system within a preset time period and the expected emotion value is less than or equal to the preset threshold, whereupon the emotion adjustment of the user is complete.
9. A computer device comprising a memory, a processor, and a computer program stored on the memory and executable on the processor, wherein the processor implements the steps of the method of any one of claims 1 to 6 when executing the computer program.
10. A computer-readable storage medium, on which a computer program is stored, wherein the computer program, when executed by a processor, implements the steps of the method of any one of claims 1 to 6.
CN202010189363.XA 2020-03-18 2020-03-18 Emotion adjustment method, emotion adjustment device, computer equipment and storage medium Active CN111430006B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010189363.XA CN111430006B (en) 2020-03-18 2020-03-18 Emotion adjustment method, emotion adjustment device, computer equipment and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202010189363.XA CN111430006B (en) 2020-03-18 2020-03-18 Emotion adjustment method, emotion adjustment device, computer equipment and storage medium

Publications (2)

Publication Number Publication Date
CN111430006A 2020-07-17
CN111430006B CN111430006B (en) 2023-09-19

Family

ID=71549615

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010189363.XA Active CN111430006B (en) 2020-03-18 2020-03-18 Emotion adjustment method, emotion adjustment device, computer equipment and storage medium

Country Status (1)

Country Link
CN (1) CN111430006B (en)

Patent Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107301185A * 2016-04-15 2017-10-27 Fu Tai Hua Industry (Shenzhen) Co., Ltd. Music recommending system and method
CN109582821A * 2018-11-27 2019-04-05 Nubia Technology Co., Ltd. Music object recommendation method, terminal and computer-readable storage medium

Also Published As

Publication number Publication date
CN111430006B (en) 2023-09-19

Similar Documents

Publication Title
US11696714B2 (en) System and method for brain modelling
Panicker et al. A survey of machine learning techniques in physiology based mental stress detection systems
US10606353B2 (en) Systems and methods for collecting, analyzing, and sharing bio-signal and non-bio-signal data
US10423893B2 (en) Adaptive interface for screen-based interactions
US20200057661A1 (en) Adaptive interface for screen-based interactions
Liaqat et al. WearBreathing: Real world respiratory rate monitoring using smartwatches
CN110193127B (en) Music sleep assisting method and device, computer equipment and storage medium
US9983670B2 (en) Systems and methods for collecting, analyzing, and sharing bio-signal and non-bio-signal data
KR20210045467A (en) Electronic device for recognition of mental behavioral properties based on deep neural networks
EP3498169B1 (en) System and method for classification and quantitative estimation of cognitive stress
US20230259208A1 (en) Interactive electronic content delivery in coordination with rapid decoding of brain activity
Godin et al. Selection of the most relevant physiological features for classifying emotion
CN109859570A (en) A kind of brain training method and system
CN111797817A (en) Emotion recognition method and device, computer equipment and computer-readable storage medium
CN108721048A (en) Rehabilitation training control method, computer readable storage medium and terminal
US20210343389A1 (en) Systems and methods of pain treatment
CN107184205B (en) Automatic knowledge memory traction method based on memory scale and induction capture of brain
Tiwari et al. Classification of physiological signals for emotion recognition using IoT
CN117883082A (en) Abnormal emotion recognition method, system, equipment and medium
WO2021061699A1 (en) Adaptive interface for screen-based interactions
CN111430006B (en) Emotion adjustment method, emotion adjustment device, computer equipment and storage medium
CN117539356B (en) Meditation-based interactive user emotion perception method and system
Zhu Emotion Detection System Using Electrodermal Activity Signals from Wearable Devices with Deep Learning Techniques
Soares et al. Probabilistic Modeling of Interpersonal Coordination Processes
TWI845959B (en) Emotion estimation device, portable terminal, portable emotion estimation device, and program for controlling emotion estimation device

Legal Events

Code Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant