CN111430006B - Emotion adjustment method, emotion adjustment device, computer equipment and storage medium - Google Patents

Emotion adjustment method, emotion adjustment device, computer equipment and storage medium

Info

Publication number
CN111430006B
CN111430006B
Authority
CN
China
Prior art keywords
emotion
value
user
current
expected
Prior art date
Legal status
Active
Application number
CN202010189363.XA
Other languages
Chinese (zh)
Other versions
CN111430006A (en)
Inventor
王星超
孙正隆
林天麟
徐扬生
Current Assignee
Chinese University of Hong Kong Shenzhen
Shenzhen Institute of Artificial Intelligence and Robotics
Original Assignee
Chinese University of Hong Kong Shenzhen
Shenzhen Institute of Artificial Intelligence and Robotics
Priority date
Filing date
Publication date
Application filed by Chinese University of Hong Kong Shenzhen and Shenzhen Institute of Artificial Intelligence and Robotics
Priority to CN202010189363.XA
Publication of CN111430006A
Application granted
Publication of CN111430006B
Status: Active
Anticipated expiration


Classifications

    • G PHYSICS
    • G16 INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR SPECIFIC APPLICATION FIELDS
    • G16H HEALTHCARE INFORMATICS, i.e. INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR THE HANDLING OR PROCESSING OF MEDICAL OR HEALTHCARE DATA
    • G16H20/00 ICT specially adapted for therapies or health-improving plans, e.g. for handling prescriptions, for steering therapy or for monitoring patient compliance
    • G16H20/70 ICT specially adapted for therapies or health-improving plans, e.g. for handling prescriptions, for steering therapy or for monitoring patient compliance relating to mental therapies, e.g. psychological therapy or autogenous training
    • A HUMAN NECESSITIES
    • A61 MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B5/00 Measuring for diagnostic purposes; Identification of persons
    • A61B5/16 Devices for psychotechnics; Testing reaction times; Devices for evaluating the psychological state
    • A61B5/165 Evaluating the state of mind, e.g. depression, anxiety
    • A HUMAN NECESSITIES
    • A61 MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61M DEVICES FOR INTRODUCING MEDIA INTO, OR ONTO, THE BODY; DEVICES FOR TRANSDUCING BODY MEDIA OR FOR TAKING MEDIA FROM THE BODY; DEVICES FOR PRODUCING OR ENDING SLEEP OR STUPOR
    • A61M21/00 Other devices or methods to cause a change in the state of consciousness; Devices for producing or ending sleep by mechanical, optical, or acoustical means, e.g. for hypnosis
    • A61M21/02 Other devices or methods to cause a change in the state of consciousness; Devices for producing or ending sleep by mechanical, optical, or acoustical means, e.g. for hypnosis for inducing sleep or relaxation, e.g. by direct nerve stimulation, hypnosis, analgesia
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10 Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16 Human faces, e.g. facial parts, sketches or expressions
    • G06V40/174 Facial expression recognition
    • G PHYSICS
    • G11 INFORMATION STORAGE
    • G11B INFORMATION STORAGE BASED ON RELATIVE MOVEMENT BETWEEN RECORD CARRIER AND TRANSDUCER
    • G11B20/00 Signal processing not specific to the method of recording or reproducing; Circuits therefor
    • G11B20/10 Digital recording or reproducing
    • G11B20/10527 Audio or video recording; Data buffering arrangements
    • A HUMAN NECESSITIES
    • A61 MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61M DEVICES FOR INTRODUCING MEDIA INTO, OR ONTO, THE BODY; DEVICES FOR TRANSDUCING BODY MEDIA OR FOR TAKING MEDIA FROM THE BODY; DEVICES FOR PRODUCING OR ENDING SLEEP OR STUPOR
    • A61M21/00 Other devices or methods to cause a change in the state of consciousness; Devices for producing or ending sleep by mechanical, optical, or acoustical means, e.g. for hypnosis
    • A61M2021/0005 Other devices or methods to cause a change in the state of consciousness; Devices for producing or ending sleep by mechanical, optical, or acoustical means, e.g. for hypnosis by the use of a particular sense, or stimulus
    • A61M2021/0027 Other devices or methods to cause a change in the state of consciousness; Devices for producing or ending sleep by mechanical, optical, or acoustical means, e.g. for hypnosis by the hearing sense
    • G PHYSICS
    • G11 INFORMATION STORAGE
    • G11B INFORMATION STORAGE BASED ON RELATIVE MOVEMENT BETWEEN RECORD CARRIER AND TRANSDUCER
    • G11B20/00 Signal processing not specific to the method of recording or reproducing; Circuits therefor
    • G11B20/10 Digital recording or reproducing
    • G11B20/10527 Audio or video recording; Data buffering arrangements
    • G11B2020/10537 Audio or video recording
    • G11B2020/10546 Audio or video recording specifically adapted for audio data

Landscapes

  • Health & Medical Sciences (AREA)
  • Engineering & Computer Science (AREA)
  • General Health & Medical Sciences (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Physics & Mathematics (AREA)
  • Psychiatry (AREA)
  • Psychology (AREA)
  • Public Health (AREA)
  • Animal Behavior & Ethology (AREA)
  • Biomedical Technology (AREA)
  • Medical Informatics (AREA)
  • Anesthesiology (AREA)
  • Social Psychology (AREA)
  • Multimedia (AREA)
  • Veterinary Medicine (AREA)
  • Child & Adolescent Psychology (AREA)
  • Hospice & Palliative Care (AREA)
  • Developmental Disabilities (AREA)
  • Heart & Thoracic Surgery (AREA)
  • Educational Technology (AREA)
  • Primary Health Care (AREA)
  • Biophysics (AREA)
  • Molecular Biology (AREA)
  • Surgery (AREA)
  • Epidemiology (AREA)
  • Signal Processing (AREA)
  • Pathology (AREA)
  • Pain & Pain Management (AREA)
  • Acoustics & Sound (AREA)
  • Hematology (AREA)
  • Oral & Maxillofacial Surgery (AREA)
  • Human Computer Interaction (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Measurement Of The Respiration, Hearing Ability, Form, And Blood Characteristics Of Living Organisms (AREA)

Abstract

The application relates to an emotion adjustment method, an emotion adjustment device, computer equipment and a storage medium. The method comprises the following steps: acquiring a current emotion value of a user and an expected emotion value of the user, the current emotion value being calculated by an emotion perception system from the user's physiological data; determining a music selection parameter according to the expected emotion value and the current emotion value, and selecting corresponding target music from a preset music library according to the music selection parameter; playing the selected target music, and acquiring an output emotion value of the user calculated by the emotion perception system based on the played target music; and, when the difference between the output emotion value and the expected emotion value is larger than a preset threshold, taking the output emotion value as the next current emotion value, returning to the step of acquiring the current emotion value of the user and the expected emotion value of the user, and repeating until the difference between the output emotion value and the expected emotion value is smaller than or equal to the preset threshold. By adopting the method, the efficiency of emotion adjustment can be improved.

Description

Emotion adjustment method, emotion adjustment device, computer equipment and storage medium
Technical Field
The present application relates to the field of computer technologies, and in particular, to an emotion adjustment method, an emotion adjustment device, a computer device, and a storage medium.
Background
With the development of computer technology, deep learning has emerged. Deep learning learns the intrinsic regularities and representation hierarchies of sample data, and the information obtained in the learning process greatly helps the interpretation of data such as text, images and sound. Its ultimate goal is to give machines human-like analytical and learning abilities, so that they can recognize text, image and sound data. Deep learning is already used in daily life, for example to build a robot-assisted emotion adjustment platform based on emotion calculation. At present, robot-assisted emotion adjustment based on emotion calculation generally performs the emotion calculation on a single kind of physiological data, such as facial expression or voice information alone, and then adjusts the user's emotion based on the calculation result.
However, the emotion feedback of current emotion adjustment methods is inaccurate, and the user's emotion cannot be adjusted dynamically as it changes in real time, so emotion adjustment efficiency is low.
Disclosure of Invention
In view of the foregoing, it is desirable to provide an emotion adjustment method, an emotion adjustment device, a computer apparatus, and a storage medium that can improve emotion adjustment efficiency.
A method of emotion adjustment, the method comprising:
acquiring a current emotion value of a user and an expected emotion value of the user; the current emotion value is obtained by calculating physiological data of the user through an emotion perception system;
determining a music selection parameter according to the expected emotion value and the current emotion value, and selecting corresponding target music from a preset music library according to the music selection parameter;
playing the selected target music, and acquiring an output emotion value of the user, which is calculated by the emotion perception system based on the played target music;
and when the difference between the output emotion value and the expected emotion value is larger than a preset threshold, taking the output emotion value as the current emotion value of the next time, returning to the step of acquiring the current emotion value of the user and the expected emotion value of the user, and repeating the execution until the difference between the output emotion value and the expected emotion value is smaller than or equal to the preset threshold.
An emotion adjustment device, the device comprising:
the acquisition module is used for acquiring the current emotion value of the user and the expected emotion value of the user; the current emotion value is obtained by calculating physiological data of the user through an emotion perception system;
the determining module is used for determining a music selection parameter according to the expected emotion value and the current emotion value and selecting corresponding target music from a preset music library according to the music selection parameter;
the playing module is used for playing the selected target music and acquiring an output emotion value of the user, which is calculated by the emotion perception system based on the played target music;
and the return module is used for taking the output emotion value as the current emotion value of the next time when the difference between the output emotion value and the expected emotion value is larger than a preset threshold value, and returning to the step of acquiring the current emotion value of the user and the expected emotion value of the user and repeating the execution until the difference between the output emotion value and the expected emotion value is smaller than or equal to the preset threshold value.
A computer device comprising a memory, a processor and a computer program stored on the memory and executable on the processor, the processor implementing the steps of:
acquiring a current emotion value of a user and an expected emotion value of the user; the current emotion value is obtained by calculating physiological data of the user through an emotion perception system;
determining a music selection parameter according to the expected emotion value and the current emotion value, and selecting corresponding target music from a preset music library according to the music selection parameter;
playing the selected target music, and acquiring an output emotion value of the user, which is calculated by the emotion perception system based on the played target music;
and when the difference between the output emotion value and the expected emotion value is larger than a preset threshold, taking the output emotion value as the current emotion value of the next time, returning to the step of acquiring the current emotion value of the user and the expected emotion value of the user, and repeating the execution until the difference between the output emotion value and the expected emotion value is smaller than or equal to the preset threshold.
A computer readable storage medium having stored thereon a computer program which when executed by a processor performs the steps of:
acquiring a current emotion value of a user and an expected emotion value of the user; the current emotion value is obtained by calculating physiological data of the user through an emotion perception system;
determining a music selection parameter according to the expected emotion value and the current emotion value, and selecting corresponding target music from a preset music library according to the music selection parameter;
playing the selected target music, and acquiring an output emotion value of the user, which is calculated by the emotion perception system based on the played target music;
and when the difference between the output emotion value and the expected emotion value is larger than a preset threshold, taking the output emotion value as the current emotion value of the next time, returning to the step of acquiring the current emotion value of the user and the expected emotion value of the user, and repeating the execution until the difference between the output emotion value and the expected emotion value is smaller than or equal to the preset threshold.
According to the emotion adjustment method, emotion adjustment device, computer equipment and storage medium, the physiological data of the user are acquired through the emotion perception system, and the user's current emotion value is calculated from those data. The user's current emotion value and expected emotion value serve as inputs to the emotion adjustment system, which calculates a music selection parameter from them and selects the corresponding target music accordingly. As the user listens to the target music, his or her emotion changes; the emotion perception system monitors this change in real time and calculates the user's output emotion value after listening. To adjust the user's emotion effectively based on this emotion feedback so that the user attains the expected emotion value, the emotion adjustment system takes the output emotion value as the current emotion value of the next cycle, recalculates the music selection parameter and reselects the target music, until the user attains the expected emotion value. In this way, the accuracy of emotion calculation is improved, and so is the efficiency of emotion adjustment.
Drawings
FIG. 1 is an application scenario diagram of an emotion adjustment method in one embodiment;
FIG. 2 is a schematic flow chart of an emotion adjustment method in one embodiment;
FIG. 3 is a schematic flow chart of calculating a current emotion value in one embodiment;
FIG. 4 is a schematic flow diagram of an emotion adjustment platform architecture in one embodiment;
FIG. 5 is a schematic diagram of an emotion adjustment system framework in one embodiment;
FIG. 6 is a block diagram of an emotion adjustment device in one embodiment;
FIG. 7 is a block diagram showing the structure of an emotion adjustment device according to another embodiment;
FIG. 8 is an internal structural diagram of a computer device in one embodiment.
Detailed Description
The present application will be described in further detail with reference to the drawings and examples, in order to make the objects, technical solutions and advantages of the present application more apparent. It should be understood that the specific embodiments described herein are for purposes of illustration only and are not intended to limit the scope of the application.
The emotion adjustment method provided by the application can be applied to the application environment shown in FIG. 1. The application environment includes an emotion perception system 102 and an emotion adjustment system 104. The emotion perception system 102 includes a camera 1021, a pulse oximeter 1022, an EEG (electroencephalogram) headset 1023 and an emotion perception system server 1024, and may also include a terminal, etc.; the emotion adjustment system 104 may include a robot 1041 and an emotion adjustment system server 1042, and may also include a terminal, etc. The emotion perception system 102 communicates with the emotion adjustment system 104 via a network. The terminal may be, but is not limited to, a personal computer, notebook computer, smartphone, tablet computer or portable wearable device, and each server may be implemented as a stand-alone server or as a server cluster formed by a plurality of servers.
The emotion adjustment system 104 acquires a current emotion value of a user and an expected emotion value of the user; the current emotion value is calculated by the emotion perception system 102 from the user's physiological data. The emotion adjustment system 104 determines a music selection parameter based on the expected emotion value and the current emotion value, and selects corresponding target music from a preset music library based on the music selection parameter. The emotion adjustment system 104 plays the selected target music and obtains the output emotion value of the user calculated by the emotion perception system 102 based on the played target music. When the difference between the output emotion value and the expected emotion value is greater than the preset threshold, the emotion adjustment system 104 takes the output emotion value as the next current emotion value, returns to the step of acquiring the current emotion value and the expected emotion value of the user, and repeats these steps until the difference between the output emotion value and the expected emotion value is smaller than or equal to the preset threshold.
In one embodiment, as shown in FIG. 2, an emotion adjustment method is provided, which is described by taking the emotion adjustment system 104 in FIG. 1 as an example. The method includes the following steps:
s202, acquiring a current emotion value of a user and an expected emotion value of the user; the current emotion value is obtained by calculating physiological data of the user through an emotion perception system.
Emotion is the user's affective expression, and it can be quantified into an emotion value by computing over various kinds of physiological data of the user; in other words, an emotion value is a value obtained by quantifying the emotion of a living organism. The emotion perception system is a system that perceives the user's emotion in real time and calculates emotion values. Specifically, the emotion perception system can collect physiological data of the user and calculate the user's current emotion value from the collected data. The emotion adjustment system can acquire the current emotion value of the user from the emotion perception system, and can also acquire the expected emotion value of the user.
In one embodiment, the physiological data includes at least one of facial images, joint pose data, brain wave data, and heart rate blood oxygen data of the user.
The face image is an image obtained by extracting the facial features of the user from video captured by a camera; the facial features may specifically include skin color, facial contour, facial texture, facial structure and the like. The joint posture data are obtained by tracking and extracting the skeletal joints and hands of the user from video captured by a camera. The brain wave data are the electrical waves the user's brain produces in different emotional states. For example, on an electroencephalogram the brain can produce four kinds of waves: when a person is tense, the brain produces beta waves; when the person is relaxed and the brain is active and inspired, alpha waves appear; when a person feels drowsy and hazy, the brain waves become theta waves; and when a person falls into deep sleep, they change to delta waves. Heart rate and blood oxygen data are the pulse rate produced by the human pulse and the oxygen content of the blood. Since physiological data such as face images, joint posture data, brain wave data and heart rate blood oxygen data are related to a person's emotional state and change as that state changes, these physiological data can be used to quantify the user's emotion into an emotion value.
In particular, the emotion perception system may include cameras, an EEG headset, a fingertip pulse oximeter, and the like. The cameras capture video of the user, from which the face image and joint posture data are obtained. For example, two RGB (red-green-blue color mode) cameras may capture video of the user for facial feature extraction and gaze estimation to obtain the face image, while two further cameras detect skeletal joints and perform hand tracking, respectively, to acquire the joint posture data. The brain wave data of the user can be acquired by a 14-channel EEG headset at a sampling rate of 128 Hz, with electrodes on 14 channels (AF3, AF4, F7, F8, F3, F4, FC5, FC6, T7, T8, P7, P8, O1, O2) referenced to the average of two reference channels at P3 and P4. The heart rate and blood oxygen of the user are then measured by the fingertip pulse oximeter; reflective pulse oximetry is a common alternative method for measuring pulse blood oxygen saturation. The emotion perception system can thus acquire multiple kinds of physiological data about the user, so the emotion value calculated from these data is more accurate.
In one embodiment, relevant deep learning features, such as facial landmark points, human skeleton points and basic emotion features, are extracted from the video captured by the cameras. This can generally be done with open-source data processing tools, such as OpenFace (open-source face recognition), OpenPose (open-source human pose estimation) and openSMILE (open-source emotion recognition); the embodiments of the present application are not limited to particular open-source data processing tools. For the brain wave data, the power spectral densities of a preset number of frequency bands (for example, five) can be extracted on the 14 channels through the short-time Fourier transform; the bands may comprise delta (1-3 Hz), theta (4-7 Hz), alpha (8-13 Hz), beta (14-30 Hz) and gamma (31-50 Hz). Further, band-pass filtering may be performed with EEGLAB (an EEG processing toolbox). The processed brain wave data are then converted into the frequency domain, and deep learning features of the brain waves are extracted, such as the power spectral density maps of the five frequency bands. For the heart rate and blood oxygen data, the time-series data can be converted into a frequency-domain image representation, from which deep learning features such as a heart rate/blood oxygen spectrogram are extracted. Since the frequency range of these signals is very low, the spectrum can be generated in the range of 0-5 Hz.
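As an illustration of the band-power extraction just described, a minimal Python sketch follows. The array shapes, function names and the use of SciPy's STFT are assumptions for illustration, not the patent's actual pipeline; only the sampling rate, channel count and band edges come from the text.

```python
# A minimal sketch of EEG band-power extraction, assuming raw EEG arrives as a
# NumPy array of shape (14, n_samples) sampled at 128 Hz. Band edges follow
# the text; everything else here is illustrative.
import numpy as np
from scipy.signal import stft

EEG_BANDS = {  # the five preset frequency bands (Hz) named in the text
    "delta": (1, 3), "theta": (4, 7), "alpha": (8, 13),
    "beta": (14, 30), "gamma": (31, 50),
}

def eeg_band_power(eeg: np.ndarray, fs: int = 128) -> np.ndarray:
    """Return a (14, 5) matrix: per-channel mean power in each band."""
    # Short-time Fourier transform; Zxx has shape (channels, freqs, frames).
    freqs, _, Zxx = stft(eeg, fs=fs, nperseg=fs)
    psd = (np.abs(Zxx) ** 2).mean(axis=-1)  # average power over time frames
    powers = [psd[:, (freqs >= lo) & (freqs <= hi)].mean(axis=1)
              for lo, hi in EEG_BANDS.values()]
    return np.stack(powers, axis=1)
```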
In one embodiment, as shown in FIG. 3, the emotion perception system can be divided into two parts: feature extraction and a regression model. The emotion perception system processes the collected face image, joint posture data, brain wave data and heart rate blood oxygen data through a preprocessing layer to extract facial landmark points, human skeleton points, brain wave power spectral densities and a heart rate/blood oxygen spectrogram. These features are then fed into a pre-trained regression model, which performs regression on the input data to finally obtain the current emotion value, comprising a current arousal value and a current pleasure value.
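A toy sketch of this fusion-and-regression step is given below. The patent specifies only a "pre-trained regression model", so the flattening scheme, the estimator interface and all names are illustrative assumptions.

```python
# A toy illustration of fusing the per-modality features and regressing them
# to an (arousal, pleasure) pair. The model class is an assumption: any
# fitted estimator with a scikit-learn-style .predict() would do.
import numpy as np

def fuse_features(face_marks, skeleton, eeg_psd, hr_spo2_spec):
    """Concatenate the per-modality feature arrays into one input vector."""
    return np.concatenate([np.ravel(f) for f in
                           (face_marks, skeleton, eeg_psd, hr_spo2_spec)])

def current_emotion(regressor, face_marks, skeleton, eeg_psd, hr_spo2_spec):
    """Map fused features to (arousal, pleasure) with a pre-trained model.

    `regressor` is assumed to be a fitted two-output estimator, e.g. a
    RandomForestRegressor trained on labelled recordings.
    """
    x = fuse_features(face_marks, skeleton, eeg_psd, hr_spo2_spec)
    arousal, pleasure = regressor.predict(x[None, :])[0]
    return arousal, pleasure
```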
S204, determining a music selection parameter according to the expected emotion value and the current emotion value, and selecting corresponding target music from a preset music library according to the music selection parameter.
Specifically, the emotion adjustment system may include a selection strategy that calculates the corresponding music parameters based on the expected emotion value and the current emotion value. The emotion adjustment system is also provided with a preset music library in which each piece of music is stored in association with its music selection parameters; the music selection parameters control the tempo, rhythm, notes, loudness, pitch, musical style and so on of the corresponding music. The emotion adjustment system can then select the corresponding target music from the preset music library according to the music selection parameters.
S206, playing the selected target music, and acquiring the output emotion value of the user calculated by the emotion perception system based on the played target music.
Specifically, the emotion adjustment system selects the corresponding target music from the music library according to the music selection parameters and plays it through the robot system in the emotion adjustment system. As the user undergoing emotion adjustment listens to the music, his or her emotion changes; the emotion perception system acquires the user's emotion change data in real time and calculates the user's output emotion value.
In one embodiment, the step of calculating the user's output emotion value is similar to the step of calculating the user's current emotion value: the emotion perception system collects the user's physiological data after the user has listened to the music, and performs calculation on the collected data to obtain the user's output emotion value.
S208, when the difference between the output emotion value and the expected emotion value is larger than a preset threshold, taking the output emotion value as the next current emotion value, returning to the step of acquiring the current emotion value of the user and the expected emotion value of the user, and repeating until the difference between the output emotion value and the expected emotion value is smaller than or equal to the preset threshold.
Specifically, the difference between the output emotion value and the expected emotion value is the key to judging whether the emotion has been adjusted successfully, and the emotion adjustment system can set a preset threshold for this judgment. When the difference between the output emotion value and the expected emotion value is larger than the preset threshold, the user has not yet attained the expected emotion value; the emotion adjustment system then takes the output emotion value as the next current emotion value, returns to the step of acquiring the user's current emotion value and expected emotion value, and repeats the procedure until the difference between the output emotion value and the expected emotion value is smaller than or equal to the preset threshold, at which point the user has attained the expected emotion value and emotion adjustment has succeeded.
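The closed loop of steps S202 to S208 can be summarized in a short Python sketch. The perception interface and the injected helper functions are illustrative assumptions rather than the patent's literal interfaces (a possible pick_music is sketched later in this description).

```python
# A compact sketch of the S202-S208 loop, assuming a perception object that
# exposes read_emotion() -> (arousal, pleasure) and caller-supplied helpers
# for the selection strategy, library lookup and robot playback.
import numpy as np

def regulate(perception, music_lib, desired,
             select_parameters, pick_music, play, threshold=0.1):
    """Run the adjustment loop until output and expectation are close enough."""
    current = perception.read_emotion()              # S202: current emotion value
    prev_params = None
    while True:
        params = select_parameters(desired, current) # S204: music selection parameter
        track, prev_params = pick_music(music_lib, params, prev_params)
        play(track)                                  # S206: play the target music
        output = perception.read_emotion()           # output emotion value
        # S208: stop when the difference is at most the threshold; otherwise
        # feed the output back as the next iteration's current emotion value.
        if np.linalg.norm(np.subtract(output, desired)) <= threshold:
            return output
        current = output
```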
In one embodiment, as shown in FIG. 4, camera 1 and camera 4 may capture video for facial feature extraction and gaze estimation, camera 2 may be used to detect skeletal joints, and camera 3 may implement hand tracking. The EEG headset can acquire the user's brain wave data, and the fingertip pulse oximeter can acquire the user's heart rate and blood oxygen data. The acquired physiological data are sent to the emotion perception system server, which performs feature extraction and fusion calculation on them to finally obtain the user's current emotion value. The user interacts with the robot system, and the emotion adjustment system server can acquire the user's current emotion value and expected emotion value and select the corresponding adjustment music according to them, thereby performing emotion adjustment on the user.
In one embodiment, as shown in FIG. 5, the emotion adjustment system includes a selection-strategy section, a music-selection section and a music-playing section. After the user starts emotion adjustment, the emotion perception system calculates the user's current emotion value. The selection strategy in the emotion adjustment system acquires the user's current emotion value and expected emotion value, calculates the music selection parameters from them, selects the corresponding music from the music library according to those parameters, and plays the music through the robot in the emotion adjustment system. After the user listens to the music, his or her emotion changes; the emotion perception system acquires the user's physiological data in real time and calculates an updated current emotion value. When the difference between the user's output emotion value and the user's expected emotion value falls within a preset range, emotion adjustment is finished.
In one embodiment, the system switches to different music whenever the user's output emotion value differs greatly from the user's expected emotion value. Once the difference falls within the preset threshold, emotion adjustment can be considered complete. Alternatively, after the difference falls within the preset threshold, the emotion perception system may continue to monitor, and only when the small difference persists for a period of time is emotion adjustment deemed complete.
In the emotion adjustment method, the physiological data of the user are acquired through the emotion perception system, and the user's current emotion value is calculated from those data. The user's current emotion value and expected emotion value serve as inputs to the emotion adjustment system, which calculates a music selection parameter from them and selects the corresponding target music accordingly. As the user listens to the target music, his or her emotion changes; the emotion perception system monitors this change in real time and calculates the user's output emotion value after listening. To adjust the user's emotion effectively based on this emotion feedback so that the user attains the expected emotion value, the emotion adjustment system takes the output emotion value as the current emotion value of the next cycle, recalculates the music selection parameter and reselects the target music, until the user attains the expected emotion value. In this way, the accuracy of emotion calculation is improved, and so is the efficiency of emotion adjustment.
In one embodiment, the emotion adjustment method further comprises: and when the difference between the output emotion value and the expected emotion value is smaller than or equal to a preset threshold value, continuously acquiring physiological data of the user through the emotion perception system, and updating the output emotion value based on the continuously acquired physiological data until the difference between the output emotion value and the expected emotion value output by the emotion perception system in a preset time period is smaller than or equal to the preset threshold value, stopping so as to finish emotion adjustment of the user.
Specifically, emotion adjustment is performed in a closed loop, and the adjustment strategy can be further tuned according to the user's emotion feedback to ensure that the user attains the expected emotion value. The emotion adjustment system compares the user's output emotion value with the expected emotion value; when the difference between them is smaller than or equal to the preset threshold, the emotion perception system continues to collect the user's physiological data and updates the output emotion value based on the continuously collected data, so that a large fluctuation in the user's emotion value does not undermine the adjustment effect. When the difference between the output emotion value and the expected emotion value remains smaller than or equal to the preset threshold throughout the preset time period, the emotion perception system stops collecting the user's physiological data, and emotion adjustment of the user is complete.
In the above embodiment, by judging the difference between the user's output emotion value and expected emotion value, the output emotion value is continuously updated while the difference is small; when the small difference persists for a period of time, the user's regulated emotion is shown to be relatively stable and emotion adjustment is complete, which makes the adjustment more stable and accurate.
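The stabilization behavior just described can be sketched as follows; the sampling period, hold window and perception interface are illustrative assumptions, since the patent fixes neither the preset time period nor how often the physiological data are sampled.

```python
# A sketch of the stabilization check: the difference must stay within the
# threshold for a whole preset time window before adjustment is declared
# complete. All time constants here are illustrative.
import time
import numpy as np

def wait_until_stable(perception, desired, threshold=0.1,
                      hold_seconds=30.0, sample_period=1.0):
    stable_since = None
    while True:
        output = perception.read_emotion()  # keep collecting physiological data
        if np.linalg.norm(np.subtract(output, desired)) <= threshold:
            stable_since = stable_since or time.monotonic()
            if time.monotonic() - stable_since >= hold_seconds:
                return output               # held steady: adjustment complete
        else:
            stable_since = None             # fluctuation: restart the window
        time.sleep(sample_period)
```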
In one embodiment, the step of calculating the current emotion value of the user specifically includes: performing feature extraction processing on at least one of the face image, joint posture data, brain wave data and heart rate blood oxygen data of the user through the emotion perception system to obtain a corresponding feature extraction result; obtaining the current emotion value of the user from the feature extraction result through a pre-trained regression model in the emotion perception system; and acquiring the current emotion value of the user from the emotion perception system.
Specifically, the emotion perception system comprises a multi-modal emotion calculation system and a pre-trained regression model. The multi-modal emotion calculation system can acquire at least one of the face image, joint posture data, brain wave data and heart rate blood oxygen data of the user, and process the acquired data in a preprocessing layer, i.e. perform feature extraction processing, to obtain the corresponding feature extraction result. The regression model in the emotion perception system then performs fusion calculation on the feature extraction result to obtain the user's current emotion value. Furthermore, the emotion adjustment system can acquire the user's current emotion value from the emotion perception system.
In the above embodiment, the current emotion value of the user is obtained by fusion calculation over at least one of the face image, joint posture data, brain wave data and heart rate blood oxygen data, which enriches the physiological data used and avoids relying on a single kind of physiological data, so that the calculation of the user's current emotion value is more accurate.
In one embodiment, the current emotion value comprises a current arousal value and a current pleasure value, the expected emotion value comprises an expected arousal value and an expected pleasure value, and the music selection parameters comprise a music structure parameter and an emotion error parameter. The step of determining the music selection parameters according to the expected emotion value and the current emotion value specifically includes: determining the music structure parameter according to the expected arousal value and the expected pleasure value in the expected emotion value; acquiring the previous emotion value of the last iteration, the previous emotion value comprising a previous arousal value and a previous pleasure value; and determining the emotion error parameter according to the difference between the current arousal value and the previous arousal value and the difference between the current pleasure value and the previous pleasure value.
Here, the arousal value represents the degree of emotional activation of the user, and the pleasure value represents the degree of pleasantness of the user's mood. The music structure parameter characterizes the relationship between the melody of the music itself and the expected arousal and pleasure values; for example, music structure parameters may include tempo, rhythm, notes, loudness, pitch and musical style.
Specifically, the music structure parameter is functionally related to the user's expected emotion value, and the selection strategy in the emotion adjustment system can calculate the music structure parameter from the expected arousal value and expected pleasure value in the user's expected emotion value. The emotion error parameter is related to the previous emotion value and the current emotion value: the selection strategy can acquire the previous arousal value and previous pleasure value calculated in the previous cycle, and calculate the emotion error parameter from the difference between the current arousal value and the previous arousal value and the difference between the current pleasure value and the previous pleasure value.
In one embodiment, the music structure parameters may be expressed as:
Tempo: note = 0.3 - aro_d * 0.15    (1)
Rhythm: p(note = 1) = aro_d / 5    (2)
Loudness: note = unif{10, 20 * aro_d + 12}    (3)
Pitch: p(C3) = 0.8 * val_d    (4)
Mode = 10 - (2 * val_d)    (5)
where aro_d denotes the expected arousal value, val_d denotes the expected pleasure value, Tempo denotes the beat, Rhythm denotes the rhythm, note denotes a note, Loudness denotes the loudness, Pitch denotes the pitch, C3 denotes a bass pitch, p(C3) denotes the probability that the bass occurs, unif{ } denotes uniform random sampling within the given range, and Mode denotes the musical style. In one embodiment, equation (1) represents that the beat is functionally related to arousal: the smallest note entity is set to an eighth note, and its duration is determined by the parameter Tempo. Equation (2) shows that the rhythm is functionally related to arousal: the number of notes played in a bar is set randomly according to a probability determined by the arousal input parameter. This parameter is called rhythmic roughness, because it controls the number of notes played, and more notes lead to a more complex rhythm. The velocity (loudness) of each note is set uniformly at random within a loudness range, which is in turn determined by the relative loudness parameter. Equation (3) shows that the relative loudness is functionally related to arousal. Equation (4) shows that the pitch is functionally related to pleasure; specifically, the corresponding pitch can be selected according to the probability of C3 occurring. Equation (5) shows that the musical style (mode) is likewise related to pleasure.
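To make the mapping concrete, equations (1) through (5) can be transcribed directly into code. In this minimal Python sketch, aro_d and val_d are assumed to be the expected arousal and pleasure values on the scale used by the selection strategy; the dictionary keys are illustrative.

```python
# A direct transcription of equations (1)-(5). Only the formulas come from
# the text; the parameter names and return structure are illustrative.
import random

def music_structure_params(aro_d: float, val_d: float) -> dict:
    return {
        "tempo": 0.3 - aro_d * 0.15,                     # (1) smallest-note duration
        "rhythm_p_note": aro_d / 5,                      # (2) p(note = 1) in a bar
        "loudness": random.uniform(10, 20 * aro_d + 12), # (3) unif{10, 20*aro_d + 12}
        "p_C3": 0.8 * val_d,                             # (4) probability of bass C3
        "mode": 10 - 2 * val_d,                          # (5) musical style
    }
```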
In this embodiment, by calculating the music structure parameter and the emotion error parameter and selecting the next piece of music based on them, the user's emotion can be regulated in real time according to the user's emotion feedback, further improving emotion adjustment efficiency.
In one embodiment, the step of selecting the corresponding target music from the preset music library according to the music selection parameters specifically includes: acquiring the previous music selection parameter and the current music selection parameter, and comparing the two; when the difference between the previous music selection parameter and the current music selection parameter is larger than a preset threshold, selecting the corresponding target music from the preset music library directly according to the current music selection parameter; and when the difference between the previous music selection parameter and the current music selection parameter is smaller than or equal to the preset threshold, changing the current music selection parameter through a noise parameter and selecting the corresponding target music from the preset music library according to the changed current music selection parameter.
Specifically, until the user attains the expected emotion value, the emotion adjustment system and the emotion perception system continue to run in a loop, and the selection strategy in the emotion adjustment system calculates the corresponding music selection parameter in every cycle. The emotion adjustment system can acquire the previous music selection parameter calculated in the previous cycle and the current music selection parameter calculated in the current cycle, and compare them against a preset difference threshold. When the difference between the two is larger than the preset threshold, the corresponding target music is selected from the preset music library directly according to the current music selection parameter. When the difference is smaller than or equal to the preset threshold, then in order to avoid the music parameters calculated in successive cycles being identical or similar, the selection strategy can acquire a noise parameter, change the current music selection parameter with it, and select the corresponding target music from the preset music library according to the changed current music selection parameter, as the sketch below illustrates.
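In this sketch the selection parameters are treated as a numeric vector and each library entry is assumed to store the parameter vector it was indexed under; the Gaussian noise, thresholds and library layout are illustrative assumptions, since the patent only speaks of a "noise parameter".

```python
# A sketch of the selection step with the noise perturbation described above.
# music_lib is assumed to be a list of dicts, each with a "params" NumPy
# vector; nearest-parameter lookup is an illustrative choice.
import numpy as np

def pick_music(music_lib, params, prev_params,
               param_threshold=0.05, noise_scale=0.1):
    p = np.asarray(params, dtype=float)
    if prev_params is not None and (
            np.linalg.norm(p - np.asarray(prev_params)) <= param_threshold):
        # Parameters barely changed since the last cycle: perturb them so the
        # next target music differs from the previous one.
        p = p + np.random.normal(0.0, noise_scale, size=p.shape)
    # Target music: the library entry whose stored parameters are closest to p.
    track = min(music_lib, key=lambda m: np.linalg.norm(m["params"] - p))
    return track, p
```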
In the above embodiment, by comparing the previous music selection parameter with the current one and perturbing the current parameter with noise when the two are similar, it is ensured that the adjustment music played each time is different, which maintains the user's curiosity and avoids the fatigue caused by similar music.
It should be understood that although the steps in FIG. 2 are displayed in sequence, they are not necessarily performed in that order. Unless explicitly stated herein, the execution of these steps is not strictly limited in order, and they may be performed in other orders. Moreover, at least some of the steps in FIG. 2 may include a plurality of sub-steps or stages that are not necessarily performed at the same moment but may be performed at different times, and the execution order of these sub-steps or stages is not necessarily sequential; they may be performed in turn or alternately with at least a portion of other steps or of the sub-steps or stages of other steps.
In one embodiment, as shown in fig. 6, there is provided an emotion adjustment device 600 comprising: an acquisition module 601, a determination module 602, a playing module 603, and a return module 604, wherein:
The acquisition module 601 is configured to acquire a current emotion value of a user and an expected emotion value of the user; the current emotion value is obtained by calculating physiological data of the user through an emotion perception system.
The determining module 602 is configured to determine a music selection parameter according to the expected emotion value and the current emotion value, and select corresponding target music from a preset music library according to the music selection parameter.
The playing module 603 is configured to play the selected target music, and obtain an output emotion value of the user calculated by the emotion perception system based on the played target music.
And a return module 604, configured to take the output emotion value as the current emotion value of the next time when the difference between the output emotion value and the expected emotion value is greater than the preset threshold, and return to the step of obtaining the current emotion value of the user and the expected emotion value of the user and repeat the execution until the difference between the output emotion value and the expected emotion value is less than or equal to the preset threshold.
In one embodiment, the acquisition module 601 is further configured to perform feature extraction processing on at least one of the face image, joint posture data, brain wave data and heart rate blood oxygen data of the user through the emotion perception system to obtain a corresponding feature extraction result; obtain the current emotion value of the user from the feature extraction result through a pre-trained regression model in the emotion perception system; and acquire the current emotion value of the user from the emotion perception system.
In one embodiment, the determining module 602 is further configured to determine the music structure parameter according to the expected arousal value and the expected pleasure value in the expected emotion value; acquire the previous emotion value of the last iteration, the previous emotion value including a previous arousal value and a previous pleasure value; and determine the emotion error parameter according to the difference between the current arousal value and the previous arousal value and the difference between the current pleasure value and the previous pleasure value.
In one embodiment, the determining module 602 is further configured to obtain a previous music selection parameter and a current music selection parameter of the present time, and compare the previous music selection parameter and the current music selection parameter; when the difference between the previous music selection parameter and the current music selection parameter is larger than a preset threshold value, selecting corresponding target music from a preset music library directly according to the current music selection parameter; when the difference between the previous music selection parameter and the current music selection parameter is smaller than or equal to a preset threshold value, the current music selection parameter is changed through the noise parameter, and corresponding target music is selected from a preset music library according to the changed current music selection parameter.
Referring to FIG. 7, in one embodiment, the emotion adjustment device 600 further includes a collection module 605, wherein:
The collection module 605 is configured to continuously collect physiological data of a user through the emotion sensing system when a difference between the output emotion value and the expected emotion value is less than or equal to a preset threshold, and update the output emotion value based on the physiological data that is continuously collected, until the difference between the output emotion value and the expected emotion value output by the emotion sensing system in a preset time period is less than or equal to the preset threshold, so as to complete emotion adjustment of the user.
According to the emotion adjustment device, the physiological data of the user are acquired through the emotion perception system, and the user's current emotion value is calculated from those data. The user's current emotion value and expected emotion value serve as inputs to the emotion adjustment system, which calculates a music selection parameter from them and selects the corresponding target music accordingly. As the user listens to the target music, his or her emotion changes; the emotion perception system monitors this change in real time and calculates the user's output emotion value after listening. To adjust the user's emotion effectively based on this emotion feedback so that the user attains the expected emotion value, the emotion adjustment system takes the output emotion value as the current emotion value of the next cycle, recalculates the music selection parameter and reselects the target music, until the user attains the expected emotion value. In this way, the accuracy of emotion calculation is improved, and so is the efficiency of emotion adjustment.
For specific limitations of the emotion adjustment device, reference may be made to the limitations of the emotion adjustment method described above, which are not repeated here. Each module of the above emotion adjustment device may be implemented in whole or in part by software, hardware, or a combination thereof. The above modules may be embedded in, or independent of, a processor in the computer device in hardware form, or stored in a memory in the computer device in software form, so that the processor can call and execute the operations corresponding to each module.
In one embodiment, a computer device is provided, which may be a server whose internal structure may be as shown in FIG. 8. The computer device includes a processor, a memory, a network interface and a database connected by a system bus, where the processor of the computer device provides computing and control capabilities. The memory of the computer device includes a non-volatile storage medium and an internal memory. The non-volatile storage medium stores an operating system, a computer program and a database. The internal memory provides an environment for running the operating system and the computer program in the non-volatile storage medium. The database of the computer device is used to store emotion adjustment data. The network interface of the computer device is used to communicate with an external terminal through a network connection. The computer program is executed by a processor to implement an emotion adjustment method.
It will be appreciated by those skilled in the art that the structure shown in FIG. 8 is merely a block diagram of part of the structure related to the solution of the present application and does not limit the computer device to which the solution is applied; a particular computer device may include more or fewer components than shown in FIG. 8, combine certain components, or arrange the components differently.
In one embodiment, a computer device is provided that includes a memory and a processor, the memory storing a computer program that, when executed by the processor, causes the processor to perform the steps of the emotion adjustment method described above. The emotion adjustment method may be the emotion adjustment method of each of the above embodiments.
In one embodiment, a computer readable storage medium is provided, storing a computer program which, when executed by a processor, causes the processor to perform the steps of the emotion adjustment method described above. The emotion adjustment method may be the emotion adjustment method of each of the above embodiments.
Those skilled in the art will appreciate that implementing all or part of the above described methods may be accomplished by way of a computer program stored on a non-transitory computer readable storage medium, which when executed, may comprise the steps of the embodiments of the methods described above. Any reference to memory, storage, database, or other medium used in embodiments provided herein may include non-volatile and/or volatile memory. The nonvolatile memory can include Read Only Memory (ROM), programmable ROM (PROM), electrically Programmable ROM (EPROM), electrically Erasable Programmable ROM (EEPROM), or flash memory. Volatile memory can include Random Access Memory (RAM) or external cache memory. By way of illustration and not limitation, RAM is available in a variety of forms such as Static RAM (SRAM), dynamic RAM (DRAM), synchronous DRAM (SDRAM), double Data Rate SDRAM (DDRSDRAM), enhanced SDRAM (ESDRAM), synchronous Link DRAM (SLDRAM), memory bus direct RAM (RDRAM), direct memory bus dynamic RAM (DRDRAM), and memory bus dynamic RAM (RDRAM), among others.
The technical features of the above embodiments may be combined arbitrarily. For brevity, not all possible combinations of these technical features are described; however, as long as a combination of technical features contains no contradiction, it should be considered within the scope of this description.
The above examples express only a few embodiments of the application, and their description is specific and detailed, but they should not be construed as limiting the scope of the application. It should be noted that several variations and modifications can be made by those skilled in the art without departing from the concept of the application, all of which fall within the protection scope of the application. Accordingly, the protection scope of the present application shall be subject to the appended claims.

Claims (10)

1. A method of emotion adjustment, the method comprising:
acquiring a current emotion value of a user and an expected emotion value of the user; the current emotion value is obtained by calculating physiological data of the user through an emotion perception system;
determining a music selection parameter according to the expected emotion value and the current emotion value, and selecting corresponding target music from a preset music library according to the music selection parameter;
playing the selected target music, and acquiring an output emotion value of the user, which is calculated by the emotion perception system based on the played target music;
when the difference between the output emotion value and the expected emotion value is larger than a preset threshold, taking the output emotion value as the current emotion value of the next time, returning to the step of obtaining the current emotion value of the user and the expected emotion value of the user, and repeating the execution until the difference between the output emotion value and the expected emotion value is smaller than or equal to the preset threshold;
the current emotion value comprises a current arousal value and a current pleasure value; the expected emotion value comprises an expected arousal value and an expected pleasure value; the music selection parameters comprise a music structure parameter and an emotion error parameter; the arousal value is used for representing the degree of emotional activation of the user; the pleasure value is used for representing the degree of pleasantness of the user's mood; the music structure parameter is used for representing the relationship between the melody of the music and the expected emotion value; and the determining a music selection parameter according to the expected emotion value and the current emotion value comprises:
determining the music structure parameter according to the expected arousal value and the expected pleasure value among the expected emotion values;
acquiring the previous emotion value from the preceding iteration, the previous emotion value comprising a previous arousal value and a previous pleasure value; and
determining the emotion error parameter according to the difference between the current arousal value and the previous arousal value and the difference between the current pleasure value and the previous pleasure value.
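(Illustrative only; not part of the claims.) The method of claim 1 is a closed feedback loop: sense the user's emotion, select music, play it, re-sense, and iterate until the output emotion value falls within the threshold of the expected one. A minimal sketch in Python, assuming hypothetical callables sense_emotion, pick_music, and play, and a Euclidean distance between emotion values (the claim specifies neither the API nor the metric):

```python
from dataclasses import dataclass

@dataclass
class Emotion:
    arousal: float   # degree of emotional activation
    pleasure: float  # degree of pleasantness of the mood

def distance(a: Emotion, b: Emotion) -> float:
    # One plausible "difference" between emotion values; the claim leaves
    # the metric unspecified, so Euclidean distance is an assumption.
    return ((a.arousal - b.arousal) ** 2 + (a.pleasure - b.pleasure) ** 2) ** 0.5

def adjust_emotion(sense_emotion, pick_music, play,
                   expected: Emotion, threshold: float) -> None:
    current = sense_emotion()          # from the emotion perception system
    previous = current                 # no earlier iteration yet
    while True:
        # Music structure parameter from the expected value; emotion error
        # parameter from the change since the previous iteration.
        structure = (expected.arousal, expected.pleasure)
        error = (current.arousal - previous.arousal,
                 current.pleasure - previous.pleasure)
        track = pick_music(structure, error)   # select from the preset library
        play(track)
        output = sense_emotion()               # output emotion value under playback
        if distance(output, expected) <= threshold:
            break                              # target reached
        previous, current = current, output    # feed back for the next iteration
```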
2. The method according to claim 1, wherein the method further comprises:
and when the difference between the output emotion value and the expected emotion value is less than or equal to the preset threshold, continuing to acquire physiological data of the user through the emotion perception system and updating the output emotion value based on the continuously acquired data, and stopping only when the difference between the output emotion value output by the emotion perception system and the expected emotion value has remained less than or equal to the preset threshold for a preset time period, so as to complete the emotion adjustment of the user.
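(Illustrative only.) Claim 2 adds a dwell condition: adjustment ends only after the output emotion value has stayed within the threshold for an entire preset time period. A sketch of that check, in which the sampling interval and the use of a monotonic clock are assumptions:

```python
import time

def wait_until_stable(sense_emotion, expected, distance,
                      threshold: float, hold_seconds: float,
                      sample_interval: float = 1.0) -> None:
    # Keep sampling; restart the clock whenever the emotion drifts
    # back outside the threshold.
    stable_since = None
    while True:
        output = sense_emotion()
        if distance(output, expected) <= threshold:
            if stable_since is None:
                stable_since = time.monotonic()
            elif time.monotonic() - stable_since >= hold_seconds:
                return  # stayed within threshold for the whole period
        else:
            stable_since = None
        time.sleep(sample_interval)
```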
3. The method of claim 1, wherein the physiological data comprises at least one of facial images, joint posture data, brain wave data, and heart rate and blood oxygen data of the user.
4. The method according to claim 3, wherein the step of calculating the current emotion value of the user comprises:
performing, through the emotion perception system, feature extraction on at least one of the facial images, joint posture data, brain wave data, and heart rate and blood oxygen data of the user to obtain a corresponding feature extraction result;
obtaining the current emotion value of the user through a pre-trained regression model in the emotion perception system and according to the feature extraction result; and
acquiring the current emotion value of the user from the emotion perception system.
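(Illustrative only.) Claim 4's pipeline maps extracted physiological features to an (arousal, pleasure) pair through a pre-trained regression model. The patent does not name the model family, so the closed-form ridge regression below is purely an assumption, and the feature names in the comments are likewise illustrative:

```python
import numpy as np

# Train: X is an (n_samples, n_features) matrix of extracted features
# (e.g. facial landmarks, joint angles, EEG band powers, heart rate, SpO2);
# Y is (n_samples, 2) with columns (arousal, pleasure).

def train_ridge(X: np.ndarray, Y: np.ndarray, lam: float = 1.0) -> np.ndarray:
    # Closed-form ridge regression: W = (Xb^T Xb + lam*I)^-1 Xb^T Y,
    # with a bias column appended to X.
    Xb = np.hstack([X, np.ones((X.shape[0], 1))])
    d = Xb.shape[1]
    return np.linalg.solve(Xb.T @ Xb + lam * np.eye(d), Xb.T @ Y)

def predict_emotion(W: np.ndarray, features: np.ndarray) -> tuple:
    # Map one feature vector to a current emotion value.
    xb = np.append(features, 1.0)
    arousal, pleasure = xb @ W
    return arousal, pleasure
```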
5. The method according to any one of claims 1 to 4, wherein selecting the corresponding target music from a preset music library according to the music selection parameter comprises:
acquiring the previous music selection parameter and the current music selection parameter, and comparing the previous music selection parameter with the current music selection parameter;
when the difference between the previous music selection parameter and the current music selection parameter is greater than a preset threshold, selecting the corresponding target music from the preset music library directly according to the current music selection parameter; and
when the difference between the previous music selection parameter and the current music selection parameter is less than or equal to the preset threshold, perturbing the current music selection parameter with a noise parameter, and selecting the corresponding target music from the preset music library according to the perturbed current music selection parameter.
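(Illustrative only.) The noise step of claim 5 keeps the selector from returning near-identical tracks when the selection parameters barely change between iterations. A sketch assuming Gaussian noise and nearest-neighbour matching in parameter space, neither of which the claim prescribes:

```python
import random

def select_track(library, prev_params, cur_params,
                 threshold: float, noise_scale: float = 0.1):
    # library: list of (params, track) pairs, where params is a numeric
    # vector derived from the music selection parameters (an assumption).
    diff = sum((p - c) ** 2 for p, c in zip(prev_params, cur_params)) ** 0.5
    if diff <= threshold:
        # Parameters barely moved: jitter them so a fresh track can be chosen.
        cur_params = [c + random.gauss(0.0, noise_scale) for c in cur_params]
    # Nearest neighbour in parameter space.
    def dist(entry):
        params, _ = entry
        return sum((p - c) ** 2 for p, c in zip(params, cur_params))
    return min(library, key=dist)[1]
```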
6. An emotion adjustment device, said device comprising:
the acquisition module is used for acquiring a current emotion value of a user and an expected emotion value of the user; the current emotion value is calculated from physiological data of the user by an emotion perception system;
the determining module is used for determining a music selection parameter according to the expected emotion value and the current emotion value and selecting corresponding target music from a preset music library according to the music selection parameter;
the playing module is used for playing the selected target music and acquiring an output emotion value of the user, which is calculated by the emotion perception system based on the played target music;
the return module is used for, when the difference between the output emotion value and the expected emotion value is greater than a preset threshold, taking the output emotion value as the current emotion value for the next iteration and returning to the step of acquiring the current emotion value of the user and the expected emotion value of the user, repeating the execution until the difference between the output emotion value and the expected emotion value is less than or equal to the preset threshold;
the current emotion value comprises a current arousal value and a current pleasure value; the expected emotion value comprises an expected arousal value and an expected pleasure value; the music selection parameters comprise a music structure parameter and an emotion error parameter; the arousal value is used for representing the degree of emotional activation of the user; the pleasure value is used for representing the degree of pleasantness of the user's mood; the music structure parameter is used for representing the relationship between the melody of the music and the expected emotion value; and the determining module is further used for determining the music structure parameter according to the expected arousal value and the expected pleasure value among the expected emotion values; acquiring the previous emotion value from the preceding iteration, the previous emotion value comprising a previous arousal value and a previous pleasure value; and determining the emotion error parameter according to the difference between the current arousal value and the previous arousal value and the difference between the current pleasure value and the previous pleasure value.
7. The apparatus of claim 6, wherein the apparatus further comprises:
and the collection module is used for, when the difference between the output emotion value and the expected emotion value is less than or equal to the preset threshold, continuously acquiring physiological data of the user through the emotion perception system and updating the output emotion value based on the continuously acquired data, stopping only when the difference between the output emotion value output by the emotion perception system and the expected emotion value has remained less than or equal to the preset threshold for a preset time period, so as to complete the emotion adjustment of the user.
8. The apparatus of claim 6, wherein the physiological data comprises at least one of facial images, joint posture data, brain wave data, and heart rate and blood oxygen data of the user.
9. A computer device comprising a memory, a processor, and a computer program stored in the memory and executable on the processor, characterized in that the processor implements the steps of the method according to any one of claims 1 to 5 when executing the computer program.
10. A computer readable storage medium, on which a computer program is stored, characterized in that the computer program, when being executed by a processor, implements the steps of the method of any of claims 1 to 5.
CN202010189363.XA 2020-03-18 2020-03-18 Emotion adjustment method, emotion adjustment device, computer equipment and storage medium Active CN111430006B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010189363.XA CN111430006B (en) 2020-03-18 2020-03-18 Emotion adjustment method, emotion adjustment device, computer equipment and storage medium

Publications (2)

Publication Number Publication Date
CN111430006A CN111430006A (en) 2020-07-17
CN111430006B (en) 2023-09-19

Family

ID=71549615

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010189363.XA Active CN111430006B (en) 2020-03-18 2020-03-18 Emotion adjustment method, emotion adjustment device, computer equipment and storage medium

Country Status (1)

Country Link
CN (1) CN111430006B (en)

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107301185A (en) * 2016-04-15 2017-10-27 富泰华工业(深圳)有限公司 Music commending system and method
CN109582821A (en) * 2018-11-27 2019-04-05 努比亚技术有限公司 A kind of music object recommendation method, terminal and computer readable storage medium

Similar Documents

Publication Publication Date Title
US11696714B2 (en) System and method for brain modelling
US10885800B2 (en) Human performance optimization and training methods and systems
CN110193127B (en) Music sleep assisting method and device, computer equipment and storage medium
Miranda Brain-computer music interface for composition and performance
Tran et al. Stethoscope-sensed speech and breath-sounds for person identification with sparse training data
US20230014315A1 (en) Trained model establishment method, estimation method, performance agent recommendation method, performance agent adjustment method, trained model establishment system, estimation system, trained model establishment program, and estimation program
CN117883082A (en) Abnormal emotion recognition method, system, equipment and medium
van den Broek et al. Unobtrusive sensing of emotions (USE)
CN111430006B (en) Emotion adjustment method, emotion adjustment device, computer equipment and storage medium
EP4157057A1 (en) Brain state optimization with audio stimuli
Abdullah et al. A computationally efficient sEMG based silent speech interface using channel reduction and decision tree based classification
US20230014736A1 (en) Performance agent training method, automatic performance system, and program
CN110811646B (en) Emotional stress comprehensive detection and analysis method and device
WO2023075746A1 (en) Detecting emotional state of a user
Luo et al. How does Music Affect Your Brain? A Pilot Study on EEG and Music Features for Automatic Analysis
Bone et al. Behavioral signal processing and autism: Learning from multimodal behavioral signals
CN117539356B (en) Meditation-based interactive user emotion perception method and system
Redekar et al. Heart Rate Prediction from Human Speech using Regression Models
CN118412096B (en) Psychological releasing interaction method and system based on virtual reality and artificial intelligence technology
CN118780948A (en) Talent communication training method, device and medium based on electroencephalogram emotion recognition
Miranda Brain-computer music interface for generative music
Rostami et al. LSTM‐based real‐time stress detection using PPG signals on raspberry Pi
Sharma et al. Learning Aided Estimation of Human Emotion using Speech and ECG Signal for Design of a Contactless Cardio-Vascular Monitoring System
Wang et al. A computational model of the relationship between musical rhythm and heart rhythm
Chen Immersive artificial intelligence technology based on entertainment game experience in simulation of psychological health testing for university students

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant