WO2021139310A1 - Emotion recognition method, apparatus, computer device, and computer-readable storage medium - Google Patents

Emotion recognition method, apparatus, computer device, and computer-readable storage medium

Info

Publication number
WO2021139310A1
Authority
WO
WIPO (PCT)
Prior art keywords: emotional state, state index, fingertip, emotional, preset
Application number: PCT/CN2020/122418
Other languages: French (fr), Chinese (zh)
Inventors: Huang Xiaojun (黄晓君), Zhuang Bojin (庄伯金)
Original Assignee: Ping An Technology (Shenzhen) Co., Ltd. (平安科技(深圳)有限公司)
Application filed by Ping An Technology (Shenzhen) Co., Ltd. (平安科技(深圳)有限公司)
Publication of WO2021139310A1

Classifications

    • G06F 2218/12: Classification; Matching (aspects of pattern recognition specially adapted for signal processing)
    • G06N 3/045: Combinations of networks (neural network architectures)
    • G06N 3/08: Learning methods (neural networks)
    • G16H 20/70: ICT specially adapted for therapies or health-improving plans relating to mental therapies, e.g. psychological therapy or autogenous training
    • G06F 2218/08: Feature extraction (aspects of pattern recognition specially adapted for signal processing)

Definitions

  • This application relates to the field of artificial intelligence technology, and in particular to an emotion recognition method, device, computer equipment, and computer-readable storage medium based on a neural network model.
  • Emotion management is generally divided into emotional state assessment and emotional regulation.
  • The traditional method of emotional state evaluation is the subjective evaluation method: the subject describes their self-perception and is scored against a professionally designed psychological scale.
  • The inventor realized that although this method is simple and common in operation, it has poor objectivity; it is better suited to large-sample statistical analysis, and it is difficult for individuals to perform long-term, repeated evaluations. Therefore, in traditional technology, the efficiency of recognizing or evaluating emotions is low.
  • This application provides an emotion recognition method, device, computer equipment, and computer-readable storage medium based on a neural network model, which can solve the problem of low emotion recognition efficiency in traditional technologies.
  • The present application provides a method for emotion recognition based on a neural network model. The method includes: acquiring a plurality of fingertip images corresponding to fingertips; converting the format corresponding to the fingertip images to obtain the PPG signal corresponding to the fingertip images; extracting the heart rate feature and the heart rate variability feature from the PPG signal through a preset neural network model; obtaining the emotional state index corresponding to the PPG signal according to the heart rate feature and the heart rate variability feature; and obtaining the emotional state corresponding to the emotional state index according to the emotional state index and a preset matching relationship between the emotional state index and the emotional state.
  • The present application also provides an emotion recognition device based on a neural network model, including: a first acquisition unit for acquiring a plurality of fingertip images corresponding to fingertips; a conversion unit for converting the format corresponding to the fingertip images to obtain the PPG signal corresponding to the fingertip images; an extraction unit configured to extract the heart rate feature and the heart rate variability feature from the PPG signal through a preset neural network model; a second obtaining unit for obtaining the emotional state index corresponding to the PPG signal according to the heart rate feature and the heart rate variability feature; and a third obtaining unit for obtaining the emotional state corresponding to the emotional state index according to the emotional state index and the preset matching relationship between the emotional state index and the emotional state.
  • An embodiment of the present application also provides a computer device, which includes a memory and a processor, with a computer program stored in the memory. When the processor executes the computer program, the following steps are performed: acquiring a plurality of fingertip images corresponding to fingertips; converting the format corresponding to the fingertip images to obtain the PPG signal corresponding to the fingertip images; extracting the heart rate feature and the heart rate variability feature from the PPG signal through a preset neural network model; obtaining the emotional state index corresponding to the PPG signal according to the heart rate feature and the heart rate variability feature; and obtaining the emotional state corresponding to the emotional state index according to the emotional state index and the preset matching relationship between the emotional state index and the emotional state.
  • The present application also provides a computer-readable storage medium that stores a computer program. When the computer program is executed by a processor, the processor implements the following steps: acquiring a plurality of fingertip images corresponding to fingertips; converting the format corresponding to the fingertip images to obtain the PPG signal corresponding to the fingertip images; extracting the heart rate feature and the heart rate variability feature from the PPG signal through a preset neural network model; obtaining the emotional state index corresponding to the PPG signal according to the heart rate feature and the heart rate variability feature; and obtaining the emotional state corresponding to the emotional state index according to the emotional state index and the preset matching relationship between the emotional state index and the emotional state.
  • This application provides a method, device, computer equipment, and computer-readable storage medium for emotion recognition based on a neural network model.
  • This application acquires several fingertip images corresponding to the fingertips, converts the format corresponding to the fingertip images to obtain the PPG signal corresponding to the fingertip images, extracts the heart rate feature and the heart rate variability feature from the PPG signal through a preset neural network model, obtains the emotional state index corresponding to the PPG signal according to those features, and then obtains the corresponding emotional state according to the emotional state index and the preset matching relationship between the emotional state index and the emotional state. Because the embodiment of the application obtains the emotional state index from fingertip images, it improves the objectivity of emotion recognition and can also improve its efficiency.
  • FIG. 1 is a schematic flowchart of a method for emotion recognition based on a neural network model provided by an embodiment of the application;
  • FIG. 2 is a schematic diagram of an emotional state calculation method of an emotion recognition method based on a neural network model provided by an embodiment of the application;
  • FIG. 3 is a schematic diagram of a sub-process in the method for emotion recognition based on a neural network model provided by an embodiment of the application;
  • FIG. 4 is a schematic block diagram of an emotion recognition device based on a neural network model provided by an embodiment of the application.
  • FIG. 5 is a schematic block diagram of a computer device provided by an embodiment of the application.
  • FIG. 1 is a schematic flowchart of a method for emotion recognition based on a neural network model provided by an embodiment of the application. As shown in Figure 1, the method includes the following steps S101-S105:
  • The fingertip image can be captured by an image acquisition device; for example, it can be captured by the camera of the mobile terminal itself, or multiple images within a preset time can be captured by continuous shooting, so that the continuous fingertip conditions within the preset time are well described and the emotional state is reflected more accurately.
  • Each frame of fingertip image extracted from the video has continuity and can accurately reflect the fingertip conditions within the preset time.
  • Since emotion recognition is performed through fingertip images, the accuracy of emotional state recognition can also be improved.
  • S102 Convert the format corresponding to the fingertip image to obtain a PPG signal corresponding to the fingertip image.
  • The PPG signal refers to photoplethysmography (PPG), also known as the photoplethysmographic pulse wave: a method of detecting changes in blood volume in living tissue by photoelectric means.
  • During each heartbeat, the contraction and expansion of blood vessels affect the transmission of light.
  • In transmission PPG, light passes through the fingertip; when the light passes through the skin tissue and is then reflected to the photosensitive sensor, it is attenuated to a certain extent.
  • The absorption of light by muscles, bones, veins, and other connective tissues is basically unchanged (provided there is no significant movement of the measurement site), but the arteries are different: because of the pulsation of blood in the arteries, their absorption of light naturally changes, resulting in changes in the fingertip image, which are reflected in changes of the fingertip image's pixels. Therefore, analyzing the pixel changes in the fingertip image can reflect the characteristics of blood flow, and in turn the characteristics of the heart rate.
  • When the format corresponding to the fingertip image is converted, the fingertip image can be converted into RGB three-channel mode, and the RGB channels can then be analyzed; for example, the green channel or the red channel of the three RGB channels can be analyzed to obtain the PPG signal corresponding to the fingertip image.
  • The image is converted to RGB three channels because RGB images, also called full-color images, have three channels: R (red), G (green), and B (blue).
  • The Halcon program and Halcon's own example images can be used to understand RGB images and grayscale values.
  • The color image can also be separated into three channels through OpenCV or Matlab. Note that OpenCV and Matlab store the channels of a color image in different orders: Matlab's order is R, G, B, while OpenCV's is B, G, R.
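  • The channel handling above can be sketched in a few lines. This is a minimal pure-Python illustration, assuming each video frame is a nested list of (B, G, R) pixel tuples in OpenCV order, that produces one raw PPG sample per frame by averaging the red channel:

```python
def frame_mean_red(frame):
    """Mean red-channel intensity of one frame.

    Assumes OpenCV-style (B, G, R) pixel order, so red is index 2;
    a Matlab-style (R, G, B) frame would use index 0 instead.
    """
    total = 0
    count = 0
    for row in frame:
        for pixel in row:
            total += pixel[2]  # index 2 = red channel in BGR order
            count += 1
    return total / count

def frames_to_raw_ppg(frames):
    """One raw PPG sample per video frame: the mean red intensity."""
    return [frame_mean_red(f) for f in frames]
```

In practice the per-frame means would then be detrended and filtered before any peak counting.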
  • HRV stands for heart rate variability.
  • Pre-training is performed in advance on a preset neural network model, for example a recurrent neural network (RNN). Since the neural network model has the ability to learn automatically, the preset neural network model automatically learns from sample PPG signals and the heart rate and heart rate variability features corresponding to those samples, so that after a PPG signal is subsequently obtained, the heart rate feature and heart rate variability feature can be extracted from it.
  • After the preset neural network model is trained, its training effect can be tested on verification samples. If the preset neural network model passes verification, it is applied to the production environment.
  • The PPG signal is similar to the waveform of arterial blood pressure (ABP), and heart rate is closely related to blood pressure; a person's emotional state directly affects heart rate and blood pressure, while blood pressure and heart rate in turn reflect the emotional state. This makes the PPG signal a non-invasive heart rate monitoring tool: by analyzing the PPG signal, a person's heart rate and blood pressure can be measured and their emotional state identified. Since the periodicity of the PPG signal corresponds to the heart rhythm, the heart rate can be estimated from the PPG signal.
  • The characteristics of heart rate variability can be obtained through frequency-domain and time-domain analysis of the PPG signal.
  • HRV time-domain analysis quantitatively describes the change characteristics of the cardiac cycle using various statistical methods, for example by measuring and calculating the difference or ratio between the average RR interval within a certain period and the longest and shortest RR intervals, as well as the standard deviation of all RR intervals. The original PPG signal is filtered so that the number of peaks of the PPG signal within a certain period can be counted, from which the heart rate is calculated. For example, if the sampling time is 5 seconds and the number of peaks in those 5 seconds is N, then the heart rate is N*12.
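  • The peak-count rule above (N peaks in a 5-second window gives a heart rate of N*12) can be sketched as follows; the naive local-maximum detector is an illustrative stand-in and assumes the PPG trace has already been filtered:

```python
def count_peaks(signal):
    """Count local maxima in a cleaned PPG trace (naive detector)."""
    peaks = 0
    for i in range(1, len(signal) - 1):
        if signal[i] > signal[i - 1] and signal[i] >= signal[i + 1]:
            peaks += 1
    return peaks

def heart_rate_bpm(signal, window_seconds=5):
    """N peaks in `window_seconds` of signal, scaled to beats per minute."""
    n = count_peaks(signal)
    return n * (60 // window_seconds)  # 5-second window: heart rate = N * 12
```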
  • Frequency-domain analysis uses a special calculation method to decompose the heart rate fluctuation curve, which changes over time, into a sum of sinusoidal curves of different frequencies and amplitudes, yielding the HRV spectrum.
  • Its advantage is that the periodicity of heart activity can be quantified.
  • The human HRV power spectrum is often divided into 4 regions: the high-frequency band, low-frequency band, very-low-frequency band, and ultra-low-frequency band.
  • The frequency-domain characteristics can be obtained by performing an FFT on the PPG signal.
  • After the pre-trained preset neural network model is put into the production environment, upon receiving the PPG signal it can analyze the signal in the time domain and the frequency domain to extract the heart rate feature and the heart rate variability feature of the PPG signal. The embodiment of this application uses AI healthy fingertip detection technology, which addresses the shortcomings of the subjective evaluation method and improves the objectivity and accuracy of emotional evaluation.
  • The emotional state index corresponding to the PPG signal is obtained according to the heart rate feature and the heart rate variability feature.
  • FIG. 2 is a schematic diagram of an emotional state calculation method of a neural network model-based emotion recognition method provided by an embodiment of the application.
  • The emotional state indexes corresponding to the PPG signal, such as the vitality index, stress index, and fatigue index, are obtained so as to quantitatively assess the activity and balance of the autonomic nervous system, as well as the fatigue and stress status of the human body.
  • Specifically, the statistical relationship between the heart rate and heart rate variability features and the emotional state index can be obtained through training samples, that is, the statistical relationship between these features and the corresponding fatigue index, stress index, and vitality index. Then, after the heart rate feature and heart rate variability feature corresponding to the PPG signal are obtained, the fatigue index, stress index, and vitality index corresponding to the PPG signal, i.e. the emotional state index, are obtained according to this statistical relationship.
  • Table 1 is the fatigue index grade and evaluation table;
  • Table 2 is the stress index grade and evaluation table;
  • Table 3 is the vitality index grade and evaluation table.
  • Table 1 to Table 3 are as follows:
  • The emotional state is classified to form a preset matching relationship between the emotional state index and the emotional state. After the emotional state index is obtained, the emotional state corresponding to it is obtained according to the emotional state index and the preset matching relationship, for example by consulting Table 1 to Table 3 to obtain the emotional state corresponding to the emotional index, thereby realizing recognition of the emotional state.
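  • Because Tables 1 to 3 are not reproduced above, the lookup can only be illustrated with hypothetical grade bands. The sketch below shows the shape of such a preset matching relationship; the thresholds and labels are assumptions, not the application's actual table values:

```python
# Hypothetical grade bands, highest threshold first; the patent's actual
# Tables 1-3 are not reproduced, so these values are illustrative only.
FATIGUE_GRADES = [
    (80, "good"),
    (60, "attention needed"),
    (0,  "poor"),
]

def grade(index, bands=FATIGUE_GRADES):
    """Map an emotional state index to its grade via the matching table."""
    for threshold, label in bands:
        if index >= threshold:
            return label
    return bands[-1][1]  # fall through to the lowest grade
```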
  • The embodiments of this application use smart mobile devices to quantify emotion: the smart mobile device performs fingertip detection and collects the photoplethysmographic pulse wave (PPG), which, combined with a neural network RNN algorithm, is used to extract heart rate and heart rate variability (HRV) features.
  • In an embodiment, the step of acquiring a plurality of fingertip images corresponding to the fingertips includes: S301, in response to an instruction to collect fingertip video, turning on the flash of the camera of the mobile terminal; S302, prompting the user to cover the camera with a fingertip; S303, judging whether the camera is covered by the fingertip; S304, if the camera is covered by the fingertip, recording the fingertip through the camera to obtain a fingertip video, and if not, returning to the step of prompting the user to cover the camera with a fingertip; S305, performing image extraction on the fingertip video to obtain fingertip images.
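  • The S301-S305 flow amounts to the loop below; the `camera` methods and the `prompt` callback are hypothetical stand-ins for a real mobile camera API, shown only to make the control flow concrete:

```python
def capture_fingertip_images(camera, prompt):
    """Control-flow sketch of steps S301-S305 (hypothetical camera API)."""
    camera.turn_on_flash()                              # S301: flash on
    prompt("Cover the camera with your fingertip")      # S302: prompt user
    while not camera.is_covered_by_fingertip():         # S303: coverage check
        prompt("Cover the camera with your fingertip")  # loop back to S302
    video = camera.record()                             # S304: record video
    return camera.extract_frames(video)                 # S305: extract frames
```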
  • The camera of the smart mobile terminal can be used to perform fingertip detection, and the emotional state can then be evaluated.
  • Specifically, the mobile terminal responds to the user's operation instruction on a preset function, that is, the instruction to collect fingertip video, and turns on the camera flash of the mobile device. It prompts the user to cover the camera with a fingertip, so that the user blocks the camera with a finger, and judges through image recognition whether the camera is covered by the fingertip, for example by judging whether the image feature extracted from the occluding object matches the preset fingertip image feature; if it matches, the camera is determined to be covered by the fingertip, and video recording begins.
  • Because the blood flow at the fingertip changes with each heartbeat, the recorded video's pixel values also change periodically, so the fingertip video images contained in the recorded fingertip video can reflect the user's heart rate, and in turn the user's emotional state through the heart rate.
  • In an embodiment, the step of converting the format corresponding to the fingertip image to obtain the PPG signal corresponding to the fingertip image includes: performing RGB classification on each frame of the fingertip image according to a preset separation method to obtain the three RGB channels corresponding to each frame; separating the red channel from the RGB channels; calculating on the red channel to extract the PPG signal corresponding to the red channel; and combining all the PPG signals corresponding to the red channel to obtain the PPG signal corresponding to the fingertip image.
  • Specifically, RGB classification is performed on each frame of the fingertip image according to a preset separation method. The preset separation method includes using the Halcon program and Halcon's own example images to understand the RGB image and gray values, or separating the image into three channels through OpenCV or Matlab, to obtain the three RGB channels corresponding to each frame of the fingertip image. The heart-rate-related pixel data, such as the red channel or the green channel, is then separated from the RGB channels: the PPG signal corresponding to the red channel is extracted, and all the PPG signals corresponding to the red channel are combined to obtain the PPG signal corresponding to the fingertip image. In this way the format of each video frame is converted and the PPG signal obtained, which is then combined with a preset neural network model, such as a neural network RNN algorithm, to extract heart rate and heart rate variability (HRV) features.
  • In an embodiment, the method further includes: obtaining all the emotional state indexes within a preset time period; and calculating the average value of the emotional state index over all those indexes.
  • The step of obtaining the emotional state corresponding to the emotional state index according to the emotional state index and the preset matching relationship between the emotional state index and the emotional state then includes: obtaining the emotional state corresponding to the average value of the emotional state index according to that average value and the preset matching relationship.
  • The emotional state results of each fingertip detection, such as the vitality index, stress index, and fatigue index, can be used to analyze people's health trends: daily detection values are recorded as a curve, and the trend of emotional changes is monitored. All the emotional state indexes within a preset time period can be obtained, the average value of the emotional state index calculated, and the emotional state corresponding to that average obtained according to the preset matching relationship between the emotional state index and the emotional state.
  • The scoring system for the vitality, stress, and fatigue indexes obtained by the algorithm is shown in Table 1 to Table 3 above.
  • By measuring the emotional state within a preset time period through the average value of the emotional state index, the emotional state over a period of time can be obtained more accurately, thereby measuring the stability of the user's emotional state.
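  • The averaging step itself is simple; a minimal sketch, where the list holds all indexes detected in the preset period:

```python
from statistics import mean

def period_average_index(indexes):
    """Average emotional state index over all detections in the preset
    time period; raises if the period contains no detections."""
    if not indexes:
        raise ValueError("no emotional state indexes in the period")
    return mean(indexes)
```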
  • For users who persist in long-term use, health trends can be analyzed and changes in emotional state displayed digitally, thereby identifying mental health risks, quantitatively assessing the user's mental health, and identifying the risk of mental illness in time.
  • In an embodiment, the method further includes: drawing all the emotional state indexes into an emotional trend diagram, and outputting the emotional trend diagram together with the emotional state corresponding to the average value of the emotional state index, so as to display the emotional state.
  • In an embodiment, the method further includes: judging whether the emotional state index is less than or equal to a preset emotional state index threshold; if the emotional state index is less than or equal to the preset emotional state index threshold, outputting a preset emotional guidance prompt; judging whether an instruction corresponding to the user agreeing to perform the emotional guidance operation is received; and if such an instruction is received, guiding the user to adjust their emotions according to a preset emotional guidance method.
  • Specifically, the user's emotional state can be guided through a preset emotional guidance method to improve it. For example, it is judged whether the emotional state index is less than or equal to the preset emotional state index threshold; if so, the user's emotional state is determined to be in a bad state, and a preset emotional guidance prompt is output to remind the user that the emotional state needs to be improved, since otherwise it would adversely affect physical and mental health.
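  • The threshold test that triggers the guidance prompt can be sketched as below; the default threshold of 60 is an illustrative assumption, not a value specified by this application:

```python
def needs_emotional_guidance(index, threshold=60):
    """True when the emotional state index is at or below the preset
    threshold, i.e. the state is judged bad and a preset emotional
    guidance prompt should be output. Threshold value is illustrative."""
    return index <= threshold
```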
  • In an embodiment, the step of guiding the user to adjust emotions according to the preset emotion guidance method includes: obtaining the emotional state index; obtaining a preset type of healing music corresponding to the emotional state index according to the emotional state index; and combining the preset type of healing music with preset mindfulness training to guide the user in adjusting emotions.
  • Specifically, the rhythm and tone of the music are adjusted according to professional breathing patterns such as the abdominal breathing method, and white noise such as wind, rain, and water droplets is added according to different theme scenes, finally forming preset types of healing music. The preset types of healing music form a matching relationship with the emotional state indexes, that is, different emotional state indexes correspond to different types of healing music, so that different emotional states can be assisted in a targeted way, improving the effect of regulating the emotional state.
  • In addition, the preset healing music, i.e. meditation music created by AI, is combined with the preset mindfulness training method to provide effective emotional adjustment services on the mobile device for users whose health evaluation is "attention needed", thereby combining music therapy with mindfulness training. Considering that a mobile device can be used anytime and anywhere, this provides short-term emotional regulation and enhances its effect. Since traditional mindfulness training generally requires the user to persist for more than 30 minutes, the short-term emotional adjustment provided by the embodiments of this application instead offers flexible mindfulness voice guidance on mobile devices, for example 8 kinds of 5-10 minute mindfulness voice guidance, recommending emotional regulation services according to the user's different life scenarios.
  • The recommended content includes emotional first aid, mindful breathing, body scanning, preparation before going to bed, restful sleep, alleviating anxiety, accepting anxiety, enhancing concentration, and so on.
  • Users can choose the mindfulness voice guidance they need according to their emotional state and, together with soothing meditation music and breathing rhythm adjustment, can relax effectively, slowly shifting from "thinking" mode to "feeling" mode and returning thought and attention to the body itself. This can effectively calm emotions and provides an effective, convenient method of emotional adjustment for people who have long been in an anxious, stressful living environment. By combining mindfulness training methods with AI-created meditation music, the smart mobile device provides a set of short-term emotional adjustment services, allowing users to use fragmented time to adjust their emotions, effectively relieving users' anxiety, tension, and stress, and improving the efficiency of emotional adjustment.
  • Emotion management based on smart mobile devices can thus be realized through the above methods: AI health fingertip detection technology, AI-created meditation music technology, and mindfulness training methods together construct a closed-loop system of "emotion detection - analysis - quantified assessment - emotion regulation", providing effective emotion management for people.
  • In contrast, traditional emotional regulation relies on psychological intervention guidance or psychological treatment plans provided by professional psychologists. Such a traditional method has a high threshold, which is not conducive to people's daily emotional regulation, and easily leads to a long-term backlog of emotional problems, or even their development into mental illness, before emotional regulation is carried out.
  • The embodiments of this application improve the efficiency, effectiveness, and convenience of emotional regulation.
  • The emotion recognition method based on a neural network model described in the above embodiments can recombine the technical features contained in different embodiments as needed to obtain combined implementations, all of which remain within the scope of protection claimed by this application.
  • FIG. 4 is a schematic block diagram of an emotion recognition apparatus based on a neural network model provided by an embodiment of the application.
  • an embodiment of the present application also provides a neural network model-based emotion recognition device.
  • The emotion recognition device based on the neural network model includes units for executing the above-described emotion recognition method based on the neural network model, and the device can be configured in a computer device such as a mobile terminal.
  • Specifically, referring to FIG. 4,
  • the emotion recognition apparatus 400 based on a neural network model includes a first acquisition unit 401, a conversion unit 402, an extraction unit 403, a second acquisition unit 404 and a third acquisition unit 405.
  • the first acquiring unit 401 is configured to acquire several fingertip images corresponding to the fingertips of the fingers;
  • the conversion unit 402 is configured to convert the format corresponding to the fingertip images to obtain the PPG signal corresponding to the fingertip images;
  • the extraction unit 403 is used to extract the heart rate feature and the heart rate variability feature from the PPG signal through a preset neural network model;
  • the second acquisition unit 404 is used to obtain the emotional state index corresponding to the PPG signal according to the heart rate feature and the heart rate variability feature;
  • the third obtaining unit 405 is configured to obtain the emotional state corresponding to the emotional state index according to the emotional state index and the preset matching relationship between the emotional state index and the emotional state.
  • the first acquiring unit 401 includes: a response subunit for turning on the flashlight of the camera of the mobile terminal in response to an instruction to collect a fingertip video; a prompt subunit for prompting the user to block the camera with a fingertip; a judging subunit for judging whether the camera is blocked by the fingertip; a recording subunit for recording the fingertip through the camera to obtain a fingertip video if the camera is blocked by the fingertip; and an extraction subunit for performing image extraction on the fingertip video to obtain the fingertip images.
  • the device 400 for emotion recognition based on the neural network model further includes: a fourth obtaining unit, configured to obtain all the emotional state indexes within a preset time period; and a calculating unit, configured to calculate the average value of all the emotional state indexes.
  • the third obtaining unit 405 is configured to obtain the emotional state corresponding to the average value of the emotional state indexes according to that average value and the preset matching relationship between the emotional state index and the emotional state.
  • the device 400 for emotion recognition based on the neural network model further includes: a drawing unit for drawing all the emotional state indexes into an emotional trend diagram; and a display unit for outputting the emotional trend diagram and the emotional state corresponding to the average value of the emotional state indexes so as to display the emotional state.
  • the device 400 for emotion recognition based on the neural network model further includes: a first judging unit, configured to judge whether the emotional state index is less than or equal to a preset emotional state index threshold; a prompt unit, configured to output a preset emotion guidance prompt if the emotional state index is less than or equal to the preset emotional state index threshold; a second judging unit, configured to judge whether an instruction indicating that the user agrees to perform the emotion guidance operation is received; and a guidance unit, configured to guide the user to adjust emotions according to a preset emotion guidance mode if the instruction indicating that the user agrees to perform the emotion guidance operation is received.
  • the guidance unit includes: a first obtaining subunit for obtaining the emotional state index; a second obtaining subunit for obtaining the preset type of healing music corresponding to the emotional state index; and a guiding subunit for combining the preset type of healing music with preset mindfulness training to guide the user to adjust emotions.
  • the division and connection of the various units in the emotion recognition device based on the neural network model are only used for illustration.
  • the emotion recognition device based on the neural network model can be divided into different units as needed.
  • the units in the emotion recognition device based on the neural network model can be connected in different order and manners to complete all or part of the functions of the emotion recognition device based on the neural network model.
  • the above-mentioned emotion recognition apparatus based on the neural network model can be implemented in the form of a computer program, and the computer program can be run on the computer device shown in FIG. 5.
  • FIG. 5 is a schematic block diagram of a computer device according to an embodiment of the present application.
  • the computer device 500 may be a computer device such as a desktop computer or a server, or may be a component or part in another device.
  • the computer device 500 includes a processor 502, a memory, and a network interface 505 connected through a system bus 501.
  • the memory may include a non-volatile storage medium 503 and an internal memory 504, and the memory may also include a volatile storage medium.
  • the non-volatile storage medium 503 can store an operating system 5031 and a computer program 5032.
  • the processor 502 can execute the aforementioned emotion recognition method based on the neural network model.
  • the processor 502 is used to provide calculation and control capabilities to support the operation of the entire computer device 500.
  • the internal memory 504 provides an environment for the operation of the computer program 5032 in the non-volatile storage medium 503.
  • when the computer program 5032 is executed by the processor 502, the processor 502 executes the aforementioned neural network model-based emotion recognition method.
  • the network interface 505 is used for network communication with other devices.
  • the specific computer device 500 may include more or fewer components than shown in the figure, or combine certain components, or have a different component arrangement.
  • the computer device may only include a memory and a processor.
  • the structures and functions of the memory and the processor are the same as those of the embodiment shown in FIG. 5, which will not be repeated here.
  • the processor 502 is configured to run a computer program 5032 stored in a memory to implement the neural network model-based emotion recognition method described in the embodiment of the present application.
  • the processor 502 may be a central processing unit (CPU), and the processor 502 may also be another general-purpose processor, a digital signal processor (DSP), an application-specific integrated circuit (ASIC), or a field-programmable gate array (FPGA).
  • the general-purpose processor may be a microprocessor, or the processor may also be any conventional processor.
  • the computer-readable storage medium may be a non-volatile computer-readable storage medium or a volatile computer-readable storage medium; the computer-readable storage medium stores a computer program, and when the computer program is executed by the processor, the processor executes the steps of the neural network model-based emotion recognition method described in the above embodiments.
  • the storage medium is a physical, non-transitory storage medium that can store computer programs, such as a USB flash drive, a mobile hard disk, a read-only memory (ROM), a magnetic disk, or an optical disk.

Abstract

An emotion recognition method, an apparatus, a computer device, and a computer-readable storage medium, belonging to the technical field of artificial intelligence. The method comprises: acquiring a plurality of fingertip images corresponding to a fingertip (S101); converting the format corresponding to the fingertip images to obtain a PPG signal corresponding to said fingertip images (S102); by means of a preset neural network model, extracting a heart rate feature and a heart rate variability feature from the PPG signal (S103); acquiring, on the basis of the heart rate feature and the heart rate variability feature, an emotional state index corresponding to the PPG signal (S104); and on the basis of the emotional state index and a preset matching relationship between the emotional state index and emotional states, obtaining the emotional state corresponding to the emotional state index (S105).

Description

Emotion recognition method, device, computer equipment and computer-readable storage medium

This application claims priority to Chinese patent application No. 202010752926.1, entitled "Emotion recognition method, device, computer equipment and computer-readable storage medium", filed with the Chinese Patent Office on July 30, 2020, the entire contents of which are incorporated herein by reference.
Technical field

This application relates to the field of artificial intelligence technology, and in particular to an emotion recognition method, device, computer equipment, and computer-readable storage medium based on a neural network model.
Background

Negative emotions can lead to abnormal states such as insomnia and even depression. If such an emotional state is not relieved and released in time over a long period, it can, at the mild end, cause a series of discomforts such as headache, insomnia, and anxiety, and, at the severe end, increase the risk of a series of chronic diseases such as cardiovascular and cerebrovascular diseases, diabetes, and cancer. Emotion is the first line of defense of human immunity, and good emotion management can effectively help users build a strong physique.

Emotion management is generally divided into emotional state assessment and emotion regulation. The traditional method of emotional state assessment is subjective rating: based on the subject's description of their own feelings, a score is given on a professionally designed psychological scale. The inventor realized that although this method is simple and common in operation, it has poor objectivity, is better suited to large-sample statistical analysis, and makes long-term, repeated assessment of an individual difficult. Therefore, in traditional technology, the efficiency of recognizing or assessing emotions is low.
Summary of the invention

This application provides an emotion recognition method, device, computer equipment, and computer-readable storage medium based on a neural network model, which can solve the problem of low emotion recognition efficiency in traditional technology.
In a first aspect, this application provides an emotion recognition method based on a neural network model, the method including: acquiring several fingertip images corresponding to a fingertip; converting the format corresponding to the fingertip images to obtain the PPG signal corresponding to the fingertip images; extracting a heart rate feature and a heart rate variability feature from the PPG signal through a preset neural network model; obtaining, according to the heart rate feature and the heart rate variability feature, the emotional state index corresponding to the PPG signal; and obtaining, according to the emotional state index and a preset matching relationship between the emotional state index and emotional states, the emotional state corresponding to the emotional state index.

In a second aspect, this application also provides an emotion recognition device based on a neural network model, including: a first acquisition unit for acquiring several fingertip images corresponding to a fingertip; a conversion unit for converting the format corresponding to the fingertip images to obtain the PPG signal corresponding to the fingertip images; an extraction unit for extracting a heart rate feature and a heart rate variability feature from the PPG signal through a preset neural network model; a second acquisition unit for obtaining, according to the heart rate feature and the heart rate variability feature, the emotional state index corresponding to the PPG signal; and a third acquisition unit for obtaining, according to the emotional state index and a preset matching relationship between the emotional state index and emotional states, the emotional state corresponding to the emotional state index.

In a third aspect, an embodiment of this application also provides a computer device, which includes a memory and a processor, the memory storing a computer program, and the processor performing the following steps when executing the computer program: acquiring several fingertip images corresponding to a fingertip; converting the format corresponding to the fingertip images to obtain the PPG signal corresponding to the fingertip images; extracting a heart rate feature and a heart rate variability feature from the PPG signal through a preset neural network model; obtaining, according to the heart rate feature and the heart rate variability feature, the emotional state index corresponding to the PPG signal; and obtaining, according to the emotional state index and a preset matching relationship between the emotional state index and emotional states, the emotional state corresponding to the emotional state index.

In a fourth aspect, this application also provides a computer-readable storage medium storing a computer program which, when executed by a processor, causes the processor to implement the following steps: acquiring several fingertip images corresponding to a fingertip; converting the format corresponding to the fingertip images to obtain the PPG signal corresponding to the fingertip images; extracting a heart rate feature and a heart rate variability feature from the PPG signal through a preset neural network model; obtaining, according to the heart rate feature and the heart rate variability feature, the emotional state index corresponding to the PPG signal; and obtaining, according to the emotional state index and a preset matching relationship between the emotional state index and emotional states, the emotional state corresponding to the emotional state index.
This application provides an emotion recognition method, device, computer equipment, and computer-readable storage medium based on a neural network model. This application acquires several fingertip images corresponding to a fingertip, converts the format corresponding to the fingertip images to obtain the PPG signal corresponding to the fingertip images, extracts a heart rate feature and a heart rate variability feature from the PPG signal through a preset neural network model, obtains the emotional state index corresponding to the PPG signal according to the heart rate feature and the heart rate variability feature, and obtains the emotional state corresponding to the emotional state index according to the emotional state index and a preset matching relationship between the emotional state index and emotional states. Since the embodiments of this application obtain the emotional state index from the fingertip images and thereby the emotional state, the objectivity of emotion recognition is improved, and the efficiency of emotion recognition can also be improved.
Description of the drawings

To explain the technical solutions of the embodiments of this application more clearly, the drawings used in the description of the embodiments are briefly introduced below. Obviously, the drawings in the following description are some embodiments of this application; those of ordinary skill in the art can obtain other drawings based on these drawings without creative work.
FIG. 1 is a schematic flowchart of an emotion recognition method based on a neural network model provided by an embodiment of this application;

FIG. 2 is a schematic diagram of an emotional state calculation manner of the emotion recognition method based on a neural network model provided by an embodiment of this application;

FIG. 3 is a schematic diagram of a sub-flow in the emotion recognition method based on a neural network model provided by an embodiment of this application;

FIG. 4 is a schematic block diagram of an emotion recognition device based on a neural network model provided by an embodiment of this application; and

FIG. 5 is a schematic block diagram of computer equipment provided by an embodiment of this application.
Detailed description

The technical solutions in the embodiments of this application will be described clearly and completely below in conjunction with the drawings in the embodiments of this application. Obviously, the described embodiments are some rather than all of the embodiments of this application. Based on the embodiments of this application, all other embodiments obtained by those of ordinary skill in the art without creative work fall within the scope of protection of this application.
Please refer to FIG. 1, which is a schematic flowchart of an emotion recognition method based on a neural network model provided by an embodiment of this application. As shown in FIG. 1, the method includes the following steps S101-S105:

S101. Acquire several fingertip images corresponding to a fingertip.
Specifically, fingertip images can be captured by an image acquisition device, for example by the camera of the mobile terminal itself. Multiple images within a preset time can also be captured in burst mode, to better describe the continuous fingertip condition within the preset time and thus reflect the emotional state more accurately. Alternatively, a fingertip video can be recorded with the camera of the mobile terminal itself, and several fingertip images can then be obtained by extracting each frame of the video. Moreover, owing to the continuity of the video itself, every fingertip frame extracted from the video is continuous and can accurately reflect the fingertip condition within the preset time, so that when emotion recognition is performed on the fingertip images, the accuracy of emotional state recognition can also be improved.
S102. Convert the format corresponding to the fingertip images to obtain the PPG signal corresponding to the fingertip images.

Here, the PPG signal refers to photoplethysmography (PPG), also called the photoplethysmographic pulse wave, a detection method that uses photoelectric means to detect blood volume changes in living tissue.

Specifically, when a light beam of a certain wavelength irradiates the skin surface of the fingertip, the contraction and expansion of blood vessels at every heartbeat affect the transmission of light. For example, in transmission PPG, when light passing through the fingertip traverses the skin tissue and is then reflected to a photosensitive sensor, the light is attenuated to a certain extent. The absorption of light by muscles, bones, veins, and other connective tissue is essentially constant (provided the measurement site does not move significantly), but the arteries are different: because blood pulses through the arteries, their absorption of light naturally varies, resulting in changes in the fingertip image that appear as changes in its pixels. Therefore, analyzing the pixel changes contained in the fingertip images can reflect the characteristics of blood flow, and the blood flow in turn reflects the heart rate characteristics.
Therefore, after the fingertip images are obtained, the format corresponding to the fingertip images is converted. The fingertip images can be described by converting them into the three RGB channels, and the RGB channels are then analyzed, for example the green or red channel among the three, to obtain the PPG signal corresponding to the fingertip images. The three RGB channels arise because an RGB image, also called a full-color image, has three channels: R (red), G (green), and B (blue). For example, the Halcon program and the images supplied with Halcon can be used to understand RGB images and gray values. A color image can also be separated into three channels with OpenCV or Matlab; note that when processing color images, the two store the channels in different orders, Matlab as R, G, B and OpenCV as B, G, R.
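As an illustrative sketch of the channel-separation step described above (a minimal NumPy version; the green-channel choice, the RGB channel order, and the synthetic frames are our own assumptions for demonstration, not requirements of the application):

```python
import numpy as np

def frames_to_ppg(frames):
    """Convert a sequence of RGB fingertip frames into a raw PPG signal.

    Each frame is assumed to be an (H, W, 3) uint8 array in R, G, B order
    (OpenCV would deliver B, G, R instead, as noted in the text). The
    per-frame spatial mean of the green channel is taken as one PPG sample.
    """
    ppg = np.array([frame[:, :, 1].mean() for frame in frames])
    # Remove the DC component so only the pulsatile part remains.
    return ppg - ppg.mean()

# Illustrative use with synthetic frames whose brightness oscillates
# like a pulse: 150 frames at 30 fps with a ~72 bpm (1.2 Hz) component.
t = np.arange(150) / 30.0
brightness = 128 + 10 * np.sin(2 * np.pi * 1.2 * t)
frames = [np.full((8, 8, 3), b, dtype=np.uint8) for b in brightness]
signal = frames_to_ppg(frames)
```

In a real pipeline the frames would come from the recorded fingertip video rather than being synthesized.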
S103. Extract a heart rate feature and a heart rate variability feature from the PPG signal through a preset neural network model.

Here, heart rate variability (HRV) refers to the small differences between successive heartbeat intervals.

Specifically, in the embodiments of this application, a preset neural network model, for example a recurrent neural network (RNN), is pre-trained in advance. Since a neural network model has the ability to learn automatically, after training samples containing sample PPG signals and the heart rate features and heart rate variability features corresponding to the sample PPG signals are input into the preset neural network, the preset neural network model learns automatically from the sample PPG signals and their corresponding heart rate features and heart rate variability features, so that after a PPG signal is subsequently obtained, the heart rate feature and heart rate variability feature can be extracted from it. After the preset neural network model is trained, its training effect can be checked with validation samples; if the preset neural network model passes validation, it is applied to the production environment.
Since the shape of the PPG signal is similar to the arterial blood pressure (ABP) waveform, and heart rate is closely related to blood pressure, a person's emotional state directly affects heart rate and blood pressure, and blood pressure and heart rate in turn reflect the emotional state. This makes the PPG signal a non-invasive heart rate monitoring tool: by analyzing the PPG signal, a person's heart rate and blood pressure can be measured and the person's emotional state recognized. Because the periodicity of the PPG signal corresponds to the heart rhythm, the heart rate can be estimated from the PPG signal.

The heart rate variability features can be obtained through frequency-domain analysis and time-domain analysis of the PPG signal.
Time-domain analysis of HRV quantitatively describes the variation of the cardiac cycle with various statistical methods, for example by measuring and calculating the average RR interval over a certain period, the difference or ratio between the longest and shortest RR intervals, and the standard deviation of all RR intervals. In practice, the raw PPG signal is filtered, the number of peaks of the PPG signal within a certain period is counted, and the heart rate value can then be calculated. For example, assuming continuous sampling for 5 seconds, if the number of peaks within the 5 s is N, the heart rate is N*12.
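The peak-counting heart-rate estimate and the RR-interval standard deviation described above can be sketched as follows (the function names and the synthetic 1.2 Hz pulse are illustrative; a real signal would first be filtered as the text notes):

```python
import numpy as np

def count_peaks(ppg):
    """Count strict local maxima of a (pre-filtered) PPG signal."""
    return int(np.sum((ppg[1:-1] > ppg[:-2]) & (ppg[1:-1] > ppg[2:])))

def heart_rate_bpm(ppg, window_s=5.0):
    """Peak count over a window, scaled to beats per minute.

    For a 5 s window with N peaks this reduces to N * 12,
    matching the example in the text.
    """
    return count_peaks(ppg) * (60.0 / window_s)

def sdnn_ms(rr_intervals_ms):
    """Standard deviation of RR intervals, a basic time-domain HRV feature."""
    return float(np.std(rr_intervals_ms))

# A clean 1.2 Hz (72 bpm) pulse sampled at 30 Hz for 5 s has 6 peaks.
fs, dur = 30.0, 5.0
t = np.arange(int(fs * dur)) / fs
pulse = np.sin(2 * np.pi * 1.2 * t)
```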
Frequency-domain analysis, also called spectral analysis, uses special calculation methods to decompose the heart rate fluctuation curve, which varies over time, into a sum of sinusoids of different frequencies and amplitudes, yielding the HRV spectrum. Its advantage is that the periodicity of cardiac activity can be quantified. The human HRV power spectrum is commonly divided into four regions: the high-frequency, low-frequency, very-low-frequency, and ultra-low-frequency bands. For example, the frequency-domain characteristics can be obtained by applying an FFT to the PPG signal.
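A minimal sketch of this FFT-based band decomposition (the band edges below are the conventionally used HRV values, which the text names but does not fix; the 4 Hz resampled series is an assumption for demonstration):

```python
import numpy as np

# Conventional HRV frequency bands in Hz (ULF, VLF, LF, HF);
# these edges are common practice, not values fixed by the application.
BANDS = {"ulf": (0.0, 0.003), "vlf": (0.003, 0.04),
         "lf": (0.04, 0.15), "hf": (0.15, 0.4)}

def band_powers(signal, fs):
    """Split a signal's FFT power spectrum into the four HRV bands."""
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)
    power = np.abs(np.fft.rfft(signal)) ** 2
    return {name: float(power[(freqs >= lo) & (freqs < hi)].sum())
            for name, (lo, hi) in BANDS.items()}

# A pure 0.1 Hz oscillation should land almost entirely in the LF band.
fs = 4.0                           # e.g. a 4 Hz resampled RR series
t = np.arange(int(fs * 300)) / fs  # 5 minutes of samples
sig = np.sin(2 * np.pi * 0.1 * t)
powers = band_powers(sig, fs)
```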
After the pre-trained preset neural network model is deployed to the production environment and receives a PPG signal, it can extract the heart rate feature and heart rate variability feature from the PPG signal by performing time-domain analysis and frequency-domain analysis on it. Through AI health fingertip detection technology, the embodiments of this application solve the problems of the subjective rating method well and improve the objectivity and accuracy of emotion assessment.

S104. Obtain the emotional state index corresponding to the PPG signal according to the heart rate feature and the heart rate variability feature.
Specifically, after the heart rate feature and the heart rate variability feature are extracted, the emotional state index corresponding to the PPG signal is obtained from them. Please refer to FIG. 2, which is a schematic diagram of an emotional state calculation manner of the emotion recognition method based on a neural network model provided by an embodiment of this application. As shown in FIG. 2, in this embodiment, emotional state indexes such as the vitality index, stress index, and fatigue index corresponding to the PPG signal are obtained from the heart rate feature and the heart rate variability feature, and are used to quantitatively assess the activity and balance of the autonomic nervous system and the fatigue and stress state of the human body.
For example, through big data analysis of existing data and the training samples used in pre-training, a statistical relationship between the heart rate and heart rate variability features and the emotional state index can be obtained; for instance, the training samples yield the statistical relationship between these features and the corresponding fatigue index, stress index, and vitality index. After the heart rate and heart rate variability features of a PPG signal are obtained, the fatigue index, stress index, and vitality index corresponding to the PPG signal — that is, its emotional state index — are obtained according to this statistical relationship. In one example, the statistical relationship between the features and the emotional state index is shown in Tables 1 to 3 below, where Table 1 is the fatigue index grade and evaluation table, Table 2 is the stress index grade and evaluation table, and Table 3 is the vitality index grade and evaluation table:
Table 1 (fatigue index grades and evaluation): [image PCTCN2020122418-appb-000001, not reproduced]
Table 2 (stress index grades and evaluation): [images PCTCN2020122418-appb-000002 and -000003, not reproduced]
Table 3 (vitality index grades and evaluation): [images PCTCN2020122418-appb-000004 and -000005, not reproduced]
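Since Tables 1 to 3 survive only as images, the grade lookup they describe can only be sketched with hypothetical boundaries; the threshold values and labels below are illustrative placeholders, not the patent's actual table entries:

```python
# Hypothetical grade boundaries for a 0-100 emotional state index; the
# patent's actual Tables 1-3 are image-only, so these values are invented.
def grade_index(value, bounds=(40, 60, 80)):
    """Map an emotional state index to a coarse grade label."""
    labels = ("poor", "fair", "good", "excellent")
    for i, upper in enumerate(bounds):
        if value < upper:
            return labels[i]
    return labels[-1]
```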
S105. Obtain the emotional state corresponding to the emotional state index according to the emotional state index and a preset matching relationship between emotional state indexes and emotional states.
Specifically, based on prior knowledge and accumulated data, emotional states are classified to form a preset matching relationship between emotional state indexes and emotional states. After the emotional state index is obtained, the corresponding emotional state is obtained according to the index and the preset matching relationship; for example, referring again to Tables 1 to 3, the emotional state corresponding to the index can be read off, thereby recognizing the emotional state. To quantify emotional health, the embodiments of this application perform emotion quantification on a smart mobile device: fingertip detection on the device collects a photoplethysmography (PPG) signal and, combined with a recurrent neural network (RNN) algorithm, extracts heart rate and heart rate variability (HRV) features to detect the user's vitality, stress, and fatigue states. This objectively quantifies both current and long-term emotional states and lowers the barrier to neural-network-based emotion recognition, obtaining the user's physical and mental health indicators without any additional external equipment.
Please refer to FIG. 3, which is a schematic diagram of a sub-process of the neural-network-model-based emotion recognition method provided by an embodiment of this application. As shown in FIG. 3, in this embodiment, the step of acquiring several fingertip images includes: S301, in response to an instruction to capture a fingertip video, turning on the flash of the camera of the mobile terminal; S302, prompting the user to cover the camera with a fingertip; S303, determining whether the camera is covered by the fingertip; S304, if the camera is covered by the fingertip, recording the fingertip through the camera to obtain a fingertip video, and if the camera is not covered by the fingertip, returning to the step of prompting the user to cover the camera with a fingertip; S305, extracting images from the fingertip video to obtain the fingertip images.
Specifically, when emotion recognition and thus emotion management are implemented on a smart mobile terminal, fingertip detection can be performed through the terminal's camera and the emotional state then evaluated. When the photoplethysmography (PPG) signal needs to be collected through the camera — for example, when the user opens an emotion management application for emotion recognition — the mobile terminal responds to the user's operation on the preset function, i.e., receives the instruction to capture a fingertip video. It then turns on the camera flash and prompts the user to cover the camera with a fingertip. Image recognition is used to determine whether the camera is covered by the fingertip, for example by checking whether the image features extracted from the occluding object match preset fingertip image features. If they match, the camera is determined to be covered by the fingertip and video recording of the fingertip begins; otherwise, the method returns to the step of prompting the user to cover the camera. Because blood flow in the fingertip fluctuates periodically with changes in blood vessel volume, the pixel values of the recorded video also change periodically, so the images in the recorded fingertip video can reflect the user's heart rate, which in turn reflects the user's emotional state.
In one embodiment, the step of converting the format of the fingertip images to obtain the PPG signal corresponding to the fingertip images includes: performing RGB classification on each frame of fingertip image according to a preset separation method to obtain the three RGB channels of each frame; separating the red channel from the three RGB channels; performing computation on the red channel to extract the PPG signal corresponding to the red channel; and combining the PPG signals corresponding to all the red channels to obtain the PPG signal corresponding to the fingertip images.
Specifically, RGB classification is performed on each frame according to a preset separation method. The preset separation method may use a Halcon program and Halcon's built-in images to interpret RGB images and gray values, or OpenCV and Matlab may be used to separate a color image into three channels, thereby obtaining the three RGB channels of each frame. Pixel data related to the heart rate, such as the red channel (or the green channel), is then separated from the three channels to extract the PPG signal corresponding to the red channel, and the PPG signals of all the red channels are combined to obtain the PPG signal corresponding to the fingertip images. In this way, the format of each video frame is converted into a PPG signal, which is then processed by the preset neural network model, for example an RNN algorithm, to extract the heart rate and heart rate variability (HRV) features.
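The red-channel extraction described above can be sketched as follows. This assumes decoded frames are available as H x W x 3 arrays in RGB order (OpenCV's `cv2.VideoCapture` yields BGR, where the red index would be 2), and omits the filtering and normalization a real pipeline would add:

```python
import numpy as np

def red_channel_means(frames):
    """Convert decoded video frames (H x W x 3, RGB order) into a raw PPG trace:
    one sample per frame, taken as the spatial mean of the red channel."""
    return np.array([np.asarray(f, dtype=float)[..., 0].mean() for f in frames])
```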
In one embodiment, after the step of obtaining the emotional state index corresponding to the PPG signal, the method further includes: obtaining all the emotional state indexes within a preset time period, and computing the average of all those emotional state indexes. The step of obtaining the emotional state corresponding to the emotional state index then includes: obtaining the emotional state corresponding to the average emotional state index according to that average and the preset matching relationship between emotional state indexes and emotional states.
Specifically, the emotional state results of each fingertip detection — for example the vitality index, stress index, and fatigue index — are recorded; the person's health trend is analyzed according to the emotional state indexes, and the daily detection values are recorded as a curve to monitor the trend of emotional change. All the emotional state indexes within a preset time period can be obtained and their average computed, and the emotional state corresponding to the average is obtained according to the average and the preset matching relationship. For example, the average of 7 days of emotional state results is computed as a baseline reference, so the current emotional state can be understood by direct comparison. For users who use emotion detection for 7 consecutive days, the 7-day average is computed and a health evaluation with suggestions is provided. The scoring systems for the vitality, stress, and fatigue indexes derived by the algorithm are shown in Tables 1 to 3 above. Measuring the emotional state over a preset time period by the average index yields the emotional state over that period more accurately, thereby measuring the stability of the user's emotional state. Through such monitoring within the preset time period, a health trend analysis can be performed for long-term users, digitally presenting changes in emotional state, thereby identifying mental health risks, quantitatively assessing the user's mental health, and recognizing the risk of mental illness in time.
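The 7-day baseline computation described above can be sketched as a simple windowed average; the window length follows the description, while the function name and interface are illustrative:

```python
from statistics import mean

def baseline(index_history, days=7):
    """Average of the most recent `days` daily emotional state indexes,
    used as a baseline for comparing the current state (7 days per the text)."""
    window = index_history[-days:]
    return mean(window) if window else None
```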
In one embodiment, after the step of obtaining the emotional state corresponding to the average emotional state index, the method further includes: plotting all the emotional state indexes as an emotion trend diagram; and outputting the emotion trend diagram and the emotional state corresponding to the average emotional state index for emotional state display.
Specifically, all the emotional state indexes within the preset time period are plotted as an emotion trend diagram, and the diagram and the emotional state corresponding to the average index are output for emotional state display, giving the user an intuitive understanding of their own emotional state and thereby improving the efficiency of emotion management.
Specifically, after the step of obtaining the emotional state corresponding to the emotional state index, the method further includes: determining whether the emotional state index is less than or equal to a preset emotional state index threshold; if so, outputting a preset emotion guidance prompt; determining whether an instruction corresponding to the user's consent to emotion guidance is received; and, if such an instruction is received, guiding the user to regulate emotions according to a preset emotion guidance method.
Specifically, if the user's emotional state is not good, it can be guided through a preset emotion guidance method to improve it. For example, it is determined whether the emotional state index is less than or equal to the preset emotional state index threshold; if so, the user's emotional state is determined to be poor, and a preset emotion guidance prompt is output to remind the user that the emotional state needs to be improved, since otherwise it may adversely affect physical and mental health. After the prompt, it is determined whether an instruction corresponding to the user's consent to emotion guidance is received; if so, the user is guided to regulate emotions according to the preset emotion guidance method. Through the embodiments of this application, quantified emotion measurement and emotion monitoring can be provided on a smart mobile device to identify mental health risks, and a short-term emotion regulation service can be used when the user's emotions are in a poor state, reducing the user's risk of mental illness and serving as an auxiliary means of mental health regulation.
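The threshold check and consent-gated guidance flow can be sketched as below; the threshold value and the return labels are illustrative assumptions, since the patent does not specify them:

```python
def maybe_offer_guidance(index, threshold=60, user_accepts=None):
    """If the index is at or below the threshold, show the guidance prompt;
    start the preset guidance flow only when the user consents.
    `threshold=60` is an invented example value."""
    if index > threshold:
        return "no_action"          # emotional state is fine
    if user_accepts:
        return "start_guidance"     # user consented to emotion guidance
    return "prompt_shown"           # prompt output, awaiting consent
```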
Further, the step of guiding the user to regulate emotions according to the preset emotion guidance method includes: obtaining the emotional state index; obtaining, according to the emotional state index, the preset type of healing music corresponding to the index; and combining the preset type of healing music with preset mindfulness training to guide the user to regulate emotions.
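The index-to-music lookup in the first two sub-steps can be sketched with a hypothetical mapping; the patent states only that different emotional state indexes correspond to different types of healing music, so the bands and theme names below are invented for illustration:

```python
# Hypothetical index bands and theme names; not taken from the patent.
MUSIC_BY_BAND = [(40, "rain"), (70, "water_drops"), (101, "wind")]

def healing_music_for(index):
    """Return the healing-music theme matched to an emotional state index."""
    for upper, theme in MUSIC_BY_BAND:
        if index < upper:
            return theme
    return MUSIC_BY_BAND[-1][1]
```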
Specifically, meditation music can be composed by AI: through machine learning on works of a particular healing-music style, the main melody of the healing music is created using multi-layer sequence models and high-dimensional music feature extraction; the tempo and pitch of the music are adjusted according to the breathing rhythms of professional breathing methods such as balanced breathing and abdominal breathing; and white noise such as wind, rain, and dripping water is added according to different theme scenes, finally forming preset types of healing music. A matching relationship is established between the preset types of healing music and the emotional state indexes — different indexes correspond to different types of healing music — so that different emotional states receive targeted auxiliary regulation, improving the regulation effect.
The preset healing narration corresponding to the AI-composed meditation music is then combined with preset mindfulness training methods, providing users whose health evaluation is "needs attention" with an effective emotion regulation service on the mobile device. By combining music therapy with mindfulness training, and taking into account that mobile devices can be used anytime and anywhere, a short-term emotion regulation service is provided that improves the regulation effect. Traditional mindfulness training generally requires the user to continue for more than 30 minutes, whereas the short-term emotion regulation provided by the embodiments of this application offers flexible guided mindfulness audio on the mobile device — for example, eight 5-10 minute guided sessions — so that emotion regulation services can be recommended according to the user's different life scenarios. The recommended content covers emotional first aid, mindful breathing, body scanning, bedtime preparation, restful sleep, relieving anxiety, accepting anxiety, and improving concentration. Users choose the guided mindfulness audio they need according to their emotional state and, together with soothing meditation music and breathing rhythm adjustment, can relax effectively, gradually shifting from a "thinking" mode to a "feeling" mode and returning thought and attention to the body itself. This effectively calms emotions and provides an effective and convenient emotion regulation method for people living with long-term anxiety and stress. By combining mindfulness training methods with AI-composed meditation music, a set of short-term emotion regulation services is provided on the smart mobile device, allowing users to regulate emotions in fragmented time, effectively addressing anxiety, tension, and stress and improving the efficiency of emotion regulation.
In the embodiments of this application, emotion management based on a smart mobile device can be achieved in the above manner. By combining AI fingertip health detection, AI-composed meditation music, and mindfulness training methods, a closed loop of "emotion detection - analysis - quantified evaluation - emotion regulation" is constructed for effective emotion management. In contrast, with traditional approaches — obtaining psychological intervention guidance or treatment plans from professional psychologists through telephone hotlines or counseling rooms — the barrier to emotion regulation is high, which hinders people's daily emotion regulation and easily allows emotional problems to accumulate for a long time, or even develop into mental illness, before regulation takes place. The embodiments of this application improve the efficiency and convenience of emotion regulation.
It should be noted that, in the neural-network-model-based emotion recognition methods described in the above embodiments, the technical features of different embodiments can be recombined as needed to obtain combined implementations, all of which fall within the scope of protection claimed by this application.
Please refer to FIG. 4, which is a schematic block diagram of a neural-network-model-based emotion recognition apparatus provided by an embodiment of this application. Corresponding to the above method, an embodiment of this application also provides a neural-network-model-based emotion recognition apparatus. As shown in FIG. 4, the apparatus includes units for executing the above method and can be configured in a computer device such as a mobile terminal. Specifically, the emotion recognition apparatus 400 includes a first acquisition unit 401, a conversion unit 402, an extraction unit 403, a second acquisition unit 404, and a third acquisition unit 405. The first acquisition unit 401 is configured to acquire several fingertip images corresponding to fingertips; the conversion unit 402 is configured to convert the format of the fingertip images to obtain the corresponding PPG signal; the extraction unit 403 is configured to extract heart rate features and heart rate variability features from the PPG signal through the preset neural network model; the second acquisition unit 404 is configured to obtain the emotional state index corresponding to the PPG signal according to the heart rate and heart rate variability features; and the third acquisition unit 405 is configured to obtain the emotional state corresponding to the emotional state index according to the index and the preset matching relationship between emotional state indexes and emotional states.
In one embodiment, the first acquisition unit 401 includes: a response subunit, configured to turn on the flash of the camera of the mobile terminal in response to an instruction to capture a fingertip video; a prompt subunit, configured to prompt the user to cover the camera with a fingertip; a determination subunit, configured to determine whether the camera is covered by the fingertip; a recording subunit, configured to record the fingertip through the camera to obtain a fingertip video if the camera is covered by the fingertip; and an extraction subunit, configured to extract images from the fingertip video to obtain the fingertip images.
In one embodiment, the apparatus 400 further includes: a fourth acquisition unit, configured to obtain all the emotional state indexes within a preset time period; and a computation unit, configured to compute the average of all those emotional state indexes. The third acquisition unit 405 is then configured to obtain the emotional state corresponding to the average emotional state index according to that average and the preset matching relationship between emotional state indexes and emotional states.
In one embodiment, the apparatus 400 further includes: a plotting unit, configured to plot all the emotional state indexes as an emotion trend diagram; and a display unit, configured to output the emotion trend diagram and the emotional state corresponding to the average emotional state index for emotional state display.
In one embodiment, the apparatus 400 further includes: a first determination unit, configured to determine whether the emotional state index is less than or equal to a preset emotional state index threshold; a prompt unit, configured to output a preset emotion guidance prompt if the emotional state index is less than or equal to the threshold; a second determination unit, configured to determine whether an instruction corresponding to the user's consent to emotion guidance is received; and a guidance unit, configured to guide the user to regulate emotions according to a preset emotion guidance method if such an instruction is received.
In one embodiment, the guidance unit includes: a first acquisition subunit, configured to obtain the emotional state index; a second acquisition subunit, configured to obtain, according to the emotional state index, the preset type of healing music corresponding to the index; and a guidance subunit, configured to combine the preset type of healing music with preset mindfulness training to guide the user to regulate emotions.
It should be noted that those skilled in the art can clearly understand that, for the specific implementation of the above apparatus and its units, reference may be made to the corresponding descriptions in the foregoing method embodiments; for convenience and brevity of description, details are not repeated here.
Meanwhile, the division and connection of the units in the above apparatus are for illustration only. In other embodiments, the apparatus may be divided into different units as needed, and the units may be connected in a different order and manner, to complete all or part of the functions of the above apparatus.
The above emotion recognition apparatus may be implemented in the form of a computer program, which can run on the computer device shown in FIG. 5.
Please refer to FIG. 5, which is a schematic block diagram of a computer device provided by an embodiment of this application. The computer device 500 may be a computer device such as a desktop computer or a server, or a component or part of another device.
Referring to FIG. 5, the computer device 500 includes a processor 502, a memory, and a network interface 505 connected through a system bus 501, where the memory may include a non-volatile storage medium 503 and an internal memory 504, and may also include a volatile storage medium.
The non-volatile storage medium 503 can store an operating system 5031 and a computer program 5032. When the computer program 5032 is executed, it causes the processor 502 to perform the above neural-network-model-based emotion recognition method.
The processor 502 provides computing and control capabilities to support the operation of the entire computer device 500.
The internal memory 504 provides an environment for running the computer program 5032 stored in the non-volatile storage medium 503; when executed by the processor 502, the computer program 5032 causes the processor 502 to perform the above neural-network-model-based emotion recognition method.
The network interface 505 is used for network communication with other devices. Those skilled in the art can understand that the structure shown in FIG. 5 is only a block diagram of the part of the structure related to the solution of this application and does not limit the computer device 500 to which the solution is applied; the specific computer device 500 may include more or fewer components than shown, combine certain components, or have a different arrangement of components. For example, in some embodiments the computer device may include only a memory and a processor; in such embodiments the structure and functions of the memory and the processor are consistent with the embodiment shown in FIG. 5 and are not repeated here.
其中,所述处理器502用于运行存储在存储器中的计算机程序5032,以实现本申请实施 例所描述的基于神经网络模型的情绪识别方法。Wherein, the processor 502 is configured to run a computer program 5032 stored in a memory to implement the neural network model-based emotion recognition method described in the embodiment of the present application.
应当理解,在本申请实施例中,处理器502可以是中央处理单元(Central ProcessingUnit,CPU),该处理器502还可以是其他通用处理器、数字信号处理器(Digital Signal Processor,DSP)、专用集成电路(Application Specific Integrated Circuit,ASIC)、现成可编程门阵列(Field-Programmable GateArray,FPGA)或者其他可编程逻辑器件、分立门或者晶体管逻辑器件、分立硬件组件等。其中,通用处理器可以是微处理器或者该处理器也可以是任何常规的处理器等。It should be understood that in the embodiment of the present application, the processor 502 may be a central processing unit (Central Processing Unit, CPU), and the processor 502 may also be other general-purpose processors, digital signal processors (Digital Signal Processors, DSPs), and special purpose processors. Integrated circuit (Application Specific Integrated Circuit, ASIC), off-the-shelf programmable gate array (Field-Programmable Gate Array, FPGA) or other programmable logic devices, discrete gates or transistor logic devices, discrete hardware components, etc. Among them, the general-purpose processor may be a microprocessor or the processor may also be any conventional processor.
A person of ordinary skill in the art will understand that all or part of the processes in the methods of the foregoing embodiments can be implemented by a computer program, and the computer program may be stored in a computer-readable storage medium. The computer program is executed by at least one processor in the computer system to implement the process steps of the foregoing method embodiments.
Therefore, the present application also provides a computer-readable storage medium. The computer-readable storage medium may be non-volatile or volatile, and stores a computer program that, when executed by a processor, causes the processor to perform the steps of the emotion recognition method based on a neural network model described in the embodiments above.
The storage medium is a physical, non-transitory storage medium, for example a USB flash drive, a removable hard disk, a read-only memory (ROM), a magnetic disk, an optical disc, or any other physical storage medium that can store a computer program.
A person of ordinary skill in the art will appreciate that the units and algorithm steps of the examples described in the embodiments disclosed herein can be implemented by electronic hardware, computer software, or a combination of the two. To clearly illustrate this interchangeability of hardware and software, the composition and steps of each example have been described above in general terms of their functions. Whether these functions are performed in hardware or software depends on the specific application and design constraints of the technical solution. Skilled practitioners may implement the described functions in different ways for each particular application, but such implementations should not be considered beyond the scope of the present application.
The above are only specific implementations of the present application, but the protection scope of the present application is not limited thereto. Any person skilled in the art can readily conceive of various equivalent modifications or replacements within the technical scope disclosed in the present application, and these modifications or replacements shall all fall within the protection scope of the present application. Therefore, the protection scope of the present application shall be subject to the protection scope of the claims.

Claims (20)

  1. An emotion recognition method based on a neural network model, comprising:
    acquiring a plurality of fingertip images corresponding to a fingertip of a finger;
    converting a format corresponding to the fingertip images to obtain a PPG signal corresponding to the fingertip images;
    extracting a heart rate feature and a heart rate variability feature from the PPG signal through a preset neural network model;
    obtaining, according to the heart rate feature and the heart rate variability feature, an emotional state index corresponding to the PPG signal;
    obtaining, according to the emotional state index and a preset matching relationship between emotional state indexes and emotional states, the emotional state corresponding to the emotional state index.
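The last three steps of claim 1 (feature extraction, index computation, state matching) can be sketched in Python. The peak statistics below are a toy stand-in for the preset neural network model, and the index formula and matching table are invented for illustration; none of this is the claimed implementation.

```python
import numpy as np

def extract_features(ppg):
    """Toy stand-in for the preset neural network model: a heart-rate proxy
    (count of strict local maxima) and a heart-rate-variability proxy
    (spread of the inter-peak intervals)."""
    peaks = np.where((ppg[1:-1] > ppg[:-2]) & (ppg[1:-1] > ppg[2:]))[0]
    intervals = np.diff(peaks)
    heart_rate = len(peaks)
    hrv = float(intervals.std()) if len(intervals) > 1 else 0.0
    return heart_rate, hrv

def emotional_state_index(heart_rate, hrv):
    # Invented score in [0, 100]; a real system would learn this mapping.
    return max(0.0, min(100.0, 100.0 - abs(heart_rate - 10) * 2.0 - hrv * 5.0))

# Hypothetical preset matching relationship: lower bound of the index -> state.
MATCHING = ((60.0, "calm"), (30.0, "tense"), (0.0, "stressed"))

def match_emotional_state(index, table=MATCHING):
    for lower_bound, state in table:
        if index >= lower_bound:
            return state
    return table[-1][1]
```

On a clean synthetic periodic PPG signal this pipeline yields a high index and the "calm" label; the point is only the data flow of claim 1, not the particular numbers.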
  2. The emotion recognition method based on a neural network model according to claim 1, wherein the step of acquiring a plurality of fingertip images corresponding to a fingertip of a finger comprises:
    in response to an instruction to capture a fingertip video, turning on a flash of a camera of a mobile terminal;
    prompting a user to cover the camera with a fingertip;
    determining whether the camera is covered by the fingertip;
    if the camera is covered by the fingertip, recording the fingertip through the camera to obtain the fingertip video;
    performing image extraction on the fingertip video to obtain the fingertip images.
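A minimal sketch of this capture flow, with the platform camera and flash calls omitted: the occlusion heuristic (a flash-lit fingertip on the lens produces a strongly red image) and the frame-sampling step are assumptions for illustration, not the claimed detection method.

```python
import numpy as np

def fingertip_covers_camera(frame, red_ratio=0.6):
    """Heuristic occlusion test: with the flash on and a fingertip on the
    lens, light transmitted through tissue makes the image dominantly red."""
    r = frame[..., 0].astype(float)
    total = frame.astype(float).sum(axis=-1) + 1e-9  # avoid division by zero
    return float((r / total).mean()) >= red_ratio

def sample_frames(video, step=2):
    """Image extraction from the recorded fingertip video: keep every
    `step`-th frame as a fingertip image."""
    return [video[i] for i in range(0, len(video), step)]
```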
  3. The emotion recognition method based on a neural network model according to claim 1, wherein after the step of obtaining the emotional state index corresponding to the PPG signal, the method further comprises:
    acquiring all the emotional state indexes within a preset time period;
    calculating an average of all the emotional state indexes;
    and wherein the step of obtaining, according to the emotional state index and the preset matching relationship between emotional state indexes and emotional states, the emotional state corresponding to the emotional state index comprises:
    obtaining, according to the average of the emotional state indexes and the preset matching relationship between emotional state indexes and emotional states, the emotional state corresponding to the average of the emotional state indexes.
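The windowed averaging in this claim reduces to a filter-then-mean; the timestamped (timestamp, index) sample representation below is an assumption made for the sketch.

```python
from statistics import mean

def average_emotional_state_index(samples, window_start, window_end):
    """Average all emotional state indexes whose timestamp falls inside the
    preset time period [window_start, window_end].

    `samples` is assumed to be an iterable of (timestamp, index) pairs."""
    selected = [idx for ts, idx in samples if window_start <= ts <= window_end]
    if not selected:
        raise ValueError("no emotional state index in the preset time period")
    return mean(selected)
```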
  4. The emotion recognition method based on a neural network model according to claim 3, wherein after the step of obtaining the emotional state corresponding to the average of the emotional state indexes, the method further comprises:
    plotting all the emotional state indexes as an emotional trend diagram;
    outputting the emotional trend diagram and the emotional state corresponding to the average of the emotional state indexes for emotional state display.
  5. The emotion recognition method based on a neural network model according to claim 1, wherein after the step of obtaining the emotional state corresponding to the emotional state index, the method further comprises:
    determining whether the emotional state index is less than or equal to a preset emotional state index threshold;
    if the emotional state index is less than or equal to the preset emotional state index threshold, outputting a preset emotional guidance prompt;
    determining whether an instruction corresponding to an operation by which the user agrees to emotional guidance is received;
    if the instruction corresponding to the operation by which the user agrees to emotional guidance is received, guiding the user to perform emotional regulation in a preset emotional guidance manner.
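The branch structure of this claim (threshold check, prompt, consent, guidance) can be traced with a small function; the action names and the consent callback are invented for the sketch.

```python
def emotion_guidance_flow(index, threshold, user_consents):
    """Mirror the claimed branches: prompt only when the emotional state
    index is at or below the preset threshold, and guide only after the
    user consents. Returns the actions taken, in order."""
    actions = []
    if index <= threshold:
        actions.append("output_guidance_prompt")
        if user_consents():  # stands in for receiving the user's instruction
            actions.append("guide_emotion_regulation")
    return actions
```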
  6. The emotion recognition method based on a neural network model according to claim 5, wherein the step of guiding the user to perform emotional regulation in a preset emotional guidance manner comprises:
    acquiring the emotional state index;
    acquiring, according to the emotional state index, a preset type of healing music corresponding to the emotional state index;
    combining the preset type of healing music with preset mindfulness training to guide the user to perform emotional regulation.
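As one way to read "a preset type of healing music corresponding to the emotional state index", the selection can be a fixed range-to-type table; the ranges and type names here are invented, not taken from the application.

```python
def select_healing_music(index):
    """Hypothetical preset mapping from emotional state index ranges to a
    healing music type to combine with mindfulness training."""
    if index <= 30:
        return "deep-relaxation"
    if index <= 60:
        return "slow-ambient"
    return "light-uplifting"
```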
  7. The emotion recognition method based on a neural network model according to claim 1, wherein the step of converting the format corresponding to the fingertip images to obtain the PPG signal corresponding to the fingertip images comprises:
    performing RGB classification on each frame of the fingertip images according to a preset separation method to obtain three RGB channels corresponding to each frame of the fingertip images;
    separating a red channel from the three RGB channels;
    performing a calculation on the red channel to extract a PPG signal corresponding to the red channel;
    combining the PPG signals corresponding to all the red channels to obtain the PPG signal corresponding to the fingertip images.
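The channel separation in this claim can be shown directly with NumPy arrays: split each RGB frame, keep the red channel, reduce it to one sample per frame (mean intensity is an assumed reduction; the claim only says "a calculation"), and concatenate the per-frame samples into the PPG signal.

```python
import numpy as np

def red_channel_ppg(frames):
    """Build a PPG signal from fingertip frames: RGB separation per frame,
    red channel kept, one sample per frame, samples combined in order."""
    samples = []
    for frame in frames:
        red = frame[..., 0]                  # separate the red channel
        samples.append(float(np.mean(red)))  # assumed per-frame reduction
    return np.asarray(samples)
```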
  8. An emotion recognition apparatus based on a neural network model, comprising:
    a first acquiring unit configured to acquire a plurality of fingertip images corresponding to a fingertip of a finger;
    a conversion unit configured to convert a format corresponding to the fingertip images to obtain a PPG signal corresponding to the fingertip images;
    an extraction unit configured to extract a heart rate feature and a heart rate variability feature from the PPG signal through a preset neural network model;
    a second acquiring unit configured to obtain, according to the heart rate feature and the heart rate variability feature, an emotional state index corresponding to the PPG signal;
    a third acquiring unit configured to obtain, according to the emotional state index and a preset matching relationship between emotional state indexes and emotional states, the emotional state corresponding to the emotional state index.
  9. A computer device, comprising a memory and a processor connected to the memory, wherein the memory is configured to store a computer program and the processor is configured to run the computer program to perform the following steps:
    acquiring a plurality of fingertip images corresponding to a fingertip of a finger;
    converting a format corresponding to the fingertip images to obtain a PPG signal corresponding to the fingertip images;
    extracting a heart rate feature and a heart rate variability feature from the PPG signal through a preset neural network model;
    obtaining, according to the heart rate feature and the heart rate variability feature, an emotional state index corresponding to the PPG signal;
    obtaining, according to the emotional state index and a preset matching relationship between emotional state indexes and emotional states, the emotional state corresponding to the emotional state index.
  10. The computer device according to claim 9, wherein the step of acquiring a plurality of fingertip images corresponding to a fingertip of a finger comprises:
    in response to an instruction to capture a fingertip video, turning on a flash of a camera of a mobile terminal;
    prompting a user to cover the camera with a fingertip;
    determining whether the camera is covered by the fingertip;
    if the camera is covered by the fingertip, recording the fingertip through the camera to obtain the fingertip video;
    performing image extraction on the fingertip video to obtain the fingertip images.
  11. The computer device according to claim 9, wherein after the step of obtaining the emotional state index corresponding to the PPG signal, the steps further comprise:
    acquiring all the emotional state indexes within a preset time period;
    calculating an average of all the emotional state indexes;
    and wherein the step of obtaining, according to the emotional state index and the preset matching relationship between emotional state indexes and emotional states, the emotional state corresponding to the emotional state index comprises:
    obtaining, according to the average of the emotional state indexes and the preset matching relationship between emotional state indexes and emotional states, the emotional state corresponding to the average of the emotional state indexes.
  12. The computer device according to claim 11, wherein after the step of obtaining the emotional state corresponding to the average of the emotional state indexes, the steps further comprise:
    plotting all the emotional state indexes as an emotional trend diagram;
    outputting the emotional trend diagram and the emotional state corresponding to the average of the emotional state indexes for emotional state display.
  13. The computer device according to claim 9, wherein after the step of obtaining the emotional state corresponding to the emotional state index, the steps further comprise:
    determining whether the emotional state index is less than or equal to a preset emotional state index threshold;
    if the emotional state index is less than or equal to the preset emotional state index threshold, outputting a preset emotional guidance prompt;
    determining whether an instruction corresponding to an operation by which the user agrees to emotional guidance is received;
    if the instruction corresponding to the operation by which the user agrees to emotional guidance is received, guiding the user to perform emotional regulation in a preset emotional guidance manner.
  14. The computer device according to claim 13, wherein the step of guiding the user to perform emotional regulation in a preset emotional guidance manner comprises:
    acquiring the emotional state index;
    acquiring, according to the emotional state index, a preset type of healing music corresponding to the emotional state index;
    combining the preset type of healing music with preset mindfulness training to guide the user to perform emotional regulation.
  15. The computer device according to claim 9, wherein the step of converting the format corresponding to the fingertip images to obtain the PPG signal corresponding to the fingertip images comprises:
    performing RGB classification on each frame of the fingertip images according to a preset separation method to obtain three RGB channels corresponding to each frame of the fingertip images;
    separating a red channel from the three RGB channels;
    performing a calculation on the red channel to extract a PPG signal corresponding to the red channel;
    combining the PPG signals corresponding to all the red channels to obtain the PPG signal corresponding to the fingertip images.
  16. A computer-readable storage medium storing a computer program which, when executed by a processor, implements the following steps:
    acquiring a plurality of fingertip images corresponding to a fingertip of a finger;
    converting a format corresponding to the fingertip images to obtain a PPG signal corresponding to the fingertip images;
    extracting a heart rate feature and a heart rate variability feature from the PPG signal through a preset neural network model;
    obtaining, according to the heart rate feature and the heart rate variability feature, an emotional state index corresponding to the PPG signal;
    obtaining, according to the emotional state index and a preset matching relationship between emotional state indexes and emotional states, the emotional state corresponding to the emotional state index.
  17. The computer-readable storage medium according to claim 16, wherein the step of acquiring a plurality of fingertip images corresponding to a fingertip of a finger comprises:
    in response to an instruction to capture a fingertip video, turning on a flash of a camera of a mobile terminal;
    prompting a user to cover the camera with a fingertip;
    determining whether the camera is covered by the fingertip;
    if the camera is covered by the fingertip, recording the fingertip through the camera to obtain the fingertip video;
    performing image extraction on the fingertip video to obtain the fingertip images.
  18. The computer-readable storage medium according to claim 16, wherein after the step of obtaining the emotional state index corresponding to the PPG signal, the steps further comprise:
    acquiring all the emotional state indexes within a preset time period;
    calculating an average of all the emotional state indexes;
    and wherein the step of obtaining, according to the emotional state index and the preset matching relationship between emotional state indexes and emotional states, the emotional state corresponding to the emotional state index comprises:
    obtaining, according to the average of the emotional state indexes and the preset matching relationship between emotional state indexes and emotional states, the emotional state corresponding to the average of the emotional state indexes.
  19. The computer-readable storage medium according to claim 18, wherein after the step of obtaining the emotional state corresponding to the average of the emotional state indexes, the steps further comprise:
    plotting all the emotional state indexes as an emotional trend diagram;
    outputting the emotional trend diagram and the emotional state corresponding to the average of the emotional state indexes for emotional state display.
  20. The computer-readable storage medium according to claim 16, wherein after the step of obtaining the emotional state corresponding to the emotional state index, the steps further comprise:
    determining whether the emotional state index is less than or equal to a preset emotional state index threshold;
    if the emotional state index is less than or equal to the preset emotional state index threshold, outputting a preset emotional guidance prompt;
    determining whether an instruction corresponding to an operation by which the user agrees to emotional guidance is received;
    if the instruction corresponding to the operation by which the user agrees to emotional guidance is received, guiding the user to perform emotional regulation in a preset emotional guidance manner.
PCT/CN2020/122418 2020-07-30 2020-10-21 Emotion recognition method, apparatus, computer device, and computer-readable storage medium WO2021139310A1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202010752926.1 2020-07-30
CN202010752926.1A CN111797817B (en) 2020-07-30 2020-07-30 Emotion recognition method, emotion recognition device, computer equipment and computer readable storage medium

Publications (1)

Publication Number Publication Date
WO2021139310A1 true WO2021139310A1 (en) 2021-07-15

Family

ID=72828073

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2020/122418 WO2021139310A1 (en) 2020-07-30 2020-10-21 Emotion recognition method, apparatus, computer device, and computer-readable storage medium

Country Status (2)

Country Link
CN (1) CN111797817B (en)
WO (1) WO2021139310A1 (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115886815A (en) * 2022-11-10 2023-04-04 研祥智慧物联科技有限公司 Emotional pressure monitoring method and device and intelligent wearable device

Families Citing this family (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111797817B (en) * 2020-07-30 2024-04-19 平安科技(深圳)有限公司 Emotion recognition method, emotion recognition device, computer equipment and computer readable storage medium
CN112370057A (en) * 2020-11-09 2021-02-19 平安科技(深圳)有限公司 Pressure evaluation method and device, computer equipment and storage medium
CN112364329A (en) * 2020-12-09 2021-02-12 山西三友和智慧信息技术股份有限公司 Face authentication system and method combining heart rate detection
CN112716469B (en) * 2020-12-29 2022-07-19 厦门大学 Real-time heart rate extraction method and device based on fingertip video
CN113842145B (en) * 2021-10-11 2023-10-03 北京工业大学 Method, device and system for calculating emotion index based on pupil wave
CN114241719B (en) * 2021-12-03 2023-10-31 广州宏途数字科技有限公司 Visual fatigue state monitoring method, device and storage medium in student learning

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20180253094A1 (en) * 2013-03-27 2018-09-06 Pixart Imaging Inc. Safety monitoring apparatus and method thereof for human-driven vehicle
CN109793509A (en) * 2019-03-15 2019-05-24 北京科技大学 A kind of nuclear radiation detection and method for measuring heart rate and device
CN109993068A (en) * 2019-03-11 2019-07-09 华南理工大学 A kind of contactless human emotion's recognition methods based on heart rate and facial characteristics
CN111797817A (en) * 2020-07-30 2020-10-20 平安科技(深圳)有限公司 Emotion recognition method and device, computer equipment and computer-readable storage medium

Family Cites Families (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US10517521B2 (en) * 2010-06-07 2019-12-31 Affectiva, Inc. Mental state mood analysis using heart rate collection based on video imagery
CN109846496B (en) * 2017-11-30 2022-06-10 昆山光微电子有限公司 Hardware implementation method and combination of emotion perception function of intelligent wearable device



Also Published As

Publication number Publication date
CN111797817B (en) 2024-04-19
CN111797817A (en) 2020-10-20


Legal Events

Code 121: Ep: the epo has been informed by wipo that ep was designated in this application (ref document number 20912062; country of ref document: EP; kind code of ref document: A1)
Code 122: Ep: pct application non-entry in european phase (ref document number 20912062; country of ref document: EP; kind code of ref document: A1)