CN115120236A - Emotion recognition method and device, wearable device and storage medium - Google Patents

Emotion recognition method and device, wearable device and storage medium

Info

Publication number
CN115120236A
Authority
CN
China
Prior art keywords
state
user
motion state
variability
emotional
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202210463142.6A
Other languages
Chinese (zh)
Inventor
梁杰 (Liang Jie)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Guangdong Genius Technology Co Ltd
Original Assignee
Guangdong Genius Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Guangdong Genius Technology Co Ltd filed Critical Guangdong Genius Technology Co Ltd
Priority to CN202210463142.6A
Publication of CN115120236A
Legal status: Pending

Classifications

    • A: HUMAN NECESSITIES
    • A61: MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B: DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B 5/00: Measuring for diagnostic purposes; Identification of persons
    • A61B 5/16: Devices for psychotechnics; Testing reaction times; Devices for evaluating the psychological state
    • A61B 5/165: Evaluating the state of mind, e.g. depression, anxiety
    • A61B 5/02: Detecting, measuring or recording pulse, heart rate, blood pressure or blood flow; Combined pulse/heart-rate/blood pressure determination; Evaluating a cardiovascular condition not otherwise provided for, e.g. using combinations of techniques provided for in this group with electrocardiography or electroauscultation; Heart catheters for measuring blood pressure
    • A61B 5/0205: Simultaneously evaluating both cardiovascular conditions and different types of body conditions, e.g. heart and respiratory condition
    • A61B 5/103: Detecting, measuring or recording devices for testing the shape, pattern, colour, size or movement of the body or parts thereof, for diagnostic purposes
    • A61B 5/11: Measuring movement of the entire body or parts thereof, e.g. head or hand tremor, mobility of a limb
    • A61B 5/1123: Discriminating type of movement, e.g. walking or running
    • A61B 5/68: Arrangements of detecting, measuring or recording means, e.g. sensors, in relation to patient
    • A61B 5/6801: Arrangements specially adapted to be attached to or worn on the body surface
    • A61B 5/6802: Sensor mounted on worn items
    • A61B 5/72: Signal processing specially adapted for physiological signals or for diagnostic purposes
    • A61B 5/7235: Details of waveform analysis
    • A61B 5/7264: Classification of physiological signals or data, e.g. using neural networks, statistical classifiers, expert systems or fuzzy systems

Abstract

The embodiment of the application discloses an emotion recognition method, an emotion recognition device, a wearable device and a storage medium, wherein the method comprises the following steps: acquiring vital sign data corresponding to a user at a first moment, and acquiring a first motion state corresponding to the user at a second moment, the second moment being earlier than the first moment; the first motion state is determined according to the user motion information corresponding to the user at the second moment; extracting a first variability feature from the vital sign data; and obtaining, from the first variability feature and the first motion state, a first emotional state corresponding to one or more time points within a detection time period. By implementing the embodiment of the application, the accuracy of emotional state recognition can be improved.

Description

Emotion recognition method and device, wearable device and storage medium
Technical Field
The application relates to the technical field of wearable equipment, in particular to an emotion recognition method and device, wearable equipment and a storage medium.
Background
Emotion is an important indicator of physical and mental health. Positive emotions can improve the physiological functioning of the human body, motivate people to make efforts, and foster a positive, upward attitude toward life; negative emotions can weaken people's physical strength and energy and inhibit their activity. At present, increasing pressure in daily life leads to negative emotions in many people. If negative emotions are not processed and adjusted in time, psychological illness may result. It is therefore desirable to provide a method of identifying emotional states so that people can take timely measures against negative emotions. The existing technology mainly relies on an emotion prediction model based on a stress index, but such a method cannot comprehensively and accurately reflect the emotional state of the user.
Disclosure of Invention
The embodiment of the application discloses an emotion recognition method and device, wearable equipment and a storage medium, and the accuracy of emotion state recognition can be improved.
The embodiment of the application discloses an emotion recognition method applied to a wearable device, and the method comprises the following steps:
acquiring vital sign data corresponding to a user at a first moment, and acquiring a first motion state corresponding to the user at a second moment; the second time is earlier than the first time; the first motion state is determined according to user motion information corresponding to the user at the second moment;
extracting a first variability feature from the vital sign data;
obtaining a first emotional state corresponding to one or more time points during a detection time period from the first variability feature and the first motion state.
As an alternative embodiment, the vital sign data comprises a heart rate signal and a respiration signal; said extracting a first variability feature from said vital sign data, comprising:
extracting a first set of respiratory variability features from the respiratory signal by interpreting the respiratory signal; the first set of respiratory variability features comprises one or more respiratory statistical features;
extracting a first set of heart rate variability features from the heart rate signal by parsing the heart rate signal; the first set of heart rate variability features comprises at least one of: heart rate statistics in the time domain, frequency domain features in the frequency domain, and non-linear features in the non-linear domain.
As an alternative embodiment, said obtaining a first emotional state corresponding to one or more points in time during a detection period from said first variability feature and said first motor state comprises:
processing the first variability feature and the first motion state through a classification model to obtain a first emotional state output by the classification model corresponding to one or more time points within the detection time period.
As an optional embodiment, prior to said processing of said first variability feature and said first motion state by a classification model, said method further comprises:
converting the first motion state into a numerical characteristic to obtain a second motion state represented by the numerical characteristic;
said processing said first variability feature and said first motion state through a classification model to obtain a first emotional state output by said classification model corresponding to one or more time points within said detection period, comprising:
processing the first variability feature and the second motion state represented by the numerical feature through a classification model to obtain a first emotional state corresponding to one or more time points within the detection time period output by the classification model.
As an alternative embodiment, the second motion state represented by the numerical characteristic is represented by a plurality of first numerical values in a gaussian distribution; said processing said first variability feature and said second motion state represented by numerical features through a classification model, comprising:
normalizing the plurality of first values and/or the first variability features included in the second motion state to obtain a third motion state and a second variability feature with consistent value ranges; the third motion state is represented by a plurality of second numerical values in a gaussian distribution;
and processing the third motion state and the second variability characteristic with consistent value ranges through a classification model.
As an optional embodiment, after the obtaining the first emotional state corresponding to the plurality of time points within the detection time period, the method further comprises:
calculating transition probabilities corresponding to the first emotion states corresponding to the multiple time points through a filtering model; the transition probabilities are used to indicate the similarity between the respective first emotional states;
determining the first emotional state with the transition probability lower than a first threshold value as a mutation emotional state in the first emotional states corresponding to the multiple time points through the filtering model;
and filtering the mutation emotional state in the detection time period through the filtering model so as to output a second emotional state corresponding to other time points except the time point corresponding to the mutation emotional state in the detection time period.
As an optional embodiment, the acquiring vital sign data acquired by the wearable device at the first time includes:
detecting an activity state of a user;
when the active state is a resting state, acquiring vital sign data acquired by the wearable device at a first moment.
As an optional implementation, after the determining the activity status according to the activity intensity, the method further comprises:
and when the active state is the motion state, re-detecting the active state of the user until the active state is the rest state.
The embodiment of the application discloses emotion recognition device, the device includes:
the acquisition module is used for acquiring vital sign data corresponding to a user at a first moment and acquiring a first motion state corresponding to the user at a second moment; the second time is earlier than the first time; the first motion state is determined according to user motion information corresponding to the user at the second moment;
the feature module is used for extracting a first variability feature from the vital sign data;
a state module to obtain a first emotional state corresponding to one or more time points within a detection time period from the first variability feature and the first motion state.
The embodiment of the application discloses a wearable device, which comprises a memory and a processor, wherein a computer program is stored in the memory, and when the computer program is executed by the processor, the processor is enabled to realize any emotion recognition method disclosed by the embodiment of the application.
The embodiment of the application discloses a computer-readable storage medium which stores a computer program, wherein the computer program enables a computer to execute any emotion recognition method disclosed in the embodiment of the application.
Compared with the related art, the embodiment of the application has the following beneficial effects:
obtaining vital sign data of a user at a first moment, and obtaining a first motion state of the user at a second moment earlier than the first moment; extracting a first variability feature from the vital sign data; and obtaining, from the first variability feature and the first motion state, a first emotional state corresponding to one or more time points within a detection time period. The embodiment of the application recognizes the user's emotion using multiple kinds of time-asynchronous physiological data, such as the vital sign data and the motion state, so the accuracy of emotional state recognition can be improved.
Drawings
In order to more clearly illustrate the technical solutions in the embodiments of the present application, the drawings needed to be used in the embodiments will be briefly described below, and it is obvious that the drawings in the following description are only some embodiments of the present application, and it is obvious for those skilled in the art to obtain other drawings without creative efforts.
Fig. 1 is a schematic flow chart of a method for emotion recognition disclosed in an embodiment of the present application;
FIG. 2 is a schematic flow chart of another emotion recognition method disclosed in an embodiment of the present application;
FIG. 3 is a flow chart of another emotion recognition method disclosed in an embodiment of the present application;
FIG. 4 is a flow chart of another emotion recognition method disclosed in the embodiments of the present application;
FIG. 5 is a flow chart of another emotion recognition method disclosed in an embodiment of the present application;
fig. 6 is a schematic structural diagram of an emotion recognition apparatus disclosed in an embodiment of the present application;
fig. 7 is a schematic structural diagram of a wearable device disclosed in an embodiment of the present application.
Detailed Description
The technical solutions in the embodiments of the present application will be clearly and completely described below with reference to the drawings in the embodiments of the present application, and it is obvious that the described embodiments are only a part of the embodiments of the present application, and not all of the embodiments. All other embodiments obtained by a person of ordinary skill in the art based on the embodiments in the present application without making any creative effort belong to the protection scope of the present application.
It is to be noted that the terms "comprises" and "comprising" and any variations thereof in the examples and figures of the present application are intended to cover non-exclusive inclusions. For example, a process, method, system, article, or apparatus that comprises a list of steps or elements is not limited to only those steps or elements listed, but may alternatively include other steps or elements not listed, or inherent to such process, method, article, or apparatus.
The embodiment of the application discloses an emotion recognition method and device, wearable equipment and a storage medium, and the accuracy of emotion state recognition can be improved. The following are detailed below.
The method disclosed in the embodiment of the application is suitable for a wearable device. The wearable device can be worn directly on the user, or integrated into the user's clothing or accessories as a portable electronic device. The wearable device is not only a hardware device; through software support, data interaction and cloud interaction, it can also provide powerful intelligent functions such as computing. The wearable device can also be connected to various terminals to exchange information with them, where a terminal may be a mobile phone, a tablet computer, or another wearable device connected or bound to the wearable device. For example, the user of the wearable device may be a child, and the user of the terminal connected or bound to the wearable device may be the child's parent or guardian; in this way, the parent or guardian can monitor the child's physical condition in time and keep track of the child's health status.
The wearable device may include, but is not limited to, a smart watch or smart bracelet worn on the wrist, a smart ring worn on a finger, smart shoes or smart socks worn on the feet, smart glasses, a smart helmet or a smart headband worn on the head, as well as smart clothing, bags, crutches, accessories and other non-mainstream products, which are not limited in the embodiments of the present application.
Referring to fig. 1, fig. 1 is a schematic flowchart of an emotion recognition method disclosed in an embodiment of the present application. As shown in fig. 1, the emotion recognition method may include the steps of:
101. the method comprises the steps of obtaining vital sign data corresponding to a user at a first moment, and obtaining a first motion state corresponding to the user at a second moment.
The first motion state is determined according to the user motion information corresponding to the user at the second moment.
The user motion information may include acceleration data of the user acquired by an acceleration sensor in the wearable device, or angular velocity data of the user acquired by a gyroscope sensor in the wearable device, and may further include various user motion data acquired by various sensors such as the acceleration sensor and the gyroscope sensor.
The wearable device may determine the first motion state from the user motion information. Optionally, the user motion information may be input into a classifier, and the classifier may be any one of a Support Vector Machine (SVM), a Bayesian network, a softmax classifier, and the like, which is not specifically limited. After the user motion information is input into the classifier, the classifier may output the first motion state. The first motion state may include motion states such as jogging, sprinting, walking, swimming, riding, and standing.
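For a concrete picture of this step, the following is a minimal sketch of window-based motion-state classification with an SVM. The feature set (simple statistics of the resultant acceleration and angular velocity), the label list, the scikit-learn usage and the synthetic training data are illustrative assumptions, not the classifier design specified by this application.

    import numpy as np
    from sklearn.svm import SVC

    MOTION_LABELS = ["standing", "walking", "jogging", "sprinting", "riding", "swimming"]

    def imu_window_features(acc_xyz: np.ndarray, gyr_xyz: np.ndarray) -> np.ndarray:
        """Per-window statistics from acceleration and angular-velocity samples (n x 3)."""
        acc_mag = np.linalg.norm(acc_xyz, axis=1)   # resultant acceleration
        gyr_mag = np.linalg.norm(gyr_xyz, axis=1)   # resultant angular velocity
        return np.array([acc_mag.mean(), acc_mag.std(), acc_mag.max() - acc_mag.min(),
                         gyr_mag.mean(), gyr_mag.std()])

    # Synthetic stand-in for previously labelled training windows.
    rng = np.random.default_rng(0)
    X_train = rng.normal(size=(60, 5))
    y_train = rng.integers(0, len(MOTION_LABELS), size=60)
    clf = SVC(kernel="rbf").fit(X_train, y_train)

    # Classify one new 5-second window of IMU data (here random for illustration).
    window = imu_window_features(rng.normal(size=(250, 3)), rng.normal(size=(250, 3)))
    first_motion_state = MOTION_LABELS[clf.predict(window[None, :])[0]]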
The vital sign data may include a respiration signal, a heart rate signal, body temperature information, blood pressure information, and the like, which are not specifically limited. The wearable device may collect the user's respiration signal and heart rate signal through a photoelectric sensor. Illustratively, the wearable device separates the photoplethysmography (PPG) signal of the user acquired by the photoelectric sensor using a signal separation technique to obtain the respiration signal and the heart rate signal. The respiration signal may be a real-time respiration rate or a respiration rate in a resting state; the heart rate signal may be the user's real-time heart rate or the user's heart rate at rest. A resting state refers to a physiological state in which the user is not exercising.
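The application does not fix a particular signal separation technique; one common approach is band-pass filtering the PPG waveform into a respiration band and a cardiac band, sketched below with SciPy. The sampling rate, cut-off frequencies and synthetic PPG trace are illustrative assumptions.

    import numpy as np
    from scipy.signal import butter, filtfilt

    def bandpass(x: np.ndarray, fs: float, lo: float, hi: float, order: int = 3) -> np.ndarray:
        """Zero-phase Butterworth band-pass filter between lo and hi (Hz)."""
        b, a = butter(order, [lo, hi], btype="band", fs=fs)
        return filtfilt(b, a, x)

    fs = 50.0                                   # PPG sampling rate in Hz (illustrative)
    t = np.arange(0, 60, 1 / fs)
    ppg = np.sin(2 * np.pi * 1.2 * t) + 0.3 * np.sin(2 * np.pi * 0.25 * t)  # synthetic PPG

    respiration_signal = bandpass(ppg, fs, 0.1, 0.5)   # ~6-30 breaths per minute
    heart_rate_signal = bandpass(ppg, fs, 0.5, 3.0)    # ~30-180 beats per minute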
The second time is earlier than the first time. The respiration signal and heart rate signal of the user collected at the first time are real-time information, while the user motion information collected at the second time is historical information. For example, the first time may coincide with the current Beijing time: if the current Beijing time is 2 p.m., the first time is also 2 p.m., and the second time may be 1 p.m.
The user motion information collected at the second time may refer to the user motion information at the second time, or may refer to the user motion information accumulated in a time period before the first time, for example, the user motion information in 6 hours before the first time, which is not limited specifically.
The first motion state is introduced because a person's motion state influences their emotion. Appropriate exercise can cause the human body to secrete endorphins, improve a person's sense of pleasure, and relieve negative and depressed emotions; excessive exercise may exhaust a person's mental and physical strength and may instead aggravate feelings of loss and depression, causing the emotion to relapse. Therefore, introducing the first motion state allows the influence of the motion state on emotion to be fully considered during emotion recognition, making emotion recognition more accurate.
102. A first variability feature is extracted from the vital sign data.
The first variability feature may be one or more features extracted from the vital sign data that describe a differential change in the vital sign data over a sampling period. Wherein the vital sign data may include a heart rate signal and a respiration signal; the first variability feature may comprise, but is not limited to, a first set of respiratory variability features, a first heart rate variability feature.
In some embodiments, the first set of respiratory variability features is extracted from the respiratory signal by interpreting the respiratory signal; the first set of respiratory variability features comprises one or more respiratory statistical features; extracting a first set of heart rate variability features from the heart rate signal by analyzing the heart rate signal; the first set of heart rate variability features comprises at least one of the following features: heart rate statistics in the time domain, frequency domain features in the frequency domain, and non-linear features in the non-linear domain.
The first set of respiratory variability features comprises one or more respiratory statistical features, which may be calculated from variations in the frequency, rhythm and waveform of the respiration signal. The respiratory statistical features may include the skewness, kurtosis, median, mean energy and the like of the respiration signal. Since a person's respiratory movement can either be controlled at will or proceed unconsciously, much information indicative of the state of health can be mined from the respiration signal. When a person's emotion is in an anxious state, breathing becomes hurried; when a person's emotion is in a calm state, breathing becomes smooth. Therefore, by calculating one or more respiratory statistical features from the respiration signal to obtain the first set of respiratory variability features, the respiration signal can be accurately converted into numerical features that better reflect a person's physiological and psychological state for emotion recognition.
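A minimal sketch of these respiratory statistical features, computed with NumPy/SciPy, is shown below; the helper name respiratory_variability_features and the synthetic respiration trace are illustrative and not part of this application.

    import numpy as np
    from scipy.stats import skew, kurtosis

    def respiratory_variability_features(resp: np.ndarray) -> dict:
        """Skewness, kurtosis, median and mean energy of a respiration signal segment."""
        return {
            "skewness": float(skew(resp)),
            "kurtosis": float(kurtosis(resp)),
            "median": float(np.median(resp)),
            "mean_energy": float(np.mean(resp ** 2)),
        }

    # Example on a synthetic 0.25 Hz (15 breaths per minute) respiration trace.
    t = np.arange(0, 60, 0.02)
    features = respiratory_variability_features(np.sin(2 * np.pi * 0.25 * t))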
The first set of heart rate variability features comprises at least one of the following features: heart rate statistics in the time domain, frequency domain features in the frequency domain, and non-linear features in the non-linear domain.
In some alternative embodiments, a pulse signal recognition algorithm may be used to derive a peak-to-peak interval (PPI) series from the photoplethysmography (PPG) signal as the heart rate signal. The pulse signal recognition algorithm may include a principal component analysis method, a neural network method, a template matching method, and the like.
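As one hedged illustration (simpler than the template matching or neural network methods named above), the sketch below derives a PPI series by peak detection on the pulse waveform; the function name extract_ppi, the 0.4 s refractory distance and the synthetic pulse trace are assumptions for illustration only.

    import numpy as np
    from scipy.signal import find_peaks

    def extract_ppi(pulse_wave: np.ndarray, fs: float) -> np.ndarray:
        """Return the peak-to-peak interval (PPI) series in milliseconds."""
        peaks, _ = find_peaks(pulse_wave, distance=int(0.4 * fs))  # ~0.4 s refractory period
        return np.diff(peaks) / fs * 1000.0

    fs = 50.0
    t = np.arange(0, 60, 1 / fs)
    ppi_ms = extract_ppi(np.sin(2 * np.pi * 1.2 * t), fs)          # synthetic ~72 bpm pulse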
The heart rate statistics in the time domain may include standard deviation, zero crossing rate, root mean square, average energy, arithmetic mean, etc. of the heart rate signal.
The frequency domain features in the frequency domain may include low-frequency (LF) power, which reflects sympathetic activity; high-frequency (HF) power, which reflects vagal activity; the LF/HF ratio, which reflects the dynamic balance between sympathetic and vagal activity; and the like.
The nonlinear features in the nonlinear domain include the maximum Lyapunov exponent, permutation entropy, sample entropy, complexity and the like of the heart rate signal.
When a person's emotion is in an angry or sad state, the heartbeat accelerates; when a person's emotion is in a peaceful state, the heartbeat is steady. Thus, by obtaining the first set of heart rate variability features from the heart rate signal, the heart rate signal can be accurately converted into numerical features that better reflect a person's physiological and psychological state for emotion recognition.
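The sketch below computes representative time-domain (SDNN, RMSSD) and frequency-domain (LF, HF, LF/HF) features from a PPI series. The 4 Hz resampling rate, the 0.04-0.15 Hz and 0.15-0.40 Hz band limits and the feature names follow common HRV conventions rather than values specified by this application, and the nonlinear features (e.g. sample entropy, Lyapunov exponent) are omitted for brevity.

    import numpy as np
    from scipy.signal import welch
    from scipy.interpolate import interp1d

    def hrv_features(ppi_ms: np.ndarray) -> dict:
        """Time- and frequency-domain HRV features from a PPI series in milliseconds."""
        sdnn = float(np.std(ppi_ms))                                # time-domain features
        rmssd = float(np.sqrt(np.mean(np.diff(ppi_ms) ** 2)))

        # Resample the unevenly spaced PPI series to 4 Hz before the Welch PSD.
        t = np.cumsum(ppi_ms) / 1000.0
        fs = 4.0
        grid = np.arange(t[0], t[-1], 1 / fs)
        rr = interp1d(t, ppi_ms)(grid)
        f, pxx = welch(rr - rr.mean(), fs=fs, nperseg=min(256, len(rr)))
        lf_band = (f >= 0.04) & (f < 0.15)
        hf_band = (f >= 0.15) & (f < 0.40)
        lf = float(np.trapz(pxx[lf_band], f[lf_band]))
        hf = float(np.trapz(pxx[hf_band], f[hf_band]))

        return {"sdnn": sdnn, "rmssd": rmssd, "lf": lf, "hf": hf,
                "lf_hf": lf / hf if hf > 0 else float("nan")}

    # Example on a slowly oscillating synthetic PPI series around 800 ms.
    features = hrv_features(800 + 50 * np.sin(np.linspace(0, 20, 300)))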
103. A first emotional state is obtained from the first variability feature and the first motion state at one or more points in time during the detection period.
The first emotional state may include, but is not limited to, happy, sad, angry, depressed, calm, panic, and the like. The first emotional state can also be divided into two categories, one being a positive emotion and one being a negative emotion. Such as happy, calm belonging to positive mood, depressed, sad belonging to negative mood.
In some embodiments, the wearable device may output a real-time emotion of the user, i.e., the user may view a first emotional state corresponding to a point in time within the detection time period;
optionally, the wearable device may output, through the classification model, the first emotional states corresponding to the multiple time points within the detection time period, so as to filter the sudden change emotional states in the first emotional states corresponding to the respective time points.
The detection period may be a period between the first time and the second time.
In some optional embodiments, the physiological data of the user in different emotional states may be collected, and a mapping curve or a data table may be generated by using the emotional states and the physiological data corresponding to the emotional states. When the emotional state of the user is identified, physiological data of the user may be measured and substituted into the mapping curve or the data table to obtain the emotional state corresponding to the physiological data. In other alternative embodiments, emotion recognition may also be performed by detecting physiological data of the user, such as body temperature, sweat gland secretion, and the like, and combining the physiological data with heart rate, respiratory rate, and the like.
The embodiment of the application introduces the first motion state so that the influence of the motion state on emotion is fully considered during emotion recognition. Moreover, the first motion state is acquired earlier than the vital sign data, so emotion can be recognized using data from different time points or different time periods. Therefore, the embodiment of the application recognizes the user's emotion using multiple kinds of time-asynchronous physiological data, such as the vital sign data and the motion state data, which can improve the accuracy of emotional state recognition.
Referring to fig. 2, fig. 2 is a schematic flowchart of another emotion recognition method disclosed in the embodiment of the present application. As shown in fig. 2, the method comprises the steps of:
201. the method comprises the steps of obtaining vital sign data corresponding to a user at a first moment, and obtaining a first motion state corresponding to the user at a second moment.
202. A first variability feature is extracted from the vital sign data.
203. And converting the first motion state into a numerical characteristic to obtain a second motion state represented by the numerical characteristic.
The first motion state may include non-numerical data such as jogging, sprinting, walking, swimming, riding and standing. Non-numerical data includes alphabetic data, Boolean data, and the like.
The first variability feature comprises numerical data, whereas the first motion state is non-numerical data. Numerical data and non-numerical data are heterogeneous data, and if they are input into the classifier simultaneously, information in the non-numerical data is lost.
Therefore, the first motion state can be processed by Unsupervised Feature Transformation (UFT), which converts the first motion state into a numerical feature while ensuring no loss of information. Specifically, the first motion state may be converted into a second motion state represented numerically. The second motion state is a continuous numerical feature and may take larger or smaller values such as 0.75, 1 or 1.2. There is no information loss between the first motion state and the second motion state, which can be measured by the entropy of the feature. In the unsupervised feature transformation process, the conversion can be completed without using class labels, which reduces the bias that class labels would introduce.
After the first motion state is converted, through unsupervised feature transformation, into the second motion state represented by a numerical feature, the numerical feature is input into the classifier; this prevents the non-numerical feature from being discarded when non-numerical and numerical features are input into the classifier together.
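The application names unsupervised feature transformation without giving its formula. As a loose, label-free stand-in, the sketch below encodes each motion category by its empirical frequency in historical data; this is an illustrative assumption and not necessarily the UFT variant intended here.

    from collections import Counter

    def frequency_encode(historical_states):
        """Map each motion category to its empirical frequency (no class labels needed)."""
        counts = Counter(historical_states)
        total = sum(counts.values())
        return {state: count / total for state, count in counts.items()}

    history = ["walking", "walking", "jogging", "standing", "walking", "riding"]
    encoding = frequency_encode(history)
    second_motion_state = encoding["jogging"]   # a continuous numerical feature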
204. And processing the first variability characteristics and the second motion state represented by the numerical characteristics through the classification model to obtain a first emotion state corresponding to one or more time points in the detection time period output by the classification model.
The classification model may be an SVM classifier, a Bayesian classifier, or the like, and is not specifically limited. The classification model may be used for emotion recognition.
In some embodiments, the second motion state characterized by a numerical type is represented by a plurality of first values in a gaussian distribution; carrying out standardization processing on a plurality of first values and/or first variability features included in the second motion state to obtain a third motion state and a second variability feature with consistent value ranges; the third motion state is represented by a plurality of second numerical values in a Gaussian distribution; and processing the third motion state and the second variability characteristic with consistent value ranges through the classification model.
The third motion state is the normalized second motion state; the second values are the normalized values corresponding to the first values; and the second variability feature is the normalized first variability feature. Standardizing the numerical data input into the classification model scales it proportionally into a uniform, specific interval, which speeds up the learning stage of the classification model. Optionally, in addition to standardization, normalization and regularization may also be performed on the data input into the classification model, which is not specifically limited.
In some alternative embodiments, only the second motion state may be normalized, when the second variability feature is the same as the first variability feature.
In further alternative embodiments, only the first variability feature may be normalized, when the second and third motion states are the same; when the first variability feature comprises a first set of respiratory variability features and a first set of heart rate variability features, at least one of the first set of respiratory variability features and the first set of heart rate variability features may be normalized.
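A minimal sketch of the standardization step, assuming z-score scaling with scikit-learn's StandardScaler; the synthetic feature matrix and the choice of z-scoring are illustrative, since the application leaves the exact normalization open.

    import numpy as np
    from sklearn.preprocessing import StandardScaler

    rng = np.random.default_rng(1)
    variability_features = rng.normal(60.0, 15.0, size=(100, 4))   # e.g. SDNN, RMSSD, LF, HF
    motion_feature = rng.normal(0.5, 0.2, size=(100, 1))           # numerically encoded motion state

    X = np.hstack([variability_features, motion_feature])
    X_scaled = StandardScaler().fit_transform(X)                   # zero mean, unit variance per column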
205. And processing the third motion state and the second variability characteristic with consistent value ranges through the classification model to obtain a first emotion state corresponding to one or more time points in a detection time period output by the classification model.
Traditional machine learning algorithms collect physiological data that is synchronous in time rather than asynchronous in time. In the embodiment of the application, the first motion state is acquired earlier than the vital sign data, and inputting time-asynchronous data into the classification model improves the recognition performance of the classification model. In addition, to avoid the loss of information in non-numerical data when numerical and non-numerical data are input into the classification model simultaneously, unsupervised feature transformation is applied to the non-numerical data, i.e., the first motion state is converted into the second motion state represented by a numerical feature, so that the user's motion state information can be fully utilized for emotion recognition and the accuracy of emotion recognition is improved.
Referring to fig. 3, fig. 3 is a schematic flowchart of another emotion recognition method disclosed in the embodiments of the present application.
301. The method comprises the steps of obtaining vital sign data corresponding to a user at a first moment, and obtaining a first motion state corresponding to the user at a second moment.
302. A first variability feature is extracted from the vital sign data.
303. A first emotional state is obtained from the first variability feature and the first motion state at a plurality of time points over the detection time period.
304. And calculating transition probabilities corresponding to the first emotional states corresponding to the multiple time points through a filtering model.
The first emotional states corresponding to the time points within the detection time period can be arranged into a state sequence in chronological order and input into the filtering model. The filtering model may filter the mutation emotional states in the state sequence.
The filtering model may be a statistical model, and the transition probability corresponding to each first emotional state may be calculated by the filtering model. The transition probabilities are used to indicate the degree of similarity between the respective first emotional states: the higher the transition probability of a first emotional state, the higher its similarity to the other first emotional states. Optionally, the filtering model may be a hidden Markov chain.
305. And determining the first emotional state with the corresponding transition probability lower than a first threshold value in the first emotional states corresponding to the multiple time points as a mutation emotional state through a filtering model.
The first threshold is a boundary that defines whether a first emotional state is a mutation emotional state. A transition probability lower than the first threshold indicates that the corresponding first emotional state has low similarity to the other first emotional states. Illustratively, the first emotional state at time point A is A1, the first emotional state at time point B is B1, and the first emotional state at time point C is C1. If the transition probability corresponding to B1 is lower than the first threshold, it indicates that B1 differs from both A1 and C1, while A1 is the same as C1; B1 is then a mutation emotional state. A mutation emotional state differs from the first emotional states at the time points before and after it. For example, if a person's emotional states within a time period are all "happy" but the emotional state at a time point in the middle of the period is "angry", the "angry" state is a misjudgment, because a person is very unlikely to exhibit such an abrupt emotional change.
306. And filtering the mutation emotional state in the detection time period through a filtering model so as to output a second emotional state corresponding to other time points except the time point corresponding to the mutation emotional state in the detection time period.
The second emotional state is the emotional state obtained after the mutation emotional states present in the first emotional states are filtered out. Filtering a mutation emotional state means deleting the time point corresponding to it, so that the second emotional states correspond to the time points within the detection time period other than those corresponding to mutation emotional states.
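A rough sketch of the mutation-filtering idea follows. The application allows a hidden Markov chain here; this simpler neighbour-agreement heuristic, with its threshold of 0.5, is only an illustration of flagging and dropping isolated, low-probability states, not the filtering model claimed by the application.

    def filter_mutations(states, threshold=0.5):
        """Keep (index, state) pairs whose agreement with neighbouring states is high enough."""
        kept = []
        for i, s in enumerate(states):
            neighbours = states[max(0, i - 1):i] + states[i + 1:i + 2]
            agreement = sum(n == s for n in neighbours) / len(neighbours)
            if agreement >= threshold:          # low agreement -> mutation state, dropped
                kept.append((i, s))
        return kept

    sequence = ["happy", "happy", "angry", "happy", "happy"]
    print(filter_mutations(sequence))           # the isolated "angry" point is filtered out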
According to the embodiment of the application, the mutation emotional states present in the first emotional states output by the classification model are filtered out by the filtering model, which reduces the probability of misjudging the emotion and improves the accuracy of emotion recognition.
Referring to fig. 4, fig. 4 is a schematic flowchart of another emotion recognition method disclosed in the embodiments of the present application.
401. An activity state of a user is detected.
The activity state can be divided into a resting state and a motion state. The resting state may refer to a physiological state in which the user is not exercising, such as sitting or sleeping; the motion state may refer to the user's physiological state while exercising, such as walking or running.
The activity state may be detected over a target time period, which may be, for example, 5 minutes and is not specifically limited. The activity state may be characterized by the acceleration signal collected by an acceleration sensor. Illustratively, acceleration signals are acquired from the wearable device over a 5-minute period, and the resultant acceleration is calculated. If the resultant acceleration remains above a motion threshold for 80% of the 5 minutes, the motion state is determined; otherwise, the resting state is determined.
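A minimal sketch of this rest/motion decision is shown below; the motion threshold of 11 m/s^2, the function name detect_activity_state and the synthetic accelerometer window are illustrative assumptions, not values specified by this application.

    import numpy as np

    def detect_activity_state(acc_xyz: np.ndarray, motion_threshold: float = 11.0) -> str:
        """acc_xyz: (n_samples, 3) accelerometer readings in m/s^2 over a ~5-minute window."""
        resultant = np.linalg.norm(acc_xyz, axis=1)
        fraction_above = np.mean(resultant > motion_threshold)
        return "motion" if fraction_above >= 0.8 else "rest"

    # Synthetic near-stationary window: noise around gravity on the z-axis.
    acc = np.random.default_rng(2).normal(0.0, 1.0, size=(1500, 3)) + [0.0, 0.0, 9.8]
    state = detect_activity_state(acc)          # expected: "rest"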
402. And when the active state is the motion state, re-detecting the active state of the user until the active state is the rest state.
403. When the active state is the resting state, obtaining vital sign data corresponding to the user at a first moment, and obtaining a first motion state corresponding to the user at a second moment.
The vital sign data acquired in the resting state, such as the respiration signal and heart rate signal, can more accurately reflect the user's physiological and psychological health. Ensuring that the respiration signal and heart rate signal are acquired while the user is at rest eliminates misjudgments of the user's health status caused by phenomena such as rapid breathing and an accelerated heartbeat during exercise, and allows the analysis to focus on the influence of emotion on the vital sign data in the resting state.
404. A first variability feature is extracted from the vital sign data.
405. A first emotional state is obtained from the first variability feature and the first motion state at one or more points in time during the detection period.
The embodiment of the application ensures that the respiration signal and heart rate signal are acquired while the user is in the resting state, focuses on the influence of emotion on the respiration signal and heart rate signal, and thereby improves the accuracy of emotion recognition.
Referring to fig. 5, fig. 5 is a schematic flowchart of another emotion recognition method disclosed in the embodiments of the present application. First, the user's acceleration signal within 5 minutes (min) is detected, and the resultant acceleration is calculated from the acceleration signal. The resultant acceleration is used to judge the user's state. If the user is in the motion state, the process returns to acquire the acceleration signal for the next 5 minutes, and the motion state is recognized from that acceleration signal to obtain the first motion state. If the user is at rest, a PPG signal is acquired within 5 minutes, a respiration signal is separated from the PPG signal, and the peak-to-peak interval (PPI) is extracted from the pulse rhythm in the PPG signal by template matching. Statistical features, frequency domain features and nonlinear features are extracted from the PPI, and a set of respiratory variability features is extracted from the respiration signal. The first motion state, the statistical features, the frequency domain features, the nonlinear features and the respiratory variability feature set are input into an emotional state recognition SVM classifier, and the states output by the SVM classifier are filtered through a hidden Markov chain to finally obtain the user's emotional state.
In some optional embodiments, the wearable device may exchange information with a connected or bound terminal such as a mobile phone, a tablet computer or another wearable device. For example, the user of the wearable device is a child, and the user of the terminal connected or bound to the wearable device is the child's parent or teacher. The wearable device can detect the child's emotional state in real time, so that the parent or teacher can learn the child's emotional state in time and adjust school or family activities to cope with negative emotions.
A child who is in a positive emotional state for a long time tends to develop a positive, sunny character; conversely, a child who is in a negative, depressed emotional state for a long time may develop an introverted, depressive character. Introverted and depressive characters are more likely to lead to psychological illness in children, which affects physical and mental health. The embodiment of the application can accurately identify a child's emotional state, so that parents or teachers can understand how the child's emotions develop during growth.
Referring to fig. 6, fig. 6 is a schematic structural diagram of an emotion recognition apparatus disclosed in an embodiment of the present application. The apparatus can be applied to a wearable device, which is not specifically limited. As shown in fig. 6, the emotion recognition apparatus 600 may include: an acquisition module 610, a feature module 620, and a state module 630.
The obtaining module 610 is configured to obtain vital sign data corresponding to a user at a first time, and obtain a first motion state corresponding to the user at a second time; the second time is earlier than the first time; the first motion state is determined according to the user motion information corresponding to the user at the second moment;
a feature module 620, configured to extract a first variability feature from the vital sign data;
a state module 630 for obtaining a first emotional state corresponding to one or more time points within the detection time period from the first variability feature and the first motion state.
In one embodiment, the vital sign data includes a heart rate signal and a respiration signal; feature module 620 may include: a first analysis unit and a second analysis unit;
a first analysis unit for extracting a first respiratory variability feature set from the respiratory signal by analyzing the respiratory signal; the first set of respiratory variability features comprises one or more respiratory statistical features;
the second analysis unit is used for extracting a first heart rate variability feature group from the heart rate signals by analyzing the heart rate signals; the first set of heart rate variability features comprises at least one of the following features: heart rate statistics in the time domain, frequency domain features in the frequency domain, and non-linear features in the non-linear domain.
In one embodiment, the state module 630 is further configured to process the first variability feature and the first motion state through the classification model to obtain a first emotional state corresponding to one or more time points within the detection period output by the classification model.
In one embodiment, the state module 630 is further configured to convert the first motion state into a numerical characteristic, resulting in a second motion state represented by the numerical characteristic; and processing the first variability characteristics and the second motion state represented by the numerical characteristics through the classification model to obtain a first emotion state corresponding to one or more time points in the detection time period output by the classification model.
In one embodiment, in the state module 630, the second motion state represented by the numerical characteristic is represented by a plurality of first values in a Gaussian distribution; the state module 630 is further configured to perform normalization processing on the plurality of first values and/or the first variability features included in the second motion state to obtain a third motion state and a second variability feature with consistent value ranges, the third motion state being represented by a plurality of second values in a Gaussian distribution; and to process the third motion state and the second variability feature with consistent value ranges through the classification model.
In one embodiment, the emotion recognition apparatus 600 further includes a filtering unit;
the filtering unit is used for calculating transition probabilities corresponding to the first emotion states corresponding to the multiple time points through a filtering model; the transition probabilities are used to indicate the similarity between the respective first emotional states;
determining the first emotional state with the corresponding transition probability lower than a first threshold value in the first emotional states corresponding to the multiple time points as a mutation emotional state through a filtering model;
and filtering the mutation emotional state in the detection time period through a filtering model so as to output a second emotional state corresponding to other time points except the time point corresponding to the mutation emotional state in the detection time period.
In one embodiment, the obtaining module 610 is further configured to detect an activity status of the user;
and when the active state is the resting state, acquiring the vital sign data corresponding to the user at the first moment.
In one embodiment, the obtaining module 610 is further configured to, when the activity state is the motion state, re-detect the activity state of the user until the activity state is the resting state.
Referring to fig. 7, fig. 7 is a schematic structural diagram of a wearable device disclosed in the embodiment of the present application. As shown in fig. 7, the wearable device 700 may include:
a memory 710 storing executable program code;
a processor 720 coupled to the memory 710;
wherein, the processor 720 calls the executable program code stored in the memory 710 to execute any emotion recognition method disclosed in the embodiments of the present application.
The embodiment of the application discloses a computer readable storage medium which stores a computer program, wherein when the computer program is executed by a processor, the processor is enabled to realize any emotion recognition method disclosed in the embodiment of the application.
It should be appreciated that reference throughout this specification to "one embodiment" or "an embodiment" means that a particular feature, structure or characteristic described in connection with the embodiment is included in at least one embodiment of the present application. Thus, the appearances of the phrases "in one embodiment" or "in an embodiment" in various places throughout this specification are not necessarily all referring to the same embodiment. Furthermore, the particular features, structures, or characteristics may be combined in any suitable manner in one or more embodiments. Those skilled in the art should also appreciate that the embodiments described in this specification are all alternative embodiments and that the acts and modules involved are not necessarily required for this application.
In various embodiments of the present application, it should be understood that the size of the serial number of each process described above does not mean that the execution sequence is necessarily sequential, and the execution sequence of each process should be determined by its function and inherent logic, and should not constitute any limitation on the implementation process of the embodiments of the present application.
The units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the units can be selected according to actual needs to achieve the purpose of the solution of the embodiment.
In addition, functional units in the embodiments of the present application may be integrated into one processing unit, or each unit may exist alone physically, or two or more units are integrated into one unit. The integrated unit may be implemented in the form of hardware, or may also be implemented in the form of a software functional unit.
The integrated units, if implemented as software functional units and sold or used as a stand-alone product, may be stored in a computer-accessible memory. Based on such understanding, the technical solution of the present application, in essence, or the part of it that contributes over the prior art, or all or part of the technical solution, may be embodied in the form of a software product stored in a memory, including several requests for causing a computer device (which may be a personal computer, a server, a network device, or the like, and may specifically be a processor in the computer device) to execute some or all of the steps of the above-described methods of the embodiments of the present application.
It will be understood by those skilled in the art that all or part of the steps in the methods of the above embodiments may be implemented by program instructions associated with hardware, and the program may be stored in a computer-readable storage medium. The storage medium includes Read-Only Memory (ROM), Random Access Memory (RAM), Programmable Read-Only Memory (PROM), Erasable Programmable Read-Only Memory (EPROM), One-time Programmable Read-Only Memory (OTPROM), Electrically Erasable Programmable Read-Only Memory (EEPROM), Compact Disc Read-Only Memory (CD-ROM) or other optical disc memory, magnetic disk memory, magnetic tape memory, or any other computer-readable medium that can be used to carry or store data.
The emotion recognition method, the emotion recognition apparatus, the wearable device, and the storage medium disclosed in the embodiments of the present application are described in detail above, and specific examples are applied herein to explain the principles and implementations of the present application, and the descriptions of the above embodiments are only used to help understand the method and the core ideas of the present application. Meanwhile, for a person skilled in the art, according to the idea of the present application, there may be variations in the specific embodiments and the application scope, and in summary, the content of the present specification should not be construed as a limitation to the present application.

Claims (11)

1. An emotion recognition method is applied to a wearable device, and comprises the following steps:
acquiring vital sign data corresponding to a user at a first moment, and acquiring a first motion state corresponding to the user at a second moment; the second time is earlier than the first time; the first motion state is determined according to user motion information corresponding to the user at the second moment;
extracting a first variability feature from the vital sign data;
obtaining a first emotional state corresponding to one or more time points during a detection time period from the first variability feature and the first motion state.
2. The method of claim 1, wherein the vital sign data includes a heart rate signal and a respiration signal; said extracting a first variability feature from said vital sign data, comprising:
extracting a first set of respiratory variability features from the respiratory signal by interpreting the respiratory signal; the first set of respiratory variability features comprises one or more respiratory statistical features;
extracting a first set of heart rate variability features from the heart rate signal by parsing the heart rate signal; the first set of heart rate variability features comprises at least one of: heart rate statistics in the time domain, frequency domain features in the frequency domain, and non-linear features in the non-linear domain.
3. The method according to claim 1, wherein said obtaining from said first variability feature and said first motion state a first emotional state at one or more points in time during a detection time period comprises:
processing the first variability feature and the first motion state through a classification model to obtain a first emotional state output by the classification model corresponding to one or more time points within the detection time period.
4. The method of claim 3, wherein prior to said processing of said first variability feature and said first motion state by a classification model, said method further comprises:
converting the first motion state into a numerical characteristic to obtain a second motion state represented by the numerical characteristic;
said processing said first variability feature and said first motion state through a classification model to obtain a first emotional state output by said classification model corresponding to one or more time points within said detection period, comprising:
processing the first variability feature and the second motion state represented by the numerical feature through a classification model to obtain a first emotional state corresponding to one or more time points within the detection time period output by the classification model.
5. The method according to claim 4, characterized in that the numerically characterized second motion state is represented by a plurality of first values in a Gaussian distribution; said processing of said first variability feature and said numerically characterized second motion state through a classification model comprising:
normalizing the plurality of first values and/or the first variability features included in the second motion state to obtain a third motion state and a second variability feature with consistent value ranges; the third motion state is represented by a plurality of second numerical values in a gaussian distribution;
and processing the third motion state and the second variability characteristic with consistent value ranges through a classification model.
6. The method of claim 1, wherein after said obtaining a first emotional state corresponding to a plurality of time points within a detection time period, the method further comprises:
calculating transition probabilities corresponding to the first emotion states corresponding to the multiple time points through a filtering model; the transition probabilities are used to indicate the similarity between the respective first emotional states;
determining the first emotional state with the transition probability lower than a first threshold value as a mutation emotional state in the first emotional states corresponding to the multiple time points through the filtering model;
and filtering the mutation emotional state in the detection time period through the filtering model so as to output a second emotional state corresponding to other time points except the time point corresponding to the mutation emotional state in the detection time period.
7. The method according to claim 1, wherein the obtaining vital sign data corresponding to the user at a first time comprises:
detecting an activity state of a user;
and when the active state is a resting state, acquiring vital sign data corresponding to the user at a first moment.
8. The method of claim 7, wherein after the detecting the activity state of the user, the method further comprises:
and when the active state is the motion state, re-detecting the active state of the user until the active state is the rest state.
9. An emotion recognition apparatus, comprising:
the acquisition module is used for acquiring vital sign data corresponding to a user at a first moment and acquiring a first motion state corresponding to the user at a second moment; the second time is earlier than the first time; the first motion state is determined according to user motion information corresponding to the user at the second moment;
the feature module is used for extracting a first variability feature from the vital sign data;
a state module to obtain a first emotional state corresponding to one or more time points within a detection time period from the first variability feature and the first motion state.
10. A wearable device comprising a memory and a processor, the memory having stored therein a computer program that, when executed by the processor, causes the processor to implement the method of any of claims 1 to 8.
11. A computer-readable storage medium, on which a computer program is stored, which, when being executed by a processor, carries out the method according to any one of claims 1 to 8.
CN202210463142.6A 2022-04-28 2022-04-28 Emotion recognition method and device, wearable device and storage medium Pending CN115120236A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210463142.6A CN115120236A (en) 2022-04-28 2022-04-28 Emotion recognition method and device, wearable device and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202210463142.6A CN115120236A (en) 2022-04-28 2022-04-28 Emotion recognition method and device, wearable device and storage medium

Publications (1)

Publication Number Publication Date
CN115120236A true CN115120236A (en) 2022-09-30

Family

ID=83376078

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210463142.6A Pending CN115120236A (en) 2022-04-28 2022-04-28 Emotion recognition method and device, wearable device and storage medium

Country Status (1)

Country Link
CN (1) CN115120236A (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116725538A (en) * 2023-08-11 2023-09-12 深圳市昊岳科技有限公司 Bracelet emotion recognition method based on deep learning
CN116725538B (en) * 2023-08-11 2023-10-27 深圳市昊岳科技有限公司 Bracelet emotion recognition method based on deep learning

Similar Documents

Publication Publication Date Title
WO2020119245A1 (en) Wearable bracelet-based emotion recognition system and method
Pollreisz et al. A simple algorithm for emotion recognition, using physiological signals of a smart watch
CN105877766B (en) A kind of state of mind detection system and method based on the fusion of more physiological signals
WO2017179696A1 (en) Biological information analysis device and system, and program
EP2698112B1 (en) Real-time stress determination of an individual
CN109480868B (en) Intelligent infant monitoring system
CN107106085A (en) Apparatus and method for sleep monitor
Park et al. Individual emotion classification between happiness and sadness by analyzing photoplethysmography and skin temperature
CN108888277A (en) Psychological test method, system and terminal device
KR102015097B1 (en) Apparatus and computer readable recorder medium stored program for recognizing emotion using biometric data
CN112057066A (en) Heart rate detection method, wearable device and computer storage medium
CN115670419A (en) Data processing method and system based on motion information predicted by smart watch
CN115120236A (en) Emotion recognition method and device, wearable device and storage medium
Mekruksavanich et al. A deep residual-based model on multi-branch aggregation for stress and emotion recognition through biosignals
CN115089179A (en) Psychological emotion insights analysis method and system
CN107970027A (en) A kind of radial artery detection and human body constitution identifying system and method
Sakri et al. A multi-user multi-task model for stress monitoring from wearable sensors
Mo et al. Human daily activity recognition with wearable sensors based on incremental learning
Mallol-Ragolta et al. Outer product-based fusion of smartwatch sensor data for human activity recognition
CN112328072A (en) Multi-mode character input system and method based on electroencephalogram and electrooculogram
CN105147249B (en) The wearable or implantable devices evaluation system of one kind and method
CN111436940A (en) Gait health assessment method and device
Majumder et al. A smart cyber-human system to support mental well-being through social engagement
CN115770028A (en) Blood pressure detection method, system, device and storage medium
CN111315296B (en) Method and device for determining pressure value

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination