CN113143274A - Emotion early warning method based on camera - Google Patents

Emotion early warning method based on camera Download PDF

Info

Publication number
CN113143274A
Authority
CN
China
Prior art keywords
data
emotion
emotional
sub
basic data
Prior art date
Legal status
Granted
Application number
CN202110352232.3A
Other languages
Chinese (zh)
Other versions
CN113143274B (en)
Inventor
李风华
刘正奎
Current Assignee
Institute of Psychology of CAS
Original Assignee
Beijing Jingzhan Information Technology Co ltd
Priority date
Filing date
Publication date
Application filed by Beijing Jingzhan Information Technology Co ltd filed Critical Beijing Jingzhan Information Technology Co ltd
Priority to CN202110352232.3A
Publication of CN113143274A
Application granted
Publication of CN113143274B
Legal status: Active
Anticipated expiration

Classifications

    • A HUMAN NECESSITIES
    • A61 MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B 5/00 Measuring for diagnostic purposes; Identification of persons
    • A61B 5/16 Devices for psychotechnics; Testing reaction times; Devices for evaluating the psychological state
    • A61B 5/165 Evaluating the state of mind, e.g. depression, anxiety
    • A61B 5/0059 Measuring for diagnostic purposes; Identification of persons using light, e.g. diagnosis by transillumination, diascopy, fluorescence
    • A61B 5/0077 Devices for viewing the surface of the body, e.g. camera, magnifying lens
    • A61B 5/74 Details of notification to user or communication with user or patient; user input means
    • A61B 5/746 Alarms related to a physiological condition, e.g. details of setting alarm thresholds or avoiding false alarms
    • Y GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02 TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02D CLIMATE CHANGE MITIGATION TECHNOLOGIES IN INFORMATION AND COMMUNICATION TECHNOLOGIES [ICT], I.E. INFORMATION AND COMMUNICATION TECHNOLOGIES AIMING AT THE REDUCTION OF THEIR OWN ENERGY USE
    • Y02D 10/00 Energy efficient computing, e.g. low power processors, power management or thermal management

Abstract

The invention discloses a camera-based emotion early warning method. A camera captures photographs of the human body in real time; a face detection algorithm finds one or more faces in each image; a hair-free area of the face is located using relative geometric relations; the brightness change of the image in that area is measured to obtain a continuous array, from which the heart beat intervals of the monitored person are derived. Combining these intervals with the monitored person's facial expression, an emotion judgment model yields the person's emotional condition, on the basis of which the method decides whether to raise an alarm.

Description

Emotion early warning method based on camera
Technical Field
The invention relates to the field of emotion monitoring and early warning, in particular to an emotion early warning method based on a camera.
Background
In people's daily work and life, emotional state has a clear influence on work performance, particularly for those who work continuously for long periods or whose work demands sustained attention. For example, how well students follow a lesson in class is affected by their emotional state; if a teacher can guide students appropriately according to their emotional states, teaching efficiency will certainly improve.
For particularly important posts, such as workers in the control room of a nuclear power plant, attention must remain highly focused while the operating state of the plant is judged in real time. If the emotional state of such workers could be known in real time, and a shift change prompted whenever that state becomes unstable or the worker is obviously fatigued, the safety of the plant would be further strengthened and safety hazards caused by human oversight could be eliminated at the source.
There are also special posts, such as bus drivers, whose occupants can pose a potential threat to public safety if their emotions reach an extreme state. Knowing a driver's emotional state in time, and issuing a reminder or alarm when necessary, would further improve public safety.
However, there is at present no technical solution that effectively determines the emotion of an ordinary worker. Although facial expression can be determined by face recognition methods, expression alone does not fully reflect a person's emotional state, so the research level in this respect needs to be further improved.
In addition, the data measured by the prior art cover only a single dimension, the degree of tension, also called the degree of "frustration" in some studies; in psychology this dimension is collectively referred to as emotional arousal (the dimension ranging from calm to excited). Emotion, however, is not limited to this dimension: emotional valence (the positive-negative quality of an emotion) is also an important index, and without a judgment of positivity or negativity the assessment of emotion is incomplete. A highly aroused state may be either rage (negative) or mania (positive), and a very calm state may be either despair (negative) or peace (positive).
For these reasons, the inventors have intensively studied existing emotion recognition and judgment methods, with the aim of designing a camera-based emotion early warning method that solves the above problems.
Disclosure of Invention
In order to overcome these problems, the inventors carried out a keen study and designed a camera-based emotion early warning method. In this method, photographs of the human body are obtained in real time through a camera; a face detection algorithm finds one or more faces in each image; a hair-free area of the face is located using relative geometric relations; the brightness change of the image in that area is measured to obtain a continuous array, from which the heart beat intervals of the monitored person are derived. Combined with the monitored person's facial expression, an emotion judgment model gives the person's emotional condition, from which it is judged whether an alarm is needed. The present invention has thereby been completed.
Specifically, the present invention aims to provide the following: a camera-based emotion early warning method comprises the following steps:
step 1, shooting in real time through a camera to obtain an image photo containing the face of a monitored person;
step 2, distinguishing the identity information of each monitored person in the image picture through face recognition, and setting a corresponding independent storage unit for each monitored person;
step 3, obtaining the heart beat interval and the facial expression of the monitored person by reading the image picture, and storing the heart beat interval and the facial expression in an independent storage unit;
step 4, inputting the heart beating interval and the facial expression into an emotion judgment model in real time to judge the emotion condition of the monitored person;
and 5, sending alarm information when the emotion of the monitored person is in an early warning state.
Wherein the step 3 comprises the sub-steps of,
a substep a, screening the image photos obtained in the step 1, and deleting the image photos which cannot be used for reading the heart beating interval;
in the rest image photos, searching a hair-free part of the face as a detection area by using a relative geometric relation;
measuring and calculating the image brightness change of the area in the continuous image photos to obtain a continuous array,
preferably, the array is a curve for describing the heartbeat activity of a human, the average brightness represents diastole, namely an electrocardio wave trough, when the average brightness is higher, the average brightness represents systole, namely an electrocardio wave crest, when the average brightness is lower; the time interval between two peaks is the heart beat interval.
Wherein the emotion judgment model is obtained by the following substeps:
substep 1, collecting physiological data and facial expressions by a collecting device, the physiological data including heart beat intervals, and converting the physiological data into activity indicators of sympathetic nerves and parasympathetic nerves;
setting an emotion awakening tag and an emotion valence tag, recording specific emotion arousing degree in the emotion awakening tag, recording specific emotion valence in the emotion valence tag, and combining the comprehensive nerve activity index data, the facial expression data and the emotion tag into basic data;
substep 3, adjusting the format of the basic data to obtain basic data with a uniform format, and judging whether the basic data with the uniform format meets the requirement;
substep 4, selecting available data from the basic data meeting the requirement and in a unified format;
and a substep 5 of obtaining an emotion judgment model according to the available data in the substep 4.
Wherein each integrated neural activity indicator includes one or more of the following data: the activity index of the sympathetic nerve, the activity index of the parasympathetic nerve, the quotient of the activity index of the sympathetic nerve and the activity index of the parasympathetic nerve, the sum of the activity index of the sympathetic nerve and the activity index of the parasympathetic nerve, and the difference between the activity index of the sympathetic nerve and the activity index of the parasympathetic nerve.
Wherein, the sub-step of judging whether the basic data in the unified format meets the requirement in the sub-step 3 comprises the following sub-steps:
a sub-substep 1, randomly dividing all basic data in the unified format into a learning group and a test group according to a predetermined proportion;
a sub-substep 2, training the model with the data in the learning group, verifying the model one by one with each piece of data in the test group, and recording the verification result of each piece of test-group data;
a sub-substep 3, repeating sub-substep 1 and sub-substep 2, wherein basic data in the unified format that has already been assigned to the test group is not assigned to the test group again, ensuring that each piece of uniform-format basic data is used once in the test group to verify a model trained on the learning-group data, until verification results are obtained for all of the uniform-format basic data;
and a sub-substep 4, calculating the total passing rate over the verification results of all the uniform-format basic data, wherein when the total passing rate is greater than 85% the uniform-format basic data meets the requirement; otherwise the uniform-format basic data is deleted and sub-steps 1 and 2 are repeated.
Wherein, the obtaining of the available data in the substep 4 comprises the following substeps:
a sub-step a, repeating sub-substeps 1-3 multiple times, each repetition of sub-substep 1 producing a test group composed of different uniform-format basic data, so that each piece of uniform-format basic data corresponds to several verification results; the average passing rate corresponding to each piece of uniform-format basic data is then calculated;
a sub-step b, finding and hiding the 1 piece of uniform-format basic data with the lowest average passing rate, executing sub-substeps 1-4 again with the remaining uniform-format basic data, and observing whether the total passing rate rises compared with before the data was hidden; if the total passing rate rises, the hidden uniform-format basic data is deleted and sub-step c is executed; if the total passing rate does not rise, the hidden data is restored, the uniform-format basic data with the second-lowest average passing rate is selected and hidden, and the above process is repeated until the total passing rate rises;
and a sub-step c, repeating sub-step a and sub-step b on the basis of the remaining uniform-format basic data after the total passing rate has risen, and continuing to repeat them on the basis of the currently remaining uniform-format basic data after each rise, until the total passing rate reaches more than 90% or the deleted uniform-format basic data amounts to 30% of the uniform-format basic data; the remaining uniform-format basic data is the available data.
In substep 5, the emotion judgment model comprises an emotion awakening prediction model and an emotion valence prediction model;
in the process of obtaining the emotion judgment model, the comprehensive nerve activity index data, the facial expression data and the emotion awakening data in each available data are spliced into a data segment which is used as a learning material, and the emotion awakening prediction model is obtained through machine learning;
and splicing the comprehensive nerve activity index data, the facial expression data and the emotion valence data in each available data into a data segment, using the data segment as a learning material, and obtaining an emotion valence prediction model through machine learning.
Wherein, the emotional conditions in the step 4 comprise emotional excitement degree and emotional valence.
Wherein, in step 5, the emotional condition of the monitored person is compared with the average emotional agitation value and the average emotional valence value in the calm state;
when the monitored person is engaged in important, high-difficulty or high-intensity work, alarm information is sent
when the emotional agitation degree is more than 1.5 standard deviations above the average emotional agitation value in the calm state,
or the emotional agitation degree is more than 1 standard deviation below the average emotional agitation value in the calm state,
or the emotional valence is more than 1.5 standard deviations below the average emotional valence value in the calm state;
when the monitored person is engaged in ordinary work, alarm information is likewise sent
when the emotional agitation degree is more than 1.5 standard deviations above the average emotional agitation value in the calm state,
or the emotional agitation degree is more than 1 standard deviation below the average emotional agitation value in the calm state,
or the emotional valence is more than 1.5 standard deviations below the average emotional valence value in the calm state;
and when the emotional agitation degree of the monitored person is more than 2 standard deviations above the average emotional agitation value in the calm state and the emotional valence is more than 2 standard deviations below the average emotional valence value in the calm state, the monitored person may pose a potential threat to public safety and alarm information is sent.
The invention has the advantages that:
(1) according to the emotion early warning method based on the camera, provided by the invention, different alarm conditions can be set according to different working properties of a monitored person, so that the application range of the method is expanded;
(2) according to the camera-based emotion early warning method provided by the invention, an emotion judgment model trained on a large number of samples is provided, so that the emotional state of a monitored person can be calculated accurately and in a timely manner from image information;
(3) according to the camera-based emotion early warning method provided by the invention, a two-dimensional emotion evaluation scheme is adopted: not only is emotional arousal measured, but emotional valence is also estimated. Compared with 2-class or 4-class emotion evaluation techniques, this technique can output 100 emotion evaluations of different intensities and qualities, giving results that are more realistic, closer to common understanding and easier for people to interpret, and therefore more usable in actual production and daily life;
(4) according to the camera-based emotion early warning method provided by the invention, it can be judged in real time whether the monitored person is suited to taking part in important, high-difficulty and high-intensity work, and also whether the monitored person is undergoing intense psychological activity and poses a threat to public safety.
Drawings
Fig. 1 illustrates an overall logic diagram of a camera-based emotion warning method according to a preferred embodiment of the present invention;
FIG. 2 shows a change of emotional excitement degree of a monitored person in a day, which is obtained by a camera-based emotion early warning method in an embodiment of the invention;
FIG. 3 shows the emotional valence change of a monitored person in one day, which is obtained by a camera-based emotional early warning method in the embodiment of the invention;
fig. 4 shows an evaluation display interface of a monitored person for his own emotional condition during one day according to an embodiment of the invention.
Detailed Description
The invention is explained in more detail below with reference to the figures and examples. The features and advantages of the present invention will become more apparent from the description.
The word "exemplary" is used exclusively herein to mean "serving as an example, embodiment, or illustration. Any embodiment described herein as "exemplary" is not necessarily to be construed as preferred or advantageous over other embodiments. While the various aspects of the embodiments are presented in drawings, the drawings are not necessarily drawn to scale unless specifically indicated.
The camera-based emotion early warning method according to the present invention, as shown in Fig. 1, comprises the following steps:
step 1, shooting in real time through a camera to obtain an image photo containing the face of a monitored person;
step 2, distinguishing the identity information of each monitored person in the image picture, and setting a corresponding independent storage unit for each monitored person;
step 3, obtaining the heart beat interval and the facial expression of the monitored person by reading the image picture, and storing the heart beat interval and the facial expression in an independent storage unit;
step 4, inputting the heart beating interval and the facial expression into an emotion judgment model in real time to judge the emotion condition of the monitored person;
and 5, sending alarm information when the emotion of the monitored person is in an early warning state.
In a preferred embodiment, in step 1, the camera may be disposed near the working location of the monitored person: for an operator of a nuclear power plant control room, near the display screen; for students, near the blackboard; for a bus driver, near the front window. That is, the camera is preferably placed facing the monitored person so as to capture the face as fully as possible. The focal length of the camera is adjustable, and several monitored persons may be photographed simultaneously.
In a preferred embodiment, in step 2, the identity information of each monitored person in the image picture is distinguished by a face recognition method, and a corresponding independent storage unit is set for each monitored person; the face recognition method can be processed by an open-source openface face recognition tool.
In a preferred embodiment, said step 3 comprises the sub-steps of,
a substep a, screening the image photos obtained in the step 1, and deleting the image photos which cannot be used for reading the heart beating interval;
specifically, the images to be deleted include: the method comprises the following steps that firstly, in a 1000 millisecond time window, the change of the brightness mean value of a face image is larger than 1 (the brightness range is 0-255); secondly, images of the human face which are not captured by the openface; and thirdly, the profile displacement of the human face exceeds 1 percent of the longitudinal resolution of the picture within 1000 milliseconds (if the longitudinal resolution is 1080 pixels, the picture in the period is deleted in principle if the profile displacement exceeds 10 pixels).
In the rest image photos, searching a hair-free part of the face as a detection area by using a relative geometric relation; namely, the forehead and the cheek are used as detection regions.
Measuring and calculating the image brightness change of the area in the consecutive image photos to obtain a continuous array,
preferably, the array is a curve describing human heartbeat activity: a higher average brightness corresponds to diastole, i.e. an electrocardiographic trough, and a lower average brightness corresponds to systole, i.e. an electrocardiographic crest; the time interval between two crests is the heart beat interval.
The original signal is filtered with a Butterworth filter, retaining the 0.5-2 Hz band, and the peaks and troughs are located in the filtered signal, where a peak is a point whose value is larger than those on either side within a 500 millisecond signal window, and a trough is a point whose value is smaller than those on either side within a 500 millisecond signal window;
preferably, generating a set of data in this application requires continuous acquisition of frames for at least 7 seconds at a sampling frequency of no less than 25 Hz.
Preferably, the continuous image data acquired by the camera may be color or black and white. Black-and-white images can be used directly; for color images, the green channel of the red, green and blue channels is preferably used. Alternatively, through a color-coordinate transformation, the red, green and blue channels can be converted to the HSV or HLS color space, with the value of the V channel (the brightness channel) used as the input index in the HSV color space and the value of the L channel (also called the lightness channel) used as the input index in the HLS color space. The temporal frequency of image acquisition is at least 25 Hz, i.e. at least 25 consecutive frames are acquired per second.
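The brightness-to-heartbeat conversion described above can be sketched as follows, assuming NumPy/SciPy, OpenCV-style BGR frame arrays and a forehead/cheek bounding box supplied by the face detector. The box layout, filter order and function names are assumptions; only the 0.5-2 Hz band, the roughly 500 millisecond peak window and the at-least-25 Hz frame rate come from the description above.

```python
import numpy as np
from scipy.signal import butter, filtfilt, find_peaks

def heart_beat_intervals(frames, roi, fps=25):
    """frames: list of H x W x 3 BGR images; roi: (y0, y1, x0, x1) forehead/cheek box."""
    y0, y1, x0, x1 = roi
    # Mean brightness of the hair-free region, green channel (index 1 in BGR).
    signal = np.array([f[y0:y1, x0:x1, 1].mean() for f in frames])

    # Butterworth band-pass keeping the 0.5-2 Hz band, as described above.
    nyq = fps / 2.0
    b, a = butter(3, [0.5 / nyq, 2.0 / nyq], btype="band")
    filtered = filtfilt(b, a, signal)

    # A peak is the largest value within a roughly 500 ms neighbourhood.
    peaks, _ = find_peaks(filtered, distance=int(0.5 * fps))
    rri = np.diff(peaks) / fps * 1000.0   # beat-to-beat intervals in milliseconds
    return rri
```

At 25 frames per second, at least about 7 seconds of continuous frames are needed (as stated above) before the band-pass filter and peak search produce a stable interval estimate.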
In a preferred embodiment, the emotion judgment model is obtained by the following substeps:
substep 1, collecting physiological data and facial expressions by a collecting device, the physiological data including heart beat intervals, and converting the physiological data into activity indicators of sympathetic nerves and parasympathetic nerves; the cardiac beat intervals are also referred to as R-R intervals.
Substep 2, setting an emotion awakening tag and an emotion valence tag, selecting a specific emotion arousing degree in the emotion awakening tag, selecting a specific emotion valence in the emotion valence tag, and combining the comprehensive nerve activity index data, the facial expression data and the emotion tag into basic data;
substep 3, adjusting the format of the basic data to obtain basic data with a uniform format, and judging whether the basic data with the uniform format meets the requirements or not;
substep 4, selecting available data from the basic data meeting the requirement and in a unified format;
and a substep 5 of obtaining an emotion judgment model according to the available data in the step 4.
In a preferred embodiment, the collection device comprises a wearable bracelet, a smart watch, and a camera. Preferably, the collecting device may further comprise a massage chair, a treadmill or the like. The physiological data are collected by the collecting device and the label data are recorded, all the data can be transmitted to a remote server in real time for statistical storage, and a storage chip can be integrated in the collecting device for real-time storage and calculation processing.
Preferably, in substep 1, two sets of data, namely an activity index of the sympathetic nerves and an activity index of the parasympathetic nerves, are output for each collected heart beat interval after conversion, so that the scheme in this application has finer time granularity.
In substep 1, the two nerves jointly influence the heart beat, and their periodic modulation of the heart beat ultimately constitutes heart rate variability.
Preferably, the emotional awakening tag is provided with a plurality of numerical values capable of representing emotional awakening degrees, the corresponding numerical values can be selected according to actual conditions, preferably, the emotional awakening tag is provided with 5-10 numerical value gears, and the closest numerical value gear is selected according to actual conditions of participants. The emotional arousal label is characterized by emotional arousal degree, the lowest numerical value represents complete calmness, and the larger numerical value represents the more violent emotion.
The emotion valence tag is provided with a plurality of numerical values capable of representing emotion valence, the corresponding numerical values can be selected according to actual conditions, preferably, the emotion valence tag is provided with 2-10 numerical value gears, and the closest numerical value gear is selected according to actual conditions of participants. The emotion valence labels indicate the positive and negative degrees of emotion, the lowest value represents the most negative, and the larger value represents the more positive emotion. The data formats in the two emotion valence tags with the same numerical value gear are uniform, and the data formats in the two emotion awakening tags with the same numerical value gear are uniform.
Preferably, the emotional awakening tag adopts the normalized emotional arousal score as the original tag score;
preferably, the emotion valence tag adopts the PANAS standard score as the original tag score, where positive affect has a mean of 29.7 and a standard deviation of 7.9, and negative affect has a mean of 14.8 and a standard deviation of 5.4.
Further preferably, for both the emotional arousal tag and the emotional valence tag, the numerical range within plus or minus 1.96 standard deviations is divided into 10 parts according to the frequency of the data distribution.
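One possible reading of this binning rule is sketched below: raw tag scores are clipped to the mean plus or minus 1.96 standard deviations and then divided into 10 equal-frequency parts. The exact binning details and the function name are assumptions, since the description does not spell them out.

```python
import numpy as np

def to_gear(scores, value, n_gears=10, mean=None, sd=None):
    """Map a raw label score (e.g. a PANAS score) to one of n_gears bins.

    Bins are equal-frequency bins computed from the observed scores after
    clipping to mean +/- 1.96 SD; this is one interpretation of the rule above."""
    scores = np.asarray(scores, dtype=float)
    mean = scores.mean() if mean is None else mean
    sd = scores.std() if sd is None else sd
    lo, hi = mean - 1.96 * sd, mean + 1.96 * sd
    clipped = np.clip(scores, lo, hi)
    edges = np.quantile(clipped, np.linspace(0, 1, n_gears + 1))
    # Count how many inner bin edges lie at or below the clipped value.
    gear = np.searchsorted(edges[1:-1], np.clip(value, lo, hi), side="right") + 1
    return int(gear)   # 1 .. n_gears
```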
Preferably, in sub-step 2, the emotional tag includes an emotional arousal tag and an emotional valence tag, and the two tags may be provided separately or simultaneously in the form of coordinates or a chart. The emotion awakening tag is used for recording emotion awakening data, and the emotion valence tag is used for recording emotion valence data.
Preferably, in sub-step 2, the synthetic neural activity indicator is associated with an activity indicator of sympathetic nerves and an activity indicator of parasympathetic nerves, each synthetic neural activity indicator including one or more of the following data: an activity index of the sympathetic nerve, an activity index of the parasympathetic nerve, a quotient of the activity index of the sympathetic nerve and the activity index of the parasympathetic nerve, a sum of the activity index of the sympathetic nerve and the activity index of the parasympathetic nerve, a difference between the activity index of the sympathetic nerve and the activity index of the parasympathetic nerve, and the like.
Preferably, the frequency of acquiring the comprehensive neural activity index data is high, 60-90 or even more groups of the comprehensive neural activity index data can be provided every minute, the facial expression of the time is shot by the camera as much as possible when the comprehensive neural activity index data is acquired, the facial expression plays an auxiliary role in model processing, and if the facial expression is acquired more comprehensively, the model effect is enhanced.
The emotion label acquisition frequency is relatively low, the emotion label acquisition frequency can be once per hour, or 2-5 times per day, and when the emotion label is acquired each time, the facial expression at that time is acquired through the camera. Therefore, each emotion label data corresponds to a plurality of comprehensive nerve activity index data, and one emotion label data, one facial expression data and the plurality of comprehensive nerve activity index data corresponding to the emotion label data are combined to form one basic data. Wherein each emotion tag data comprises emotional arousal data and emotional valence data.
Preferably, the numerical value gears in the emotion valence tag and the emotion awakening tag may be the same or different, which can cause mismatches or misaligned data during data statistics. For this reason, in substep 3, adjusting the format of the basic data mainly consists of adjusting the numerical values and numerical value gears in the emotion tag data. Specifically, a standard number of numerical value gears is set; if the standard is 5 gears, the numerical value gears in the basic data are adjusted to 5, the gear value recorded in the basic data is mapped proportionally onto the 5-gear scale, and the result is rounded up when it does not divide evenly.
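The proportional gear adjustment with rounding up can be expressed, for example, as the following small helper; the function name is illustrative.

```python
import math

def rescale_gear(value, original_gears, standard_gears=5):
    """Proportionally map a tag value onto the standard number of value gears,
    rounding up when the proportion does not divide evenly, as described above.
    For example, a value of 7 on a 10-gear tag becomes ceil(7 * 5 / 10) = 4."""
    return math.ceil(value * standard_gears / original_gears)
```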
In a preferred embodiment, the step of determining whether the base data in the unified format meets the requirements in the substep 3 comprises the substeps of:
a sub-substep 1, randomly dividing all basic data in the unified format into two groups, namely a learning group and a test group, according to a predetermined proportion; preferably, the proportion may be 8:1 to 9:1, and more preferably the ratio of the amount of data in the learning group to that in the test group is 8:1;
a sub-substep 2, training the model with the data in the learning group, verifying the model one by one with each piece of data in the test group, and recording the verification result of each piece of test-group data, where preferably the verification result is either a pass or a failure; a pass means that, when the comprehensive nerve activity index data and facial expression data of a piece of uniform-format basic data in the test group are fed into the model, the emotion tag data obtained are consistent with the emotion tag data in that basic data, i.e. both the emotional agitation degree and the emotional valence agree; a failure means that, when the comprehensive nerve activity index data and facial expression data of a piece of basic data in the test group are fed into the model, the emotion tag data obtained are inconsistent with the emotion tag data in that basic data, i.e. the emotional agitation degree and/or the emotional valence disagree;
a sub-substep 3, repeating sub-substep 1 and sub-substep 2 several times, where uniform-format basic data that has already been assigned to the test group is not assigned to the test group again, ensuring that each piece of uniform-format basic data is used once in the test group to verify a model trained on learning-group data, until all uniform-format basic data have obtained corresponding verification results;
a sub-substep 4, calculating the total passing rate of the verification results of all uniform-format basic data, the total passing rate being the ratio of the number of passes among the verification results of all uniform-format basic data to the total amount of uniform-format basic data; when the total passing rate is not more than 85%, the uniform-format basic data is considered not to meet the basic requirements, all the basic data are discarded, and substep 1 and substep 2 are repeated to obtain new basic data; when the total passing rate obtained in this sub-substep is greater than 85%, the uniform-format basic data is considered to meet the requirements for use and the next processing can be carried out. A simplified sketch of this pass-rate check is given below.
In a preferred embodiment, the obtaining of the available data in substep 4 comprises the following substeps:
and a sub-step a, removing outlier data aiming at each model-parameter combination by a gradient method, and screening out a model with high ecological utility. Specifically, sub-steps 1-3 in sub-step 3 are repeated for multiple times, and each time sub-step 1 is repeated, a test set composed of different basic data in a uniform format is obtained, namely all test sets are different; preferably, the sub-substeps 1 to 3 are repeated for 8 to 10 times, so that each basic data in the unified format corresponds to a plurality of verification results, and then the average passing rate corresponding to each basic data in the unified format is respectively calculated; the average passing rate corresponding to the basic data in the unified format is the ratio of the number of passing verification in the verification results corresponding to the basic data in the unified format to the total number of the verification results corresponding to the basic data in the unified format.
A sub-step b, finding and hiding 1 piece of basic data with the lowest average passing rate and in the unified format, and when the average passing rates of multiple pieces of basic data with the unified format are consistent and lowest, hiding one piece of basic data randomly, wherein the hidden data does not participate in any calculation processing before being recovered; finding and utilizing the residual basic data in the unified format to execute sub-steps 1-4 again, observing whether the total passing rate is increased compared with that before hiding the data, if the total passing rate is increased, deleting the hidden basic data in the unified format, and executing sub-step c; if the total passing rate is not improved, recovering the hidden data, and selecting and hiding basic data in the unified format with the second lowest average passing rate, wherein if the average passing rates of a plurality of basic data in the unified format are the same and the lowest, the basic data in the unified format with the lowest hit rate can be selected; repeating the processes until the total passing rate is increased;
a sub-step c, repeating the sub-step a and the sub-step b on the basis of the remaining basic data in the unified format after the total passing rate is increased, and continuously repeating the sub-step a and the sub-step b on the basis of the currently remaining basic data in the unified format after the total passing rate is increased until the total passing rate reaches more than 90%, preferably more than 92%; or until the deleted basic data with the uniform format reaches 30% of the total basic data with the uniform format, the remaining basic data with the uniform format is the available data.
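In outline, assuming the average passing rates have already been computed as in sub-step a, the loop might look like this; the helper names, tie-breaking and the exact schedule for recomputing average passing rates are simplifications of the description above.

```python
def select_available_data(records, avg_pass_rates, total_rate_fn,
                          target=0.90, max_removed_frac=0.30):
    """avg_pass_rates: dict record id -> average passing rate (sub-step a).
    total_rate_fn(records) recomputes the total passing rate (sub-substeps 1-4).
    Simplified sketch of sub-steps b and c."""
    data = list(records)
    removed_limit = int(max_removed_frac * len(records))
    removed = 0
    current = total_rate_fn(data)
    while current < target and removed < removed_limit:
        # Try hiding candidates from the worst average passing rate upward.
        for rec in sorted(data, key=lambda r: avg_pass_rates[r["id"]]):
            trial = [r for r in data if r is not rec]
            rate = total_rate_fn(trial)
            if rate > current:          # improvement: delete the hidden record
                data, current = trial, rate
                removed += 1
                break
        else:
            break                       # no single removal improves the rate
    return data                         # the remaining records are the available data
```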
Preferably, the model in sub-substep 2 may be most supervised-learning models, and the training process may combine the judgments of several supervised models; the specific training methods include, but are not limited to, linear regression, support vector machines, gradient descent, naive Bayes classification, decision tree classification, AdaBoost, XGBoost and multilayer neural networks. Preferably, the average of the 2 closest results among the outputs of a 3-4 layer multilayer neural network, a C4.5 decision tree and XGBoost is used as the output value of each training run; that is, the combination of the 3-4 layer multilayer neural network, the C4.5 decision tree and XGBoost constitutes the most preferable model, i.e. the model of high ecological utility. Preferably, in this application the neural network is a one-dimensional convolutional neural network.
In substep 5, in the process of obtaining the emotion judgment model, the comprehensive nerve activity index data, the facial expression data and the emotion awakening data in each piece of available data are spliced into a data segment that serves as learning material, and the emotion awakening prediction model is obtained by machine learning;
likewise, the comprehensive nerve activity index data, the facial expression data and the emotion valence data in each piece of available data are spliced into a data segment that serves as learning material, and the emotion valence prediction model is obtained by machine learning; the emotion judgment model comprises the emotion awakening prediction model and the emotion valence prediction model.
Preferably, in substep 5, during the learning of the emotion awakening prediction model and the emotion valence prediction model, a neural network with a 3-4 layer structure, a C4.5 decision tree and an XGBoost model are each built from the comprehensive neural activity indices, the facial expression data and the tag data, yielding a multilayer neural network model, a decision tree model and an XGBoost calculation module model; the combination of these three models is used as the emotion judgment model, whose output is the average of the two closest values among the outputs of the three models. For example, if for a set of data the three models give outputs of 8, 20 and 7, the outputs 7 and 8 are the closest pair, so the final output is the average of 7 and 8, rounded down, i.e. 7. A sketch of this output rule is given below.
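Expressed in Python, under the assumption that the three model outputs are plain numbers:

```python
import math
from itertools import combinations

def combine_outputs(out_a, out_b, out_c):
    """Return the floor of the mean of the two closest of three model outputs.

    For outputs 8, 20 and 7, the closest pair is (7, 8) and the result is
    floor((7 + 8) / 2) = 7, matching the example above."""
    x, y = min(combinations((out_a, out_b, out_c), 2), key=lambda p: abs(p[0] - p[1]))
    return math.floor((x + y) / 2)
```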
In a preferred embodiment, in substeps 1-5, 1000 participants of various ages are tracked for 2 weeks to 2 months to obtain tracking data. The participants' physiological data come from wearable devices such as smart watches and from scanning sensors, the facial expression data come from cameras, and the scoring data come from the participants' daily self-evaluations. The physiological data are tracked continuously around the clock by acquiring 90 seconds of data every 10 minutes. For the scoring data of the emotional arousal tag and the emotional valence tag, the participants are required to rate their degree of agitation and their emotional valence at least 3 times a day, and a facial expression photograph is taken by the camera each time the tags are filled in.
In a preferred embodiment, in step 4, the emotion judgment model obtained as described above comprises an emotional arousal prediction model and an emotional valence prediction model. The heart beat intervals and facial expressions in the storage unit are input into these two models to obtain the corresponding emotional arousal and emotional valence.
Specifically, the heart beat interval RRI is first converted into the sympathetic and parasympathetic outputs of the comprehensive neural activity indicator:
Laguerre function recursion is used, with the most recent RRI as the dependent variable and 8 Laguerre recursive decomposition terms X as the independent variables; each decomposition term is composed of an unknown coefficient G, an inferable coefficient phi and RRI values, and the overall estimation expression is given by formula (1):

RR_{F(t)} = \sum_{j=0}^{S} g(j,t) \sum_{n} \phi_j(n) \, RR_{F(t)-n}    (1)

where S is the upper limit of j, the order of the Laguerre polynomial, which determines how many past RRIs are used to fit the expression (the higher the order, the more accurate the result; preferably 9 are used); j is the order of the orthogonal Laguerre discrete time function; g(j, t) is the coefficient matrix obtained by combining the j-order Laguerre polynomials with the RRI intervals in the time range t, the coefficients in this matrix being the coefficients of each included RRI, so that several RRIs are merged into a recursive Laguerre polynomial and the most recent RRI is fitted from past RRIs to form the recursion; F(t) is the position index of a particular interval in the sequence of intervals between adjacent heart beats; n is the serial number of an RRI traced backwards from the current RRI; and RR_{F(t)-n} is any such RRI, obtained through the Laguerre polynomial recursion;
\phi_j(n) denotes the orthogonal Laguerre discrete time function of order j, obtained by the following formula (2):

\phi_j(n) = \alpha^{(n-j)/2} (1-\alpha)^{1/2} \sum_{k=0}^{j} (-1)^k \binom{n}{k} \binom{j}{k} \alpha^{j-k} (1-\alpha)^{k}, \quad n \ge 0    (2)

where \alpha is a constant taking the value 0.2;
and calculating the nearest RRI, taking 8 RRIs as the RRIs with the same or more in the reverse direction of time, and substituting the RRIs into the RRI combination to form the RRI ═ sigma (i belongs to 0-2) Xi + ∑ (i belongs to 3-8) Xi. 8 unknown coefficients G are solved by using Kalman autoregression. Substituting sigma (i belongs to 0-2) NiGi and sigma (i belongs to 3-8) NiGi respectively represent sympathetic and parasympathetic output values in the synthetic neural activity index. The matched coefficients N are constants 39, 10, -5, 28, -17, 6, 12, 6, -7, -6, -4 respectively.
Inputting the comprehensive nerve activity index and the facial expression into an emotion awakening prediction model, and inputting the comprehensive nerve activity index and the facial expression into an emotion valence prediction model, wherein the two models are respectively processed as follows:
the emotion awakening prediction model comprises a multilayer neural network model with a 3-4 layer structure, a C4.5 decision tree model and an XGboost calculation module model, after receiving the comprehensive neural activity index and the facial expression, the emotion awakening prediction model obtains values respectively output by the multilayer neural network model with the 3-4 layer structure, the C4.5 decision tree model and the XGboost calculation module model, selects 2 relatively close values from the three output values, and calculates the average value of the two values to serve as an output result of the emotion awakening model.
The emotion valence prediction model also comprises a multilayer neural network model with a 3-4 layer structure, a C4.5 decision tree model and an XGboost calculation module model, after receiving the comprehensive neural activity index and the facial expression, the emotion valence prediction model obtains values respectively output by the multilayer neural network model with the 3-4 layer structure, the C4.5 decision tree model and the XGboost calculation module model, selects 2 closer values from the three output values, and calculates the average value of the two values to serve as an output result of the emotion valence prediction model.
And finally, obtaining the corresponding emotional arousal degree and emotional valence degree, namely the emotional condition of the monitored person.
In a preferred embodiment, in step 5, the emotional condition of the monitored person is compared to the average emotional arousal level value and the average emotional valence value in the calm state,
when the monitored person is engaged in important, high-difficulty or high-intensity work, alarm information is sent
when the emotional agitation degree is more than 1.5 standard deviations above the average emotional agitation value in the calm state,
or the emotional agitation degree is more than 1 standard deviation below the average emotional agitation value in the calm state,
or the emotional valence is more than 1.5 standard deviations below the average emotional valence value in the calm state.
For the calm state, a data set collected from at least 100 participants in a calm state, i.e. the set of all the emotion values collected from them, is selected; its mean and standard deviation are calculated and used as the basis for judgment, and the critical value that triggers an alarm is set at the mean plus 1.5 (or 1) standard deviations. That is, among the emotion values obtained through the emotion judgment model, if the obtained emotional valence value is higher than the average valence value plus 1-1.5 standard deviations, or lower than the average valence value minus 1-1.5 standard deviations, or the obtained emotional agitation value is higher than the average agitation value plus 1-1.5 standard deviations, or lower than the average agitation value minus 1-1.5 standard deviations, whether to trigger an alarm is decided according to the circumstances of the person concerned.
Important, high-difficulty or high-intensity work refers to work that requires remaining in a dangerous environment or dangerous working conditions for long periods, such as work at heights, driving, operating engineering machinery such as cranes, and the control and maintenance of important facilities such as power plants.
Ordinary work refers to occupations that are not exposed to high risk or substantial potential danger and involve ordinary labor intensity, such as clerks, editors, service-industry workers, students, teachers and librarians; for these, alarm information is likewise sent
when the emotional agitation degree is more than 1.5 standard deviations above the average emotional agitation value in the calm state,
or the emotional agitation degree is more than 1 standard deviation below the average emotional agitation value in the calm state,
or the emotional valence is more than 1.5 standard deviations below the average emotional valence value in the calm state.
When the emotional agitation degree of the monitored person is more than 2 standard deviations above the average emotional agitation value in the calm state and the emotional valence is more than 2 standard deviations below the average emotional valence value in the calm state, the monitored person may pose a potential threat to public safety and alarm information is sent. The potential threat mainly concerns persons in public places who could endanger the safety of others when in an extreme emotional state, such as bus drivers, crane and excavator operators, and passengers in railway stations or airport waiting halls.
The average emotional arousal value is the middle value representing calm in the emotion awakening tag of substep 2, and the average emotional valence value is the middle value representing calm in the emotion valence tag of substep 2.
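A sketch of the threshold logic of step 5 follows; the thresholds are those given above (the text specifies the same 1.5/1/1.5 standard-deviation thresholds for both important and ordinary work, plus the 2-standard-deviation public-safety condition), while the baseline dictionary layout, function name and return values are illustrative.

```python
def should_alarm(arousal, valence, baseline):
    """baseline: calm-state statistics, e.g.
    {"arousal_mean": ..., "arousal_sd": ..., "valence_mean": ..., "valence_sd": ...}."""
    am, asd = baseline["arousal_mean"], baseline["arousal_sd"]
    vm, vsd = baseline["valence_mean"], baseline["valence_sd"]

    # Public-safety alert: both dimensions more than 2 SD beyond the calm baseline.
    if arousal > am + 2 * asd and valence < vm - 2 * vsd:
        return "public_safety_alarm"

    # Ordinary alarm: arousal 1.5 SD above or 1 SD below baseline,
    # or valence 1.5 SD below baseline.
    if arousal > am + 1.5 * asd or arousal < am - 1 * asd or valence < vm - 1.5 * vsd:
        return "alarm"

    return None
```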
Examples
An emotion judgment model was established. Specifically, 100 participants were selected and tracked continuously for 1 month. Each participant spent at least 12 hours a day within the shooting range of a camera, with the face capturable by the camera for at least 8 of those hours. The camera acquired each participant's facial expression and the brightness change of the hair-free part of the face in real time; the brightness change was converted in real time into heart beat interval data, which were in turn converted into activity indices of the sympathetic and parasympathetic nerves. Three times a day, each participant recorded the degree of emotional agitation in the emotion awakening tag and the emotional valence in the emotion valence tag, both tags having 10 numerical value gears: every morning the participant recorded the average emotional agitation and valence for that morning, every afternoon the average emotional agitation and valence for that afternoon, and every evening the average emotional agitation and valence for that evening.
A total of 610820 pieces of RRI data and 415128 pieces of facial expression data were obtained. The RRI data were then converted into activity indices of the sympathetic and parasympathetic nerves. 9000 records containing emotional arousal tags and emotional valence tags were collected, and each piece of emotion tag data was combined with the several pieces of comprehensive nerve activity index data corresponding to it to form one piece of basic data, giving 9000 pieces of basic data.
The total passing rate was then obtained: all 9000 pieces of basic data were randomly divided into 9 parts, one part serving as the test group and the rest as the learning group; the model was trained on the learning group and verified with the data in the test group to obtain a verification result for each piece of test-group data; another part was then taken as the test group, and this was repeated 9 times so that every piece of data was assigned to the test group once and obtained a corresponding verification result. The total passing rate obtained was 88%, higher than 85%, so the next processing was carried out.
Abnormal data were then eliminated from the basic data to obtain the available data. Specifically:
the average passing rate was calculated: all basic data were again divided into 9 parts, one part serving as the test group and the rest as the learning group; the model was trained on the learning group and verified with the data in the test group to obtain a verification result for each piece of data; the test group and learning group were then reassigned and the process repeated at least 81 times, guaranteeing that each piece of basic data entered the test group at least 9 times, i.e. each piece of basic data obtained 9 corresponding verification results, from which its average passing rate was obtained;
the 1 piece of basic data with the lowest average passing rate was found and hidden, the procedure for obtaining the average passing rate and the total passing rate was executed again with the remaining 8999 pieces of basic data, and it was observed whether the total passing rate rose compared with before the data was hidden; if it rose, the hidden uniform-format basic data was deleted; if it did not rise, the hidden data was restored, the basic data with the second-lowest average passing rate was selected and hidden, and the process of obtaining the total passing rate was repeated until the total passing rate rose;
after the total passing rate rose, the hidden data was deleted and the procedure for obtaining the average passing rate was continued on the remaining basic data: the average passing rate of each piece of basic data was calculated, the data with the lowest average passing rate was found and hidden, the total passing rate was obtained on that basis, and this elimination process was repeated continuously.
Thus, each time the total passing rate rose, the hidden data was deleted and the process was continued on the remaining basic data.
The process was stopped when the number of deleted pieces of data reached 2700, and the remaining data were taken as the available data.
An emotional arousal prediction model and an emotional valence prediction model are obtained from the available data, and in particular,
the method comprises the steps that available data are used for scouring a one-dimensional convolutional neural network to obtain a one-dimensional convolutional neural network model, available data are used for scouring a C4.5 decision tree to obtain a C4.5 decision tree model, available data are used for scouring an XGboost calculation module to obtain an XGboost calculation module model, and the three models are combined to form an emotion judgment model; when the emotion judgment model receives new comprehensive nerve activity indexes and facial expression information, copying the received information into 3 parts, and respectively transmitting the 3 parts to a one-dimensional convolutional neural network model, a C4.5 decision tree model and an XGboost calculation module model, wherein the output value of the emotion judgment model is the average value of 2 closer values in the output of 3 models given by the three models, so that an emotion awakening prediction model and an emotion valence prediction model are obtained, and the emotion judgment model is obtained.
Selecting 50 monitored personnel working in a nuclear power station main control room, and shooting in real time through a camera to obtain an image photo containing the face of the monitored personnel;
distinguishing the identity information of each monitored person in the image picture through face recognition, and setting a corresponding independent storage unit for each monitored person;
obtaining the heart beat interval and the facial expression of the monitored person by reading the image picture, and storing the heart beat interval and the facial expression in an independent storage unit;
inputting the heart beating interval and the facial expression into an emotion judgment model in real time to judge the emotion condition of the monitored person;
after 12 hours, the emotional change condition of the 50 monitored persons within 12 hours is obtained, wherein the emotional condition of one person is shown in fig. 2 and fig. 3; wherein, fig. 2 shows the change curve of the emotional arousal, i.e. the emotional agitation degree of the monitored person, the abscissa in the graph represents the time, the ordinate represents the emotional agitation degree, and the higher the value is, the stronger the agitation degree is represented; in fig. 2, the dotted line in the middle represents the average emotional arousal value;
FIG. 3 shows a variation curve of the emotional valence of the monitored person, wherein the abscissa of the graph represents time, the ordinate represents emotional valence value, and higher values represent larger emotional valence; in fig. 3, the dotted line in the middle represents the average emotional valence value;
the emotional arousal degree of 50 monitored personnel is always within the range of the average arousal degree value plus or minus 0.7 standard deviation, the emotional valence is kept within the range of the average emotional effect value plus or minus 1 standard deviation under the calm state, and alarm information does not need to be sent out.
The monitored persons are then asked to self-evaluate their emotional changes within the 14-hour period; fig. 4 shows the self-evaluation of one monitored person. The evaluation uses a sliding-bar interface with the following scheme: an emotional condition evaluation value is given for the first 6 hours after getting up, i.e. an emotional evaluation value for the period from getting up to the first evaluation; an emotional condition evaluation value is given for the afternoon, 6-10 hours after getting up, i.e. an emotional evaluation value for the period from the first evaluation to the second evaluation; and an emotional condition evaluation value is given for the evening, more than 10 hours after getting up and before sleeping, i.e. an emotional evaluation value for the period from the second evaluation to the third evaluation.
the self-evaluation emotional conditions of 50 monitored personnel are counted and compared with the emotional conditions obtained by the emotion judgment model, and the matching rate of the self-evaluation emotional conditions is found to reach 85%.
According to the results, the emotion early warning method based on the camera can timely and accurately judge the emotion change condition of the monitored person.
The present invention has been described above in connection with preferred embodiments, but these embodiments are merely exemplary and illustrative. Various substitutions and modifications may be made on this basis, and all such substitutions and modifications fall within the protection scope of the invention.

Claims (9)

1. A camera-based emotion early warning method is characterized by comprising the following steps:
step 1, shooting in real time through a camera to obtain an image photo containing the face of a monitored person;
step 2, distinguishing the identity information of each monitored person in the image picture through face recognition, and setting a corresponding independent storage unit for each monitored person;
step 3, obtaining the heart beat interval and the facial expression of the monitored person by reading the image picture, and storing the heart beat interval and the facial expression in an independent storage unit;
step 4, inputting the heart beating interval and the facial expression into an emotion judgment model in real time to judge the emotion condition of the monitored person;
and 5, sending alarm information when the emotion of the monitored person is in an early warning state.
2. The camera-based emotional early warning method of claim 1,
said step 3 comprises the sub-steps of,
a substep a, screening the image photos obtained in the step 1, and deleting the image photos which cannot be used for reading the heart beating interval;
in the remaining image photos, locating a hair-free part of the face as a detection area by using relative geometric relations;
measuring the brightness change of this area across the consecutive image photos to obtain a continuous array,
preferably, the array is a curve describing the heartbeat activity of the person: a higher average brightness corresponds to diastole, i.e. a trough of the cardiac waveform, and a lower average brightness corresponds to systole, i.e. a peak of the cardiac waveform; the time interval between two peaks is the heart beat interval.
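A minimal Python sketch of this brightness-based interval extraction, assuming each frame passed in is already the cropped detection area as a numeric array; the minimum peak spacing is an illustrative assumption.

    import numpy as np
    from scipy.signal import find_peaks

    def heartbeat_intervals(region_frames, fps):
        """Estimate heart beat intervals (seconds) from the average brightness of the
        hair-free detection area across consecutive frames."""
        # One value per frame: the mean pixel intensity of the detection area.
        brightness = np.array([np.mean(frame) for frame in region_frames], dtype=float)
        # Lower brightness corresponds to systole (a peak of cardiac activity),
        # so invert the signal before locating peaks.
        signal = brightness.max() - brightness
        # Assume successive heartbeats are at least 0.4 s apart (at most about 150 bpm).
        peaks, _ = find_peaks(signal, distance=max(1, int(0.4 * fps)))
        # The interval between two successive peaks is the heart beat interval.
        return np.diff(peaks) / fps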
3. The camera-based emotional early warning method of claim 1,
the emotion judgment model is obtained by the following substeps:
substep 1, collecting physiological data and facial expressions by a collecting device, the physiological data including heart beat intervals, and converting the physiological data into activity indicators of sympathetic nerves and parasympathetic nerves;
substep 2, setting an emotional arousal tag and an emotion valence tag, recording the specific degree of emotional arousal in the emotional arousal tag and the specific emotional valence in the emotion valence tag, and combining the comprehensive nerve activity index data, the facial expression data and the emotion tags into basic data;
substep 3, adjusting the format of the basic data to obtain basic data with a uniform format, and judging whether the basic data with the uniform format meets the requirement;
substep 4, selecting available data from the basic data meeting the requirement and in a unified format;
and a substep 5 of obtaining an emotion judgment model according to the available data in the substep 4.
4. The camera-based emotional early warning method of claim 3,
each integrated neural activity indicator includes one or more of the following data: the activity index of the sympathetic nerve, the activity index of the parasympathetic nerve, the quotient of the activity index of the sympathetic nerve and the activity index of the parasympathetic nerve, the sum of the activity index of the sympathetic nerve and the activity index of the parasympathetic nerve, and the difference between the activity index of the sympathetic nerve and the activity index of the parasympathetic nerve.
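As a small illustration, the following Python sketch assembles these derived quantities from the two activity indices; it assumes the sympathetic and parasympathetic indices have already been computed from the heart beat intervals (for example by heart rate variability analysis, which is an assumption rather than a detail given here).

    def comprehensive_indicators(sympathetic, parasympathetic):
        """Derived quantities of claim 4, built from the two activity indices
        (both assumed to be positive, non-zero values)."""
        return {
            "sympathetic": sympathetic,
            "parasympathetic": parasympathetic,
            "quotient": sympathetic / parasympathetic,
            "sum": sympathetic + parasympathetic,
            "difference": sympathetic - parasympathetic,
        }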
5. The camera-based emotional early warning method of claim 3,
the sub-step 3 of judging whether the basic data in the unified format meets the requirements comprises the following sub-steps:
a sub-step 1, randomly dividing all the basic data in the unified format into a learning group and an inspection group according to a predetermined proportion;
a sub-step 2, training the model with the data in the learning group, verifying the model one by one with each piece of data in the inspection group, and recording the verification result of each piece of data in the inspection group respectively;
a sub-step 3, repeating sub-step 1 and sub-step 2, wherein basic data in the unified format that has already been assigned to an inspection group is not assigned to an inspection group again, so that every piece of basic data in the unified format is used, in some inspection group, to verify a model trained with the data of the corresponding learning group, until verification results corresponding to all the basic data in the unified format are obtained;
and a sub-step 4, calculating the total passing rate of all basic data verification results in the unified format, wherein when the total passing rate is greater than 85%, the basic data in the unified format meets the requirement, otherwise, deleting the basic data in the unified format, and repeating the sub-steps 1 and 2.
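A minimal Python sketch of this group-wise verification and total passing rate check; the train and verify callables are hypothetical stand-ins for the model-building and per-record verification steps, and each verification result is treated as a simple pass/fail.

    import random

    def total_passing_rate(records, train, verify, test_fraction=0.2):
        """Assign each record to exactly one inspection group, train a model on the
        corresponding learning group, verify the inspection records one by one, and
        return the overall pass rate of the recorded results."""
        shuffled = list(records)
        random.shuffle(shuffled)
        group_size = max(1, int(len(shuffled) * test_fraction))
        results = []
        for start in range(0, len(shuffled), group_size):
            inspection = shuffled[start:start + group_size]              # never reused as inspection data
            learning = shuffled[:start] + shuffled[start + group_size:]
            model = train(learning)                                      # train on the learning group
            results += [verify(model, record) for record in inspection]  # True means passed
        return sum(results) / len(results)

    # The unified-format basic data meets the requirement when
    # total_passing_rate(...) is greater than 0.85 (sub-step 4).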
6. The camera-based emotional early warning method of claim 5,
the obtaining of available data in substep 4 comprises the following substeps:
a sub-step a, repeating sub-steps 1-3 multiple times, each repetition of sub-step 1 producing an inspection group composed of different basic data in the unified format, so that each piece of basic data in the unified format corresponds to a plurality of verification results; the average passing rate corresponding to each piece of basic data in the unified format is then calculated respectively;
sub-step b, finding and hiding 1 piece of basic data with the lowest average passing rate and in the unified format, executing sub-steps 1-4 again by using the residual basic data in the unified format, observing whether the total passing rate is increased compared with that before hiding the data, if the total passing rate is increased, deleting the hidden basic data in the unified format, and executing sub-step c; if the total passing rate is not improved, recovering the hidden data, selecting and hiding basic data in a uniform format with the second lowest average passing rate, and repeating the above processes until the total passing rate is improved;
and a sub-step c, after the total passing rate increases, repeating sub-step a and sub-step b on the basis of the currently remaining basic data in the unified format, and continuing to do so each time the total passing rate increases, until the total passing rate exceeds 90% or the deleted basic data in the unified format reaches 30% of the basic data in the unified format; the remaining basic data in the unified format is the available data.
7. The camera-based emotional early warning method of claim 2,
in substep 5, the emotion judgment model comprises the emotion arousal prediction model and an emotion valence prediction model;
in the process of obtaining the emotion judgment model, the comprehensive nerve activity index data, the facial expression data and the emotional arousal data in each piece of available data are spliced into a data segment, which is used as learning material, and the emotional arousal prediction model is obtained through machine learning;
and splicing the comprehensive nerve activity index data, the facial expression data and the emotion valence data in each available data into a data segment, using the data segment as a learning material, and obtaining an emotion valence prediction model through machine learning.
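A minimal Python sketch of this splicing step, assuming each available data record already carries a numeric comprehensive nerve activity index vector, a facial expression feature vector and the two emotion labels; the field names are hypothetical.

    import numpy as np

    def build_training_segments(available_data, label_key):
        """Splice the neural activity indices and facial expression features of each
        record into one data segment, paired with the chosen emotion label
        ('arousal' for the arousal model, 'valence' for the valence model)."""
        segments, labels = [], []
        for record in available_data:
            segment = np.concatenate([record["neural_indices"],
                                      record["expression_features"]])
            segments.append(segment)
            labels.append(record[label_key])
        return np.array(segments), np.array(labels)

    # X_a, y_a = build_training_segments(data, "arousal")   # learning material for the arousal model
    # X_v, y_v = build_training_segments(data, "valence")   # learning material for the valence model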
8. The camera-based emotional early warning method of claim 1,
the emotional conditions in step 4 include the degree of emotional agitation and the valence of emotion.
9. The camera-based emotional early warning method of claim 1,
in step 5, the emotional condition of the monitored person is compared with the average emotional agitation degree value and the average emotional valence value in the calm state,
when the monitored personnel are engaged in important, difficult or intensive tasks,
the emotional agitation degree is 1.5 standard deviations higher than the average emotional agitation degree value in a calm state,
or the emotional agitation degree is 1 standard deviation lower than the average emotional agitation degree value in a calm state,
or when the emotional valence is more than 1.5 standard deviations below the average emotional valence value in a calm state, alarm information is sent;
when the monitored personnel are engaged in ordinary work,
the emotional agitation degree is 1.5 standard deviations higher than the average emotional agitation degree value in a calm state,
or the emotional agitation degree is 1 standard deviation lower than the average emotional agitation degree value in a calm state,
or when the emotional valence is more than 1.5 standard deviations below the average emotional valence value in a calm state, alarm information is sent;
when the emotional agitation degree of the monitored personnel is more than 2 standard deviations higher than the average emotional agitation degree value in the calm state and the emotional valence is more than 2 standard deviations lower than the average emotional valence value in the calm state, the monitored personnel may pose a potential threat to public safety, and alarm information is sent.
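A minimal Python sketch of this alarm rule; the calm-state means and standard deviations are assumed to have been estimated beforehand, and, since the thresholds stated above are the same for important and ordinary work, the type of work is not a separate parameter here.

    def should_alarm(arousal, valence,
                     calm_arousal_mean, calm_arousal_sd,
                     calm_valence_mean, calm_valence_sd):
        """Alarm decision following the thresholds of claim 9."""
        high_arousal = arousal > calm_arousal_mean + 1.5 * calm_arousal_sd
        low_arousal = arousal < calm_arousal_mean - 1.0 * calm_arousal_sd
        low_valence = valence < calm_valence_mean - 1.5 * calm_valence_sd
        # Potential threat to public safety: both deviations exceed 2 standard deviations.
        severe = (arousal > calm_arousal_mean + 2.0 * calm_arousal_sd and
                  valence < calm_valence_mean - 2.0 * calm_valence_sd)
        return high_arousal or low_arousal or low_valence or severe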
CN202110352232.3A 2021-03-31 2021-03-31 Emotion early warning method based on camera Active CN113143274B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110352232.3A CN113143274B (en) 2021-03-31 2021-03-31 Emotion early warning method based on camera

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202110352232.3A CN113143274B (en) 2021-03-31 2021-03-31 Emotion early warning method based on camera

Publications (2)

Publication Number Publication Date
CN113143274A true CN113143274A (en) 2021-07-23
CN113143274B CN113143274B (en) 2023-11-10

Family

ID=76886333

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110352232.3A Active CN113143274B (en) 2021-03-31 2021-03-31 Emotion early warning method based on camera

Country Status (1)

Country Link
CN (1) CN113143274B (en)


Patent Citations (21)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101095612A (en) * 2006-06-28 2008-01-02 株式会社东芝 Apparatus and method for monitoring biological information
US20140221866A1 (en) * 2010-06-02 2014-08-07 Q-Tec Systems Llc Method and apparatus for monitoring emotional compatibility in online dating
JP2012059107A (en) * 2010-09-10 2012-03-22 Nec Corp Emotion estimation device, emotion estimation method and program
US20150025403A1 (en) * 2013-04-15 2015-01-22 Yonglin Biotech Corp. Mood analysis method, system, and apparatus
CN104112055A (en) * 2013-04-17 2014-10-22 深圳富泰宏精密工业有限公司 System and method for analyzing and displaying emotion
US20170007165A1 (en) * 2015-07-08 2017-01-12 Samsung Electronics Company, Ltd. Emotion Evaluation
US20170105662A1 (en) * 2015-10-14 2017-04-20 Panasonic Intellectual Property Corporation of Ame Emotion estimating method, emotion estimating apparatus, and recording medium storing program
CN108882883A (en) * 2015-12-09 2018-11-23 安萨尔集团有限公司 Parasympathetic autonomic nerves system is measured to while sympathetic autonomic nerves system to independent activities, related and analysis method and system
US20180005137A1 (en) * 2016-06-30 2018-01-04 Cal-Comp Electronics & Communications Company Limited Emotion analysis method and electronic apparatus thereof
CN109890289A (en) * 2016-12-27 2019-06-14 欧姆龙株式会社 Mood estimates equipment, methods and procedures
US20180314879A1 (en) * 2017-05-01 2018-11-01 Samsung Electronics Company, Ltd. Determining Emotions Using Camera-Based Sensing
CN110621228A (en) * 2017-05-01 2019-12-27 三星电子株式会社 Determining emotions using camera-based sensing
CN107506716A (en) * 2017-08-17 2017-12-22 华东师范大学 A kind of contactless real-time method for measuring heart rate based on video image
CN110422174A (en) * 2018-04-26 2019-11-08 李尔公司 Biometric sensor is merged to classify to Vehicular occupant state
CN109670406A (en) * 2018-11-25 2019-04-23 华南理工大学 A kind of contactless emotion identification method of combination heart rate and facial expression object game user
CN110200640A (en) * 2019-05-14 2019-09-06 南京理工大学 Contactless Emotion identification method based on dual-modality sensor
US20210034891A1 (en) * 2019-08-01 2021-02-04 Denso Corporation Emotion estimation device
CN111881812A (en) * 2020-07-24 2020-11-03 中国中医科学院针灸研究所 Multi-modal emotion analysis method and system based on deep learning for acupuncture
CN112263252A (en) * 2020-09-28 2021-01-26 贵州大学 PAD (PAD application aided differentiation) emotion dimension prediction method based on HRV (high resolution video) features and three-layer SVR (singular value representation)
CN112220455A (en) * 2020-10-14 2021-01-15 深圳大学 Emotion recognition method and device based on video electroencephalogram signals and computer equipment
CN112507959A (en) * 2020-12-21 2021-03-16 中国科学院心理研究所 Method for establishing emotion perception model based on individual face analysis in video

Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
INTERNATIONAL JOURNAL OF DISTRIBUTED SENSOR NETWORKS: "Photoplethysmography based psychological stress detection with pulse rate variability feature differences and elastic net", pages 120 - 121 *
孔璐璐: "Research on Drivers' Anger Emotion Based on the Fusion of Facial Expression and Pulse Information", pages 138 - 308 *
李昌竹, 郑士春, 陆梭 et al.: "A Study on the Relationship Between Heart Rate Variability and the Neuroticism Dimension of Personality", 《心理与行为研究》 (Studies of Psychology and Behavior), pages 275 - 280 *

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115316991A (en) * 2022-01-06 2022-11-11 中国科学院心理研究所 Self-adaptive recognition early warning method for excited emotion
CN115316991B (en) * 2022-01-06 2024-02-27 中国科学院心理研究所 Self-adaptive recognition early warning method for irritation emotion
WO2023137995A1 (en) * 2022-01-24 2023-07-27 中国第一汽车股份有限公司 Monitoring method for preventing scratching and theft of vehicle body, and vehicle body controller and vehicle

Also Published As

Publication number Publication date
CN113143274B (en) 2023-11-10

Similar Documents

Publication Publication Date Title
Haque et al. Deep multimodal pain recognition: a database and comparison of spatio-temporal visual modalities
US20210174934A1 (en) Remote assessment of emotional status
US11928632B2 (en) Ocular system for deception detection
Hossain et al. Classifying posed and real smiles from observers' peripheral physiology
US20210020295A1 (en) Physical function independence support device of physical function and method therefor
CN109222888A (en) A method of psychological test reliability is judged based on eye movement technique
CN112957042B (en) Non-contact target emotion recognition method and system
CN113143274A (en) Emotion early warning method based on camera
KR20140041382A (en) Method for obtaining information about the psychophysiological state of a living being
CN113647950A (en) Psychological emotion detection method and system
CN114792553A (en) Method and system for screening psychological health group of students
Mitsuhashi et al. Video-based stress level measurement using imaging photoplethysmography
CN108652587A (en) A kind of cognition dysfunction provisional monitor device
EP3424408B1 (en) Fatigue state determination device and fatigue state determination method
CN211862821U (en) Autism auxiliary evaluation system based on deep learning
Muaremi et al. Monitor pilgrims: prayer activity recognition using wearable sensors
Jain et al. Mental health state detection using open cv and sentimental analysis
CN110569968B (en) Method and system for evaluating entrepreneurship failure resilience based on electrophysiological signals
CN111898574A (en) Standing walking test analysis system and method
US10631727B2 (en) Method and system for detecting time domain cardiac parameters by using pupillary response
Anumas et al. Driver fatigue monitoring system using video face images & physiological information
CN114098729B (en) Heart interval-based emotion state objective measurement method
CN209734011U (en) Non-contact human body state monitoring system
CN113362951A (en) Human body infrared thermal structure attendance and health assessment and epidemic prevention early warning system and method
KR101940673B1 (en) Evaluation Method of Empathy based on micro-movement and system adopting the method

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
TA01 Transfer of patent application right

Effective date of registration: 20220311

Address after: 100101 courtyard 16, lincui Road, Chaoyang District, Beijing

Applicant after: INSTITUTE OF PSYCHOLOGY, CHINESE ACADEMY OF SCIENCES

Address before: 101400 3rd floor, 13 Yanqi street, Yanqi Economic Development Zone, Huairou District, Beijing

Applicant before: Beijing JingZhan Information Technology Co.,Ltd.

GR01 Patent grant