CN113143274B - Emotion early warning method based on camera - Google Patents

Emotion early warning method based on camera

Info

Publication number: CN113143274B
Application number: CN202110352232.3A
Authority: CN (China)
Prior art keywords: emotion, data, sub, basic data, average
Legal status: Active
Other languages: Chinese (zh)
Other versions: CN113143274A
Inventors: 李风华, 刘正奎
Current Assignee: Institute of Psychology of CAS
Original Assignee: Institute of Psychology of CAS
Application filed by the Institute of Psychology of CAS
Priority to CN202110352232.3A
Publication of CN113143274A
Application granted
Publication of CN113143274B

Classifications

    • A61B: Diagnosis; Surgery; Identification (A: Human necessities; A61: Medical or veterinary science; Hygiene)
    • A61B 5/00: Measuring for diagnostic purposes; identification of persons
    • A61B 5/16: Devices for psychotechnics; testing reaction times; devices for evaluating the psychological state
    • A61B 5/165: Evaluating the state of mind, e.g. depression, anxiety
    • A61B 5/0059: Measuring for diagnostic purposes using light, e.g. diagnosis by transillumination, diascopy, fluorescence
    • A61B 5/0077: Devices for viewing the surface of the body, e.g. camera, magnifying lens
    • A61B 5/74: Details of notification to user or communication with user or patient; user input means
    • A61B 5/746: Alarms related to a physiological condition, e.g. details of setting alarm thresholds or avoiding false alarms
    • Y02D: Climate change mitigation technologies in information and communication technologies
    • Y02D 10/00: Energy efficient computing, e.g. low power processors, power management or thermal management


Abstract

The invention discloses a camera-based emotion early-warning method. A photograph of the human body is obtained in real time through a camera, one or more human faces in the image are located by a face detection algorithm, and a hairless facial area is found using relative geometric relationships. The brightness change of that area is measured over successive images to obtain a continuous array, from which the heart beat intervals of the monitored person are derived. Combined with the monitored person's facial expression, an emotion judgment model then yields the person's emotional condition, and the method judges whether an alarm is needed.

Description

Emotion early warning method based on camera
Technical Field
The invention relates to the field of emotion monitoring and early warning, in particular to an emotion early warning method based on a camera.
Background
In daily work and life, a person's emotional state has an obvious influence on work performance, especially for long, continuous work or work requiring high concentration. For example, students in class are affected by their own emotional states; if teachers can provide appropriate guidance according to the students' emotional states, teaching efficiency can be improved.
For some particularly important posts, such as staff in the control room of a nuclear power station, the work requires sustained concentration and real-time judgment of the plant's operating state. If the emotional state of the staff could be known in real time, a shift change could be prompted promptly whenever the emotional state becomes unstable or obvious fatigue appears, which would further strengthen the safety of the nuclear power station and eliminate at the root the safety hazards caused by human oversight.
There are also special posts, such as bus drivers, where an extreme emotional state can pose a potential threat to public safety. If the driver's emotional state could be known in time, a prompt or alarm could be issued when necessary, which would further improve public safety.
At present, however, there is no technical solution that can effectively measure the emotion of ordinary working staff. Although a person's facial expression can be judged by face recognition, facial expression alone cannot fully reflect the person's emotional state, so research in this area needs to be further advanced.
In addition, the data measured by the prior art are one-dimensional, namely the degree of stress (called the degree of "frustration" in some studies), which in psychology is collectively termed emotional arousal (the dimension ranging from calm to surprise). Emotion, however, is not one-dimensional: emotional valence (the positive-negative quality of the emotion) is also an important index, and without a judgment of positive versus negative the judgment of emotion is incomplete. A highly aroused state may be extremely violent (negative) or ecstatic (positive), and a very calm state may likewise be despairing (negative) or serene (positive), which are very different states.
For the above reasons, the present inventors conducted intensive study of existing emotion recognition and judgment methods, with the aim of designing a camera-based emotion early-warning method capable of solving the above problems.
Disclosure of Invention
In order to overcome these problems, the inventors carried out intensive research and designed a camera-based emotion early-warning method. In this method, a photograph of the human body is obtained in real time through a camera; one or more human faces in the image are located by a face detection algorithm; a hairless facial area is found using relative geometric relationships; the brightness change of that area over successive images is measured to obtain a continuous array, from which the heart beat intervals of the monitored person are derived; the emotional condition of the monitored person is then obtained through an emotion judgment model combined with the person's facial expression, and whether an alarm is needed is judged. On this basis the present invention was completed.
In particular, it is an object of the present invention to provide the following aspects: a mood early warning method based on a camera comprises the following steps:
step 1, shooting in real time through a camera to obtain an image picture containing the face of a monitored person;
step 2, distinguishing the identity information of each monitored person in the image photo through face recognition, and setting a corresponding independent storage unit for each monitored person;
Step 3, obtaining heart beat intervals and facial expressions of the monitored person by reading the image photos, and storing the heart beat intervals and the facial expressions in an independent storage unit;
step 4, inputting the heart beat interval and the facial expression into an emotion judgment model in real time to judge the emotion condition of the monitored person;
and step 5, sending out alarm information when the emotion of the monitored person is in an early warning state.
Wherein, the step 3 comprises the following substeps,
a substep a of screening the image photos obtained in step 1, deleting the image photos which cannot be used for reading the heart beat interval;
step b, searching a face hairless part as a detection area in the rest image photos by utilizing a relative geometric relation;
a substep c, calculating the image brightness change of the region in the continuous image photos to obtain continuous arrays,
preferably, the array is a curve describing the human heartbeat activity, and the time when the average brightness is higher represents the diastole, namely the electrocardiographic trough, and the time when the average brightness is lower represents the systole, namely the electrocardiographic peak; the time interval between two peaks is the heart beat interval.
Wherein the emotion judgment model is obtained by the following substeps:
Substep 1, collecting, by a collecting device, physiological data and facial expressions, the physiological data comprising cardiac beat intervals, and converting the physiological data into activity indicators of sympathetic nerves and activity indicators of parasympathetic nerves;
step 2, setting an emotion awakening tag and an emotion valence tag, recording specific emotion excitation degree in the emotion awakening tag, recording specific emotion valence in the emotion valence tag, and combining comprehensive neural activity index data, facial expression data and emotion tag as basic data;
step 3, adjusting the format of the basic data to obtain basic data in a unified format, and judging whether the basic data in the unified format meets the requirements;
sub-step 4, selecting available data from the basic data in a uniform format meeting the requirements;
and 5, obtaining an emotion judgment model according to the available data in the substep 4.
Wherein each integrated neural activity indicator comprises one or more of the following data: the activity index of the sympathetic nerve, the activity index of the parasympathetic nerve, the quotient of the activity index of the sympathetic nerve and the activity index of the parasympathetic nerve, the sum of the activity index of the sympathetic nerve and the activity index of the parasympathetic nerve, and the difference between the activity index of the sympathetic nerve and the activity index of the parasympathetic nerve.
Wherein, judging whether the basic data in the unified format meets the requirements in the substep 3 comprises the following substeps:
sub-step 1, randomly classifying all basic data in a unified format into a learning group and a checking group according to a preset proportion;
sub-step 2, training a model with the data in the learning group, verifying the model one by one with each piece of data in the checking (test) group, and recording the verification result of each piece of test-group data;
repeating sub-step 1 and sub-step 2, where basic data in the unified format that has once been assigned to the test group is not assigned to the test group again, so that every piece of basic data in the unified format is at some point placed in the test group and used to verify a model trained on the learning-group data, until verification results corresponding to all the basic data in the unified format are obtained;
and sub-step 4, calculating the total pass rate of the verification results of the basic data in the unified format; when the total pass rate is greater than 85%, the basic data in the unified format meets the requirements; otherwise, the basic data in the unified format is discarded and sub-step 1 and sub-step 2 are repeated.
Wherein the obtaining of the available data in sub-step 4 comprises the sub-steps of:
Sub-step a, repeating sub-steps 1-3 several times, obtaining a test group composed of different basic data in the unified format each time sub-step 1 is repeated, so that each piece of basic data in the unified format corresponds to several verification results; the average pass rate corresponding to each piece of basic data in the unified format is then calculated;
sub-step b, finding and hiding the 1 piece of basic data in the unified format with the lowest average pass rate, executing sub-steps 1-4 again with the remaining basic data in the unified format, and observing whether the total pass rate improves compared with before the data was hidden; if the total pass rate improves, the hidden basic data in the unified format is deleted and sub-step c is executed; if the total pass rate does not improve, the hidden data is restored, the basic data in the unified format with the second-lowest average pass rate is selected and hidden, and the above process is repeated until the total pass rate improves;
and sub-step c, repeating sub-step a and sub-step b on the basic data in the unified format that remains after the total pass rate has risen, and continuing until the total pass rate reaches 90% or more, or the deleted basic data in the unified format reaches 30% of the total; the remaining basic data in the unified format is then the available data.
In sub-step 5, the emotion judgment model includes an emotion arousal (wake-up) prediction model and an emotion valence prediction model;
in the process of obtaining the emotion judgment model, the comprehensive neural activity index data, the facial expression data and the emotion awakening data in each available data are spliced into a data segment to be used as a learning material, and the emotion awakening prediction model is obtained through machine learning;
and (3) splicing the comprehensive neural activity index data, the facial expression data and the emotion valence data in each available data into a data segment, and obtaining an emotion valence prediction model through machine learning by taking the data segment as a learning material.
Wherein the emotional condition in step 4 includes the emotion excitation degree (arousal) and the emotion valence.
Wherein, in step 5, the emotional condition of the monitored person is compared with the average emotion excitation degree value and the average emotion valence value:
when the monitored person is engaged in important, high-difficulty or high-intensity work,
alarm information is sent out when the emotion excitation degree is more than 1.5 standard deviations above the average emotion excitation degree value in the calm state,
or the emotion excitation degree is more than 1 standard deviation below the average emotion excitation degree value in the calm state,
or the emotion valence is more than 1.5 standard deviations below the average emotion valence value in the calm state;
when the monitored person is engaged in ordinary work,
alarm information is sent out when the emotion excitation degree is more than 1.5 standard deviations above the average emotion excitation degree value in the calm state,
or the emotion excitation degree is more than 1 standard deviation below the average emotion excitation degree value in the calm state,
or the emotion valence is more than 1.5 standard deviations below the average emotion valence value in the calm state;
and when the emotion excitation degree of the monitored person is more than 2 standard deviations above the average emotion excitation degree value in the calm state and the emotion valence is more than 2 standard deviations below the average emotion valence value in the calm state, the monitored person is regarded as a potential threat to public safety and alarm information is sent out.
The invention has the beneficial effects that:
(1) The camera-based emotion early-warning method provided by the invention allows different alarm conditions to be set according to the different working natures of the monitored persons, widening the range of application of the method;
(2) The camera-based emotion early-warning method provided by the invention uses an emotion judgment model trained on a large number of samples, so that the emotional state of a monitored person can be calculated accurately and promptly from the image information;
(3) The camera-based emotion early-warning method provided by the invention adopts a two-dimensional emotion evaluation scheme, measuring emotional arousal as well as estimating emotion valence. Compared with existing 2-class or 4-class emotion evaluation techniques, it can output 100 emotion evaluations of different intensities and natures, so the result is more realistic, closer to common sense and easier for people to understand, giving the method practical usability in production and daily life;
(4) The camera-based emotion early-warning method provided by the invention can judge in real time whether a monitored person is suitable for taking part in important, high-difficulty or high-intensity work, and can also judge in real time whether a monitored person is experiencing intense psychological activity and poses a threat to public safety.
Drawings
Fig. 1 is a logic diagram showing an overall camera-based emotion warning method according to a preferred embodiment of the present invention;
fig. 2 shows the variation of the emotion excitation degree of a monitored person over one day, obtained with the camera-based emotion early-warning method in an embodiment of the present invention;
fig. 3 shows the variation of the emotion valence of a monitored person over one day, obtained with the camera-based emotion early-warning method in an embodiment of the present invention;
FIG. 4 shows an assessment display of the emotional condition of a person being monitored during his day in an embodiment of the invention.
Detailed Description
The invention is further described in detail below by means of the figures and examples. The features and advantages of the present invention will become more apparent from the description.
The word "exemplary" is used herein to mean "serving as an example, embodiment, or illustration." Any embodiment described herein as "exemplary" is not necessarily to be construed as preferred or advantageous over other embodiments. Although various aspects of the embodiments are illustrated in the accompanying drawings, the drawings are not necessarily drawn to scale unless specifically indicated.
According to the emotion early warning method based on the camera, which is provided by the invention, as shown in fig. 1; the method comprises the following steps:
step 1, shooting in real time through a camera to obtain an image picture containing the face of a monitored person;
step 2, distinguishing identity information of each monitored person in the image photo, and setting a corresponding independent storage unit for each monitored person;
step 3, obtaining heart beat intervals and facial expressions of the monitored person by reading the image photos, and storing the heart beat intervals and the facial expressions in an independent storage unit;
Step 4, inputting the heart beat interval and the facial expression into an emotion judgment model in real time to judge the emotion condition of the monitored person;
and step 5, sending out alarm information when the emotion of the monitored person is in an early warning state.
In a preferred embodiment, in step 1, the camera may be disposed near the working position of the monitored person. For example, if the monitored person is an operator in the control room of a nuclear power station, the camera may be placed near the display screen; if the monitored person is a student, the camera may be placed near the blackboard; if the monitored person is a bus driver, the camera may be placed near the front window. That is, the camera is preferably placed facing the monitored person so that the face is captured as much as possible; the focal length of the camera may be adjustable, and several monitored persons may be photographed at the same time.
In a preferred embodiment, in step 2, the identity information of each monitored person in the image photo is distinguished by face recognition, and a corresponding independent storage unit is set for each monitored person; an openly available face recognition tool can be selected for this processing.
In a preferred embodiment, said step 3 comprises the sub-steps of,
a substep a of screening the image photos obtained in step 1, deleting the image photos which cannot be used for reading the heart beat interval;
specifically, the images to be deleted include: (1) an image in which the average change in brightness of the face image is greater than 1 (brightness range 0-255) in a time window of 1000 milliseconds; (2) an image of a face not captured by an openface; (3) the face contour displacement exceeds 1% of the vertical resolution of the picture in 1000 ms (if the vertical resolution is 1080 pixels, the contour displacement exceeds 10 pixels in principle, i.e. the picture is deleted during this period).
Step b, searching a face hairless part as a detection area in the rest image photos by utilizing a relative geometric relation; i.e. forehead and cheek are used as detection areas.
A substep c, calculating the image brightness change of the region in the continuous image photos to obtain continuous arrays,
preferably, the array is a curve describing the human heartbeat activity, and the time when the average brightness is higher represents the diastole, namely the electrocardiographic trough, and the time when the average brightness is lower represents the systole, namely the electrocardiographic peak; the time interval between two peaks is the heart beat interval.
A Butterworth filter is used to filter the original signal, keeping the 0.5-2 Hz band, and peaks and troughs are searched for in the filtered signal, where a peak is a point whose value is larger than those on both sides within a 500-millisecond signal window, and a trough is a point whose value is smaller than those on both sides within a 500-millisecond signal window;
preferably in the present application, generating a set of data requires continuous acquisition of pictures for at least 7 seconds at a sampling frequency of not less than 25 Hz.
Preferably, the continuous image data acquired by the camera may be colour or black-and-white; for colour images, the green channel of the red-green-blue image is preferably used. In addition, through a colour-coordinate transformation, the red, green and blue channels can be converted into the HSV (hue, saturation, value) colour space, in which the V channel (the brightness/value channel) is used as the input index, or into the HLS colour space, in which the L channel (the lightness channel) is used as the input index. The temporal frequency of capturing images should be above 25 Hz, i.e. at least 25 consecutive frames are captured per second.
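For illustration, a minimal sketch of sub-step c together with the filtering and peak search described above, assuming a brightness time series sampled at 25 Hz or more; the function and variable names are illustrative and not taken from the patent.

```python
import numpy as np
from scipy.signal import butter, filtfilt

def beat_intervals(brightness: np.ndarray, fs: float = 30.0) -> np.ndarray:
    """Estimate heart beat intervals (seconds) from the mean brightness of the hairless
    face region, using the 0.5-2 Hz Butterworth band and the 500 ms local-extremum rule."""
    # band-pass the raw brightness signal to the 0.5-2 Hz heart-rate band
    b, a = butter(3, [0.5, 2.0], btype="bandpass", fs=fs)
    filtered = filtfilt(b, a, brightness)

    # a peak is a sample larger than every other sample within a 500 ms window around it
    half = int(round(0.25 * fs))              # half of the 500 ms window, in samples
    peaks = []
    for i in range(half, len(filtered) - half):
        window = filtered[i - half:i + half + 1]
        if filtered[i] == window.max() and filtered[i] > filtered[i - 1]:
            peaks.append(i)

    peaks = np.asarray(peaks)
    return np.diff(peaks) / fs                # time between successive peaks = beat interval

# usage: at least about 7 seconds of frames are needed to produce one group of data
# rri = beat_intervals(mean_brightness_series, fs=30.0)
```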
In a preferred embodiment, the emotion judgment model is obtained by the sub-steps of:
substep 1, collecting, by a collecting device, physiological data and facial expressions, the physiological data comprising cardiac beat intervals, and converting the physiological data into activity indicators of sympathetic nerves and activity indicators of parasympathetic nerves; the cardiac beat interval is also referred to as the R-R interval.
Step 2, setting an emotion awakening tag and an emotion valence tag, selecting a specific emotion excitation degree in the emotion awakening tag, selecting a specific emotion valence in the emotion valence tag, and combining comprehensive neural activity index data, facial expression data and emotion tag as basic data;
step 3, adjusting the format of the basic data to obtain basic data in a unified format, and judging whether the basic data in the unified format meets the requirements;
sub-step 4, selecting available data from the basic data in a uniform format meeting the requirements;
and 5, obtaining an emotion judgment model according to the available data in the step 4.
In a preferred embodiment, the collection device comprises a wearable wristband, a smart watch, and a camera. Preferably, the collecting device may further comprise a massage chair, a treadmill, etc. When physiological data is collected through the collecting device and tag data is recorded, all the data can be transmitted to a remote server in real time for statistical storage, and a storage chip can be integrated in the collecting device for real-time storage and calculation processing.
Preferably, in sub-step 1, two sets of data are output from each collected cardiac beat interval, namely the activity index of the sympathetic nerve and the activity index of the parasympathetic nerve, which gives the scheme in this application a finer time granularity.
In sub-step 1, the two nerves jointly affect the heart beat, and the periodic interplay of their activity ultimately constitutes heart rate variability.
Preferably, a number of values capable of representing the degree of emotional arousal are set in the emotion wake-up tag, and the appropriate value can be selected according to the actual situation; preferably, 5-10 value gears are set in the emotion wake-up tag, and the closest value gear is selected according to the actual condition of the participant. The emotion wake-up tag characterises the degree of emotional arousal: the lowest value represents complete calm, and the larger the value, the more aroused the emotion.
The emotion valence tag is provided with a number of values capable of representing emotion valence, and the appropriate value can be selected according to the actual situation; preferably, 2-10 value gears are set in the emotion valence tag, and the closest value gear is selected according to the actual condition of the participant. The emotion valence tag characterises how positive or negative the emotion is: the lowest value represents the most negative, and the larger the value, the more positive the emotion. Emotion valence tags with the same number of value gears share a uniform data format, as do emotion wake-up tags with the same number of value gears.
Preferably, a normalized emotion arousal score is adopted as the original label score in the emotion wake-up tag;
preferably, the emotion valence tag adopts the PANAS standard score as the original label score, where for positive affect the average is 29.7 and the standard deviation is 7.9, and for negative affect the average is 14.8 and the standard deviation is 5.4.
Further preferably, in both the emotion wake-up tag and the emotion valence tag, the numerical range within plus or minus 1.96 standard deviations is divided into 10 parts according to the frequency of the data distribution.
Preferably, in sub-step 2, the emotion tags include an emotion wake tag and an emotion valence tag, which may be provided separately or simultaneously in the form of coordinates or a graph. The emotion wake-up tag is used for recording emotion wake-up data, and the emotion valence tag is used for recording emotion valence data.
Preferably, in substep 2, the integrated neural activity index is related to an activity index of sympathetic nerves and an activity index of parasympathetic nerves, each integrated neural activity index including one or more of the following data: the activity index of the sympathetic nerve, the activity index of the parasympathetic nerve, the quotient of the activity index of the sympathetic nerve and the activity index of the parasympathetic nerve, the sum of the activity index of the sympathetic nerve and the activity index of the parasympathetic nerve, the difference between the activity index of the sympathetic nerve and the activity index of the parasympathetic nerve, and the like.
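Purely as an illustration of how such combined features might be assembled (the function and its name are not from the patent):

```python
def integrated_neural_activity(sym: float, para: float) -> dict:
    """Candidate components of one integrated neural activity index, built from the
    sympathetic activity index (sym) and the parasympathetic activity index (para)."""
    return {
        "sympathetic": sym,
        "parasympathetic": para,
        "quotient": sym / para if para != 0 else float("inf"),
        "sum": sym + para,
        "difference": sym - para,
    }
```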
Preferably, the comprehensive neural activity index data are collected at a relatively high frequency: 60-90 or even more groups of comprehensive neural activity index data can be provided per minute, and each time such data are collected the facial expression at that moment is photographed by the camera as far as possible. Because the facial expression plays an auxiliary role in the model processing, the more comprehensively the facial expressions are collected, the better the model performs.
The collection frequency of the emotion labels is relatively low, the emotion labels can be collected once per hour or 2-5 times per day, and when the emotion labels are collected each time, the facial expressions at that time are collected through the camera. Therefore, each emotion label data corresponds to a plurality of comprehensive neural activity index data, and one emotion label data, facial expression data and a plurality of comprehensive neural activity index data corresponding to the emotion label data are combined together to form one basic data. Wherein each emotion tag data comprises emotion wake data and emotion valence data.
Preferably, the number of value gears in the emotion valence tag and the emotion wake-up tag may be the same or different, which can cause mismatches or misaligned data in the statistics. Therefore, in sub-step 3, adjusting the format of the basic data mainly means adjusting the values and the value gears in the emotion tag data. Specifically, a standard number of value gears is first set; if 5 value gears are chosen, the number of value gears in the basic data is adjusted to 5, and the gear values recorded in the basic data are then rescaled in proportion to gear values under the 5-gear scheme, rounding upwards whenever the division is not exact.
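A minimal sketch of that proportional rescaling, assuming the gears are numbered from 1; the concrete function is illustrative and not part of the patent.

```python
import math

def rescale_gear(value: int, old_gears: int, new_gears: int = 5) -> int:
    """Map a gear value from an old_gears-point scale onto a new_gears-point scale,
    scaling in proportion and rounding upwards when the division is not exact."""
    return math.ceil(value * new_gears / old_gears)

# e.g. gear 7 on a 10-gear tag maps to ceil(7 * 5 / 10) = 4 on a 5-gear tag
```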
In a preferred embodiment, the determining in sub-step 3 whether the base data in the unified format meets the requirements comprises the sub-steps of:
sub-step 1, randomly dividing all the basic data in the unified format into two groups, a learning group and a test group, according to a preset proportion; preferably, the ratio may be between 8:1 and 9:1, and more preferably the ratio of the number of data items in the learning group to the number in the test group is 8:1;
sub-step 2, training the model with the data in the learning group, then verifying the model one by one with each piece of data in the test group, and recording the verification result of each piece of test-group data, where the verification result is preferably either "passed" or "not passed". A piece of unified-format basic data in the test group passes verification when its comprehensive neural activity index data and facial expression data are fed into the model and the resulting emotion tag data agree with the emotion tag data in that basic data, i.e. both the emotion excitation degree and the emotion valence agree; it fails verification when the resulting emotion tag data disagree with the emotion tag data in the basic data, i.e. the emotion excitation degree and/or the emotion valence disagree;
repeating sub-step 1 and sub-step 2 several times, where unified-format basic data that has once been assigned to the test group is not assigned to the test group again, so that every piece of unified-format basic data is at some point placed in the test group and used to verify a model trained on the learning-group data, until all the unified-format basic data have obtained their corresponding verification results;
sub-step 4, calculating the total pass rate of the verification results of all the unified-format basic data, where the total pass rate is the ratio of the number of unified-format basic data whose verification passed to the total number of unified-format basic data. When the total pass rate is not more than 85%, the unified-format basic data is considered not to meet the basic requirements; it is discarded entirely, sub-step 1 and sub-step 2 are repeated, and new basic data is obtained. When the result of sub-step 4, i.e. the total pass rate, is greater than 85%, the unified-format basic data is considered to meet the use requirements and the next processing can be carried out.
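An illustrative sketch of this rotating hold-out check, under the simplifying assumptions that train_model and predict_labels stand in for the (unspecified) training and prediction routines and that each record carries its comprehensive neural activity index, facial expression and emotion tags:

```python
import random
from typing import Callable, List, Sequence, Tuple

def total_pass_rate(records: Sequence[dict],
                    train_model: Callable[[Sequence[dict]], object],
                    predict_labels: Callable[[object, dict], Tuple[int, int]],
                    ratio: int = 8) -> float:
    """Rotate every record through the test group once, train on the rest each time,
    and return the fraction of records whose predicted (arousal, valence) gears
    match their recorded emotion tags."""
    records = list(records)
    random.shuffle(records)
    fold_size = max(1, len(records) // (ratio + 1))   # e.g. an 8:1 learning:test split
    passed = 0
    for start in range(0, len(records), fold_size):
        test_group = records[start:start + fold_size]
        learning_group = records[:start] + records[start + fold_size:]
        model = train_model(learning_group)
        for rec in test_group:
            predicted = predict_labels(model, rec)    # (arousal gear, valence gear)
            if predicted == (rec["arousal"], rec["valence"]):
                passed += 1
    return passed / len(records)

# the data set is accepted only if total_pass_rate(...) > 0.85
```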
In a preferred embodiment, the obtaining of the available data in sub-step 4 comprises the sub-steps of:
Sub-step a: outlier data are removed for each model-parameter combination using a gradient method, so as to screen out a model with high ecological validity. Specifically, sub-steps 1-3 of sub-step 3 are repeated several times, and each time sub-step 1 is repeated a test group composed of different unified-format basic data is obtained, i.e. all the test groups differ; preferably, sub-steps 1-3 are repeated 8-10 times, so that each piece of unified-format basic data corresponds to several verification results, and the average pass rate corresponding to each piece of unified-format basic data is then calculated. The average pass rate of a piece of unified-format basic data is the ratio of the number of passed verifications among its verification results to the total number of its verification results.
Sub-step b: the 1 piece of unified-format basic data with the lowest average pass rate is found and hidden (if several pieces share the same lowest average pass rate, any one of them is hidden); hidden data take no part in any calculation until restored. Sub-steps 1-4 are then executed again with the remaining unified-format basic data, and whether the total pass rate improves compared with before the data was hidden is observed. If the total pass rate improves, the hidden unified-format basic data is deleted and sub-step c is executed; if the total pass rate does not improve, the hidden data is restored and the unified-format basic data with the second-lowest average pass rate is selected and hidden (again, if several pieces share the same lowest average pass rate, another of the lowest-pass-rate pieces can be chosen); this process is repeated until the total pass rate improves.
Sub-step c: sub-step a and sub-step b are repeated on the unified-format basic data remaining after the total pass rate has risen, and this continues on the currently remaining unified-format basic data until the total pass rate reaches 90% or more, preferably 92% or more, or until the deleted unified-format basic data reaches 30% of the total unified-format basic data; the remaining unified-format basic data is then the available data.
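The greedy elimination loop of sub-steps a-c could be sketched as follows; this is illustrative only, and average_pass_rates and total_pass_rate stand for the procedures described above and are assumed helpers.

```python
from typing import Callable, Dict, List

def select_available_data(records: List[dict],
                          total_pass_rate: Callable[[List[dict]], float],
                          average_pass_rates: Callable[[List[dict]], Dict[int, float]],
                          target: float = 0.90,
                          max_deleted_fraction: float = 0.30) -> List[dict]:
    """Greedily drop the record with the lowest average pass rate whenever doing so
    raises the total pass rate, until the target pass rate is reached or 30% of the
    records have been deleted."""
    remaining = list(records)
    deleted = 0
    limit = int(max_deleted_fraction * len(records))
    current = total_pass_rate(remaining)
    while current < target and deleted < limit:
        rates = average_pass_rates(remaining)             # index -> average pass rate
        # try candidates starting from the worst average pass rate
        for idx in sorted(rates, key=rates.get):
            trial = [r for i, r in enumerate(remaining) if i != idx]
            trial_rate = total_pass_rate(trial)
            if trial_rate > current:                      # deletion helped: keep it
                remaining, current = trial, trial_rate
                deleted += 1
                break
        else:
            break                                         # no single deletion helps any more
    return remaining
```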
Preferably, the model in sub-step 2 comprises a plurality of supervised learning models, and the training process involves the joint judgment of several supervised models; the specific training process includes, but is not limited to, training with linear regression, support vector machines, gradient descent, naive Bayes classification, decision tree classification, AdaBoost, XGBoost, multi-layer neural networks and so on. Preferably, among the outputs of the three models consisting of a 3-4-layer multi-layer neural network, a C4.5 decision tree and XGBoost, the average of the 2 results that are closest to each other is used as the output value of each training run; that is, the combination of the 3-4-layer multi-layer neural network, the C4.5 decision tree and XGBoost is the most preferred model, i.e. the model with high ecological validity. In the present application, the neural network is preferably a one-dimensional convolutional neural network.
In sub-step 5, in the process of obtaining the emotion judgment model, the comprehensive neural activity index data, the facial expression data and the emotion arousal data in each piece of available data are spliced into a data segment and used as learning material, and the emotion arousal prediction model is obtained through machine learning;
the comprehensive neural activity index data, the facial expression data and the emotion valence data in each piece of available data are likewise spliced into a data segment and used as learning material, and the emotion valence prediction model is obtained through machine learning; the emotion judgment model comprises the emotion arousal prediction model and the emotion valence prediction model.
Preferably, in sub-step 5, during the learning of the emotion arousal prediction model and the emotion valence prediction model, the comprehensive neural activity index, the facial expression data and the tag data are used to train three models at the same time, namely a 3-4-layer neural network, a C4.5 decision tree and XGBoost, giving a multi-layer neural network model, a decision tree model and an XGBoost calculation module model; the combination of the three models is used as the emotion judgment model, whose output is the average of the two closest of the three model outputs. For example, for one set of data the three models give outputs of 8, 20 and 7; the outputs 7 and 8 are closest to each other, so the final model output is the average of 7 and 8, rounded down, i.e. 7.
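A minimal sketch of that output rule, assuming three numeric model outputs (the function name is illustrative):

```python
import math
from itertools import combinations

def fuse_outputs(a: float, b: float, c: float) -> int:
    """Return the floor of the mean of the two closest values among three model outputs,
    as in the example where (8, 20, 7) yields floor((7 + 8) / 2) = 7."""
    x, y = min(combinations((a, b, c), 2), key=lambda p: abs(p[0] - p[1]))
    return math.floor((x + y) / 2)

# fuse_outputs(8, 20, 7) == 7
```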
In a preferred embodiment, in sub-steps 1-5, 1000 participants in each age group are recruited and tracked, and tracking data are obtained for 2 weeks to 2 months. The physiological data of the participants come from wearable devices such as smart watches and scanning sensors, the facial expression data come from cameras, and the scoring data come from the participants' daily self-ratings. Physiological data are tracked continuously around the clock by acquiring 90 seconds of data every 10 minutes. For the scoring data of the emotion wake-up tags and emotion valence tags, the participants are asked to rate their own degree of arousal and emotion valence at least 3 times per day, and a facial expression photograph is taken by the camera each time the emotion wake-up tag and emotion valence tag are filled in.
In a preferred embodiment, in step 4, the emotion judgment model obtained as described above includes an emotion arousal (wake-up) prediction model and an emotion valence prediction model. The heart beat intervals and facial expressions in the storage unit are input into these two models to obtain the corresponding emotion arousal and emotion valence.
Specifically, the cardiac beat interval (RRI) is first converted into a sympathetic output and a parasympathetic output that characterize the comprehensive neural activity:
Using the Laguerre-function recursion, the dependent variable is the latest RRI and the independent variables are 8 decomposition terms X of the Laguerre recursion, each decomposition term consisting of an unknown coefficient G, an inferred coefficient φ and an RRI value; the overall estimation expression is given by formula (1):
where S denotes the upper bound of j, the order of the Laguerre expansion, which determines how many past RRIs are used to fit the expression (the more orders, the more accurate the result; preferably 9 are used); j denotes the order of the orthogonal discrete-time Laguerre function; G(j, t) denotes the coefficient matrix obtained by combining the j-th order Laguerre polynomial with the RRI interval times within the time range t, the coefficients in the matrix being the coefficients of each included RRI, so that several RRIs are integrated into one recursive Laguerre polynomial, the latest RRI is fitted from the past RRIs, and a recursive relation is formed among them; F(t) denotes the position ordinal of a specific interval in the calculated sequence of adjacent cardiac beat intervals; n denotes the sequence number of the RRI traced back from this RRI; RR_{F(t)-n} denotes any such RRI, obtained recursively through the Laguerre polynomial; φ_j denotes the j-th order orthogonal discrete-time Laguerre function, obtained from formula (2);
α is a constant with the value 0.2;
Starting from the latest RRI, 8 RRIs are taken backwards in time to form an RRI combination, which is expressed as the sum of the decomposition terms X_i for i ∈ 0-2 plus the terms X_i for i ∈ 3-8. The 8 unknown coefficients G are found by Kalman autoregression. The sums Σ_{i∈0-2} N_i·G_i and Σ_{i∈3-8} N_i·G_i then represent, respectively, the sympathetic and parasympathetic output values of the comprehensive neural activity index. The coefficients N used with them are the constants 39, 10, -5, 28, -17,6, 12,6, -7, -6, -4, respectively.
The comprehensive neural activity index and the facial expression are input into an emotion wake-up prediction model, and the comprehensive neural activity index and the facial expression are input into an emotion valence prediction model, and the following processing is respectively carried out in the two models:
the emotion wake-up prediction model comprises a multi-layer neural network model with a 3-4 layer structure, a C4.5 decision tree model and an XGBoost calculation module model, after the emotion wake-up prediction model receives the comprehensive neural activity index and the facial expression, values respectively output by the multi-layer neural network model with the 3-4 layer structure, the C4.5 decision tree model and the XGBoost calculation module model are obtained, 2 relatively close values are selected from the three output values, and an average value of the two values is calculated to be used as an output result of the emotion wake-up model.
The emotion valence prediction model also comprises a multi-layer neural network model with a 3-4 layer structure, a C4.5 decision tree model and an XGBoost calculation module model, after the emotion valence prediction model receives the comprehensive neural activity index and the facial expression, the values respectively output by the multi-layer neural network model with the 3-4 layer structure, the C4.5 decision tree model and the XGBoost calculation module model are obtained, 2 relatively close values are selected from the three output values, and the average value of the two values is calculated to be used as the output result of the emotion valence prediction model.
And finally obtaining the corresponding emotion awakening degree and emotion valence degree, namely the emotion condition of the monitored person.
In a preferred embodiment, in step 5, the emotional condition of the monitored person is compared with the average emotion excitation degree value and the average emotion valence value.
When the monitored person is engaged in important, high-difficulty or high-intensity work,
alarm information is sent out when the emotion excitation degree is more than 1.5 standard deviations above the average emotion excitation degree value in the calm state,
or the emotion excitation degree is more than 1 standard deviation below the average emotion excitation degree value in the calm state,
or the emotion valence is more than 1.5 standard deviations below the average emotion valence value in the calm state.
The calm state is defined from a data set collected from at least 100 participants in a calm state, i.e. the set of all collected emotion values; the average value and standard deviation of this data set are calculated and used as the basis for judgment, and the critical value that triggers an alarm is set at the average value plus 1.5 or 1 standard deviations. In other words, among the emotion values output by the emotion judgment model, if the obtained emotion valence value is more than 1-1.5 standard deviations above or below the average emotion valence value, or the obtained emotion arousal value is more than 1-1.5 standard deviations above or below the average arousal value, whether to trigger an alarm is decided according to the circumstances of the particular personnel.
Important, high-difficulty or high-intensity work refers to work that must be carried out for long periods in dangerous environments or in working environments where danger may occur, such as work at height, driving, operating engineering machinery such as cranes, and the control and maintenance of important facilities such as power plants;
ordinary work refers to occupations of ordinary labour intensity that are not in high-risk or potentially dangerous environments, such as clerks, editors, service workers, students, teachers and librarians.
For such persons, alarm information is sent out when the emotion excitation degree is more than 1.5 standard deviations above the average emotion excitation degree value in the calm state,
or the emotion excitation degree is more than 1 standard deviation below the average emotion excitation degree value in the calm state,
or the emotion valence is more than 1.5 standard deviations below the average emotion valence value in the calm state.
When the emotion excitation degree of the monitored person is more than 2 standard deviations above the average emotion excitation degree value in the calm state and the emotion valence is more than 2 standard deviations below the average emotion valence value in the calm state, the monitored person is regarded as a potential threat to public safety and alarm information is sent out. The potential threat mainly concerns persons in public places, such as bus drivers, crane and excavator drivers, and passengers in railway stations or airport waiting halls, who may endanger the safety of others when in an over-stressed emotional state.
The average emotion excitation degree value is the intermediate value representing calm in the emotion wake-up tag of sub-step 2, and the average emotion valence value is the intermediate value representing calm in the emotion valence tag of sub-step 2.
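A rough sketch of the alarm rule as described above; the function and its inputs are illustrative assumptions, and the baseline mean and standard deviation values would come from the calm-state data set.

```python
from typing import Tuple

def should_alarm(arousal: float, valence: float,
                 arousal_mean: float, arousal_sd: float,
                 valence_mean: float, valence_sd: float) -> Tuple[bool, bool]:
    """Return (work_alarm, public_safety_alarm) using the calm-state baselines.

    The text states the same thresholds for both important and ordinary work: arousal
    more than 1.5 SD above or 1 SD below its calm-state mean, or valence more than
    1.5 SD below its calm-state mean; the public-safety alarm requires arousal more
    than 2 SD above and valence more than 2 SD below the respective means."""
    work_alarm = (
        arousal > arousal_mean + 1.5 * arousal_sd
        or arousal < arousal_mean - 1.0 * arousal_sd
        or valence < valence_mean - 1.5 * valence_sd
    )
    # public-safety rule: very high arousal combined with very negative valence
    public_safety_alarm = (
        arousal > arousal_mean + 2.0 * arousal_sd
        and valence < valence_mean - 2.0 * valence_sd
    )
    return work_alarm, public_safety_alarm
```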
Examples
To establish the emotion judgment model, 100 participants are selected and all of them are tracked continuously for 1 month. Each participant stays within the shooting range of a camera for at least 12 hours a day, of which the face can be captured by the camera for at least 8 hours. The facial expression and the brightness change of the hairless part of the face are obtained through the camera in real time; the brightness change is converted in real time into beat-to-beat heart interval data, which are converted into the activity index of the sympathetic nerve and the activity index of the parasympathetic nerve. In addition, the participants record the emotion excitation degree in the emotion wake-up tag and the emotion valence in the emotion valence tag 3 times a day, the tags containing 10 value gears: the average emotion excitation degree and emotion valence of the morning, of the afternoon and of the evening are each recorded once per day.
In total 610820 pieces of RRI data and 415128 pieces of facial expression data are obtained, and the RRI data are converted into activity indexes of the sympathetic nerve and activity indexes of the parasympathetic nerve; 9000 records containing emotion wake-up tags and emotion valence tags are also collected. Each piece of emotion tag data is combined with the several pieces of comprehensive neural activity index data corresponding to it to form one piece of basic data, giving 9000 pieces of basic data in total.
To obtain the total pass rate, all 9000 pieces of basic data are randomly divided into 9 parts, one part serving as the test group and the rest as the learning group; the model is trained on the learning group and verified with the data in the test group, giving a verification result for each piece of test-group data. Another part is then used as the test group and the above steps are repeated, 9 repetitions in total, so that over the whole cycle every piece of data is assigned to the test group once and obtains its corresponding verification result. The total pass rate is 88%, higher than 85%, so the next processing can be carried out.
Abnormal data are then eliminated from the basic data to obtain the available data; specifically,
to calculate the average pass rate, all the basic data are again divided into 9 parts, one part serving as the test group and the rest as the learning group; the model is trained on the learning group and verified with the data in the test group to obtain a verification result for each piece of data. The test group and learning group are then reassigned and the above process is repeated at least 81 times, ensuring that each piece of basic data is placed in the test group at least 9 times, i.e. that each piece of basic data obtains 9 corresponding verification results, from which the average pass rate of each piece of basic data is obtained.
The 1 piece of basic data with the lowest average pass rate is found and hidden, the process of calculating the average pass rate and the total pass rate is executed again with the remaining 8999 pieces of basic data, and whether the total pass rate improves compared with before hiding the data is observed; if the total pass rate improves, the hidden basic data is deleted. If the total pass rate does not improve, the hidden data is restored, the basic data with the second-lowest average pass rate is selected and hidden, and the process of obtaining the total pass rate is repeated until the total pass rate improves.
After the total pass rate rises, the hidden data is deleted and the elimination process continues on the remaining basic data: the average pass rate corresponding to each piece of basic data is calculated, the data with the lowest average pass rate is found and hidden, the total pass rate is calculated on that basis, and this procedure is repeated on the remaining basic data.
The process stops when the deleted data reach 2700 pieces, and the remaining data are the available data.
An emotion arousal prediction model and an emotion valence prediction model are obtained from the available data; specifically,
a one-dimensional convolutional neural network is trained with the available data to obtain a one-dimensional convolutional neural network model, a C4.5 decision tree is trained with the available data to obtain a C4.5 decision tree model, and an XGBoost calculation module is trained with the available data to obtain an XGBoost calculation module model; the three models are combined to form the emotion judgment model. When the emotion judgment model receives new comprehensive neural activity indexes and facial expression information, the received information is copied into 3 parts and passed respectively to the one-dimensional convolutional neural network model, the C4.5 decision tree model and the XGBoost calculation module model, and the output value of the emotion judgment model is the average of the 2 closest values among the 3 model outputs. In this way the emotion arousal prediction model and the emotion valence prediction model, and thus the emotion judgment model, are obtained.
Selecting 50 monitored personnel working in a main control room of the nuclear power station, and shooting in real time through a camera to obtain an image picture containing the face of the monitored personnel;
distinguishing identity information of each monitored person in the image photo through face recognition, and setting a corresponding independent storage unit for each monitored person;
obtaining heart beat intervals and facial expressions of a monitored person by reading the image photos, and storing the heart beat intervals and the facial expressions in an independent storage unit;
Inputting the heart beat interval and facial expression into an emotion judgment model in real time to judge the emotion condition of the monitored person;
after a duration of 12 hours, the 50 monitored persons were presented with emotional changes over 12 hours, the emotional condition of one of which is shown in fig. 2 and 3; fig. 2 shows a change curve of the emotion arousal degree of the monitored person, wherein the abscissa in the graph represents time, the ordinate represents the emotion arousal degree, and the higher the numerical value is, the stronger the arousal degree is; in fig. 2, the middle broken line represents the average emotion threshold value;
FIG. 3 shows a graph of the change in emotional titer of the monitored person, wherein the abscissa of the graph represents time and the ordinate represents emotional titer, and a higher value represents greater emotional titer; in fig. 3, a dotted line located in the middle represents an average emotional value;
the emotion excitation degree of 50 monitored persons is always in the range of 0.7 standard deviation of the average excitation degree value, the emotion titer is kept in the range of 1 standard deviation of the average emotion effect value in a calm state, and alarm information does not need to be sent.
The monitored persons were then asked to self-rate their emotional changes over the 14-hour period; Fig. 4 shows the self-rating of one monitored person, where the ratings were given through a slider interface. The specific rating scheme was as follows: within 6 hours after getting up, an emotional state rating for the morning period is given, i.e., a rating covering the period from getting up to the first rating; between 6 and 10 hours after getting up, an emotional state rating for the afternoon period is given, i.e., a rating covering the period from the first rating to the second rating; and from 10 hours after getting up until bedtime, an emotional state rating for the evening period is given, i.e., a rating covering the period from the second rating to the third rating.
The self-rated emotional conditions of the 50 monitored persons were then collected and compared with the emotional conditions obtained by the emotion judgment model; the matching rate reached 85%.
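The matching rate can be read as simple per-period agreement between self-ratings and model output; a toy sketch follows, in which the category labels and the comparison granularity are assumptions made only for illustration.

def matching_rate(self_reports, model_outputs):
    # Both arguments are equal-length lists of per-period emotion categories
    # (e.g., one entry per morning/afternoon/evening rating window).
    matched = sum(1 for s, m in zip(self_reports, model_outputs) if s == m)
    return matched / len(self_reports)

print(matching_rate(["calm", "calm", "tense"], ["calm", "calm", "calm"]))  # 0.666...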
These results show that the camera-based emotion early warning method provided by the present application can judge the emotional changes of monitored persons promptly and accurately.
The application has been described above with reference to preferred embodiments, which are, however, exemplary and serve only an illustrative purpose. Various substitutions and improvements may be made on this basis, and all of them fall within the protection scope of the application.

Claims (5)

1. A camera-based emotion early warning method, characterized by comprising the following steps:
step 1, shooting in real time through a camera to obtain image photos containing the face of a monitored person;
step 2, distinguishing the identity information of each monitored person in the image photo through face recognition, and setting a corresponding independent storage unit for each monitored person;
step 3, obtaining heart beat intervals and facial expressions of the monitored person by reading the image photos, and storing the heart beat intervals and the facial expressions in an independent storage unit;
step 4, inputting the heart beat interval and the facial expression into an emotion judgment model in real time to judge the emotion condition of the monitored person;
step 5, when the emotion of the monitored person is in an early warning state, sending out alarm information;
said step 3 comprises the following sub-steps:
sub-step a, screening the image photos obtained in step 1 and deleting the image photos that cannot be used for reading the heart beat interval;
sub-step b, searching, by means of relative geometric relations, for a hairless part of the face in the remaining image photos to serve as the detection region;
sub-step c, calculating the change in image brightness of the detection region across consecutive image photos to obtain a continuous array,
the array being a curve describing the person's heartbeat activity, in which moments of higher average brightness correspond to diastole, i.e., an electrocardiographic trough, and moments of lower average brightness correspond to systole, i.e., an electrocardiographic peak; the time interval between two successive peaks is the heart beat interval (an illustrative sketch of this computation is given after the claims);
the emotion judgment model is obtained through the following sub-steps:
sub-step 1, collecting physiological data and facial expressions by a collecting device, the physiological data comprising heart beat intervals, and converting the physiological data into sympathetic nerve activity indicators and parasympathetic nerve activity indicators;
sub-step 2, setting an emotion arousal tag and an emotion valence tag, recording the specific emotion arousal in the emotion arousal tag and the specific emotion valence in the emotion valence tag, and combining the comprehensive neural activity index data, the facial expression data and the emotion tags as basic data;
sub-step 3, adjusting the format of the basic data to obtain basic data in a unified format, and judging whether the basic data in the unified format meets the requirements;
sub-step 4, selecting available data from the basic data in the unified format that meets the requirements;
sub-step 5, obtaining the emotion judgment model from the available data in sub-step 4;
each comprehensive neural activity indicator includes one or more of the following data: a sympathetic nerve activity index, a parasympathetic nerve activity index, the quotient of the sympathetic nerve activity index and the parasympathetic nerve activity index, the sum of the sympathetic nerve activity index and the parasympathetic nerve activity index, and the difference between the sympathetic nerve activity index and the parasympathetic nerve activity index;
in sub-step 5, the emotion judgment model includes an emotion arousal prediction model and an emotion valence prediction model;
in the process of obtaining the emotion judgment model, the comprehensive neural activity index data, the facial expression data and the emotion arousal data in each item of available data are spliced into a data segment, which is used as learning material to obtain the emotion arousal prediction model through machine learning;
and the comprehensive neural activity index data, the facial expression data and the emotion valence data in each item of available data are spliced into a data segment, which is used as learning material to obtain the emotion valence prediction model through machine learning.
2. The camera-based emotion early warning method as recited in claim 1, characterized in that
in sub-step 3, judging whether the basic data in the unified format meets the requirements comprises the following sub-steps:
sub-step 1, randomly dividing all the basic data in the unified format into a learning group and a check group according to a preset proportion;
sub-step 2, training a model with the data in the learning group, verifying the model one by one with each item of data in the check group, and recording the verification result of each item of data in the check group;
sub-step 3, repeating sub-step 1 and sub-step 2, wherein basic data in the unified format that has once been assigned to the check group is not assigned to the check group again, ensuring that every item of basic data in the unified format is used in a check group to verify a model trained with the data in the corresponding learning group, until verification results corresponding to all the basic data in the unified format are obtained;
sub-step 4, calculating the total pass rate of the verification results of the basic data in the unified format; when the total pass rate is greater than 85%, the basic data in the unified format meets the requirements; otherwise, the basic data in the unified format is deleted and sub-step 1 and sub-step 2 are repeated (an illustrative sketch of this check is given after the claims).
3. The camera-based emotion early warning method as recited in claim 2, characterized in that
obtaining the available data in sub-step 4 comprises the following sub-steps:
sub-step a, repeating sub-steps 1-3 a plurality of times, a check group consisting of different items of basic data in the unified format being obtained each time sub-step 1 is repeated, so that each item of basic data in the unified format corresponds to a plurality of verification results; the average pass rate corresponding to each item of basic data in the unified format is then calculated;
sub-step b, finding and hiding the one item of basic data in the unified format with the lowest average pass rate, executing sub-steps 1-4 again with the remaining basic data in the unified format, and observing whether the total pass rate is improved compared with that before the data was hidden; if the total pass rate is improved, deleting the hidden basic data in the unified format and executing sub-step c; if the total pass rate is not improved, restoring the hidden data, selecting and hiding the item of basic data in the unified format with the second lowest average pass rate, and repeating the above process until the total pass rate is improved;
sub-step c, after the total pass rate rises, continuing to repeat sub-step a and sub-step b based on the remaining basic data in the unified format until the total pass rate exceeds 90% or the deleted basic data in the unified format reaches 30% of all the basic data in the unified format, at which point the remaining basic data in the unified format is the available data.
4. The camera-based emotion early warning method as recited in claim 1, characterized in that
the emotional condition in step 4 includes emotion arousal and emotion valence.
5. The camera-based emotion early warning method as recited in claim 1, characterized in that
in step 5, the emotional condition of the monitored person is compared with the average emotion arousal value and the average emotion valence value;
when the monitored person is engaged in important, difficult or intensive tasks,
alarm information is sent out when the emotion arousal is 1.5 standard deviations above the average arousal value in the calm state,
or the emotion arousal is 1 standard deviation below the average arousal value in the calm state,
or the emotion valence is 1.5 standard deviations below the average valence value in the calm state;
when the monitored person is engaged in ordinary work,
alarm information is sent out when the emotion arousal is 1.5 standard deviations above the average arousal value in the calm state,
or the emotion arousal is 1 standard deviation below the average arousal value in the calm state,
or the emotion valence is 1.5 standard deviations below the average valence value in the calm state;
when the emotion arousal of the monitored person is 2 standard deviations above the average arousal value in the calm state and the emotion valence is 2 standard deviations below the average valence value in the calm state, the monitored person poses a potential threat to public safety, and alarm information is sent out.
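For illustration of sub-steps b-c of claim 1, the sketch below estimates heart beat intervals from the mean brightness of a hairless facial region across consecutive frames; the use of SciPy, the 0.7-3.0 Hz band-pass, the minimum peak spacing and the frame rate are assumptions added here, while the mapping of lower brightness to systole follows the claim.

import numpy as np
from scipy.signal import butter, filtfilt, find_peaks

def beat_intervals(roi_frames, fps=30.0):
    # roi_frames: iterable of grayscale crops (2-D arrays) of the hairless detection region.
    brightness = np.array([roi.mean() for roi in roi_frames])   # one value per frame
    # Band-pass around plausible heart rates (0.7-3.0 Hz, i.e. 42-180 bpm) to suppress
    # slow illumination drift and high-frequency noise.
    b, a = butter(3, [0.7, 3.0], btype="bandpass", fs=fps)
    pulse = filtfilt(b, a, brightness)
    # Lower brightness marks systole (a peak of the cardiac waveform), so the signal
    # is inverted before peak detection.
    peaks, _ = find_peaks(-pulse, distance=int(fps / 3.0))      # at least 1/3 s between beats
    return np.diff(peaks) / fps                                  # heart beat intervals in seconds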
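Likewise, the check of claim 2 can be sketched as a fold-based evaluation in which every item of basic data enters the check group exactly once; the scikit-learn stand-in model and the number of splits are assumptions, while the 85% threshold follows sub-step 4 of that claim.

import numpy as np
from sklearn.model_selection import KFold
from sklearn.tree import DecisionTreeClassifier

def total_pass_rate(X, y, n_splits=5, seed=0):
    # Each sample is verified exactly once, in the one fold where it serves as check data.
    results = np.zeros(len(y), dtype=bool)
    splitter = KFold(n_splits=n_splits, shuffle=True, random_state=seed)
    for learn_idx, check_idx in splitter.split(X):
        model = DecisionTreeClassifier(random_state=seed).fit(X[learn_idx], y[learn_idx])
        results[check_idx] = model.predict(X[check_idx]) == y[check_idx]
    return results.mean()

def meets_requirements(X, y):
    # The unified-format basic data is acceptable when the total pass rate exceeds 85%.
    return total_pass_rate(X, y) > 0.85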
CN202110352232.3A 2021-03-31 2021-03-31 Emotion early warning method based on camera Active CN113143274B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110352232.3A CN113143274B (en) 2021-03-31 2021-03-31 Emotion early warning method based on camera

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202110352232.3A CN113143274B (en) 2021-03-31 2021-03-31 Emotion early warning method based on camera

Publications (2)

Publication Number Publication Date
CN113143274A CN113143274A (en) 2021-07-23
CN113143274B true CN113143274B (en) 2023-11-10

Family

ID=76886333

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110352232.3A Active CN113143274B (en) 2021-03-31 2021-03-31 Emotion early warning method based on camera

Country Status (1)

Country Link
CN (1) CN113143274B (en)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115316991B (en) * 2022-01-06 2024-02-27 中国科学院心理研究所 Self-adaptive recognition early warning method for irritation emotion
CN114407832A (en) * 2022-01-24 2022-04-29 中国第一汽车股份有限公司 Monitoring method for preventing vehicle body from being scratched and stolen, vehicle body controller and vehicle

Citations (14)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101095612A (en) * 2006-06-28 2008-01-02 株式会社东芝 Apparatus and method for monitoring biological information
JP2012059107A (en) * 2010-09-10 2012-03-22 Nec Corp Emotion estimation device, emotion estimation method and program
CN104112055A (en) * 2013-04-17 2014-10-22 深圳富泰宏精密工业有限公司 System and method for analyzing and displaying emotion
CN107506716A (en) * 2017-08-17 2017-12-22 华东师范大学 A kind of contactless real-time method for measuring heart rate based on video image
CN108882883A (en) * 2015-12-09 2018-11-23 安萨尔集团有限公司 Parasympathetic autonomic nerves system is measured to while sympathetic autonomic nerves system to independent activities, related and analysis method and system
CN109670406A (en) * 2018-11-25 2019-04-23 华南理工大学 A kind of contactless emotion identification method of combination heart rate and facial expression object game user
CN109890289A (en) * 2016-12-27 2019-06-14 欧姆龙株式会社 Mood estimates equipment, methods and procedures
CN110200640A (en) * 2019-05-14 2019-09-06 南京理工大学 Contactless Emotion identification method based on dual-modality sensor
CN110422174A (en) * 2018-04-26 2019-11-08 李尔公司 Biometric sensor is merged to classify to Vehicular occupant state
CN110621228A (en) * 2017-05-01 2019-12-27 三星电子株式会社 Determining emotions using camera-based sensing
CN111881812A (en) * 2020-07-24 2020-11-03 中国中医科学院针灸研究所 Multi-modal emotion analysis method and system based on deep learning for acupuncture
CN112220455A (en) * 2020-10-14 2021-01-15 深圳大学 Emotion recognition method and device based on video electroencephalogram signals and computer equipment
CN112263252A (en) * 2020-09-28 2021-01-26 贵州大学 PAD (PAD application aided differentiation) emotion dimension prediction method based on HRV (high resolution video) features and three-layer SVR (singular value representation)
CN112507959A (en) * 2020-12-21 2021-03-16 中国科学院心理研究所 Method for establishing emotion perception model based on individual face analysis in video

Family Cites Families (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20140221866A1 (en) * 2010-06-02 2014-08-07 Q-Tec Systems Llc Method and apparatus for monitoring emotional compatibility in online dating
TWI510216B (en) * 2013-04-15 2015-12-01 Chi Mei Comm Systems Inc System and method for displaying analysis of mood
US10285634B2 (en) * 2015-07-08 2019-05-14 Samsung Electronics Company, Ltd. Emotion evaluation
JP6985005B2 (en) * 2015-10-14 2021-12-22 パナソニック インテレクチュアル プロパティ コーポレーション オブ アメリカPanasonic Intellectual Property Corporation of America Emotion estimation method, emotion estimation device, and recording medium on which the program is recorded.
TW201801037A (en) * 2016-06-30 2018-01-01 泰金寶電通股份有限公司 Emotion analysis method and electronic apparatus thereof
JP7251392B2 (en) * 2019-08-01 2023-04-04 株式会社デンソー emotion estimation device

Patent Citations (14)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101095612A (en) * 2006-06-28 2008-01-02 株式会社东芝 Apparatus and method for monitoring biological information
JP2012059107A (en) * 2010-09-10 2012-03-22 Nec Corp Emotion estimation device, emotion estimation method and program
CN104112055A (en) * 2013-04-17 2014-10-22 深圳富泰宏精密工业有限公司 System and method for analyzing and displaying emotion
CN108882883A (en) * 2015-12-09 2018-11-23 安萨尔集团有限公司 Parasympathetic autonomic nerves system is measured to while sympathetic autonomic nerves system to independent activities, related and analysis method and system
CN109890289A (en) * 2016-12-27 2019-06-14 欧姆龙株式会社 Mood estimates equipment, methods and procedures
CN110621228A (en) * 2017-05-01 2019-12-27 三星电子株式会社 Determining emotions using camera-based sensing
CN107506716A (en) * 2017-08-17 2017-12-22 华东师范大学 A kind of contactless real-time method for measuring heart rate based on video image
CN110422174A (en) * 2018-04-26 2019-11-08 李尔公司 Biometric sensor is merged to classify to Vehicular occupant state
CN109670406A (en) * 2018-11-25 2019-04-23 华南理工大学 A kind of contactless emotion identification method of combination heart rate and facial expression object game user
CN110200640A (en) * 2019-05-14 2019-09-06 南京理工大学 Contactless Emotion identification method based on dual-modality sensor
CN111881812A (en) * 2020-07-24 2020-11-03 中国中医科学院针灸研究所 Multi-modal emotion analysis method and system based on deep learning for acupuncture
CN112263252A (en) * 2020-09-28 2021-01-26 贵州大学 PAD (PAD application aided differentiation) emotion dimension prediction method based on HRV (high resolution video) features and three-layer SVR (singular value representation)
CN112220455A (en) * 2020-10-14 2021-01-15 深圳大学 Emotion recognition method and device based on video electroencephalogram signals and computer equipment
CN112507959A (en) * 2020-12-21 2021-03-16 中国科学院心理研究所 Method for establishing emotion perception model based on individual face analysis in video

Non-Patent Citations (4)

* Cited by examiner, † Cited by third party
Title
Photoplethysmography based psychological stress detection with pulse rate variability feature differences and elastic net. International Journal of Distributed Sensor Networks, 2018, pp. 1-14. *
Kong Lulu. Research on driver anger based on the fusion of facial expression and pulse information. China Master's Theses Full-text Database, 2014, pp. I138-308. *
Li Changzhu, Zheng Shichun, Lu Suo, et al. A study on the relationship between heart rate variability and the neuroticism personality trait. Studies of Psychology and Behavior, 2020, pp. 275-280. *
Chen Ming. Introduction to Big Data Technology. Beijing: China Railway Publishing House, 2019, pp. 120-121. *

Also Published As

Publication number Publication date
CN113143274A (en) 2021-07-23

Similar Documents

Publication Publication Date Title
Haque et al. Deep multimodal pain recognition: a database and comparison of spatio-temporal visual modalities
Ngai et al. Emotion recognition based on convolutional neural networks and heterogeneous bio-signal data sources
US11065958B2 (en) Control system and method
Ramirez et al. Color analysis of facial skin: Detection of emotional state
CN110477925A (en) A kind of fall detection for home for the aged old man and method for early warning and system
CN113143274B (en) Emotion early warning method based on camera
EP2916724A1 (en) Method and device for determining vital parameters
Stemberger et al. Thermal imaging as a way to classify cognitive workload
DE112014006082T5 (en) Pulse wave measuring device, mobile device, medical equipment system and biological information communication system
US11928632B2 (en) Ocular system for deception detection
CN104331685A (en) Non-contact active calling method
US20210020295A1 (en) Physical function independence support device of physical function and method therefor
CN112489368A (en) Intelligent falling identification and detection alarm method and system
Mitsuhashi et al. Video-based stress level measurement using imaging photoplethysmography
CN108451494A (en) The method and system of time domain cardiac parameters are detected using pupillary reaction
CN110569968B (en) Method and system for evaluating entrepreneurship failure resilience based on electrophysiological signals
Trivedi Attention monitoring and hazard assessment with bio-sensing and vision: Empirical analysis utilizing CNNs on the kitti dataset
Nikolaiev et al. Non-contact video-based remote photoplethysmography for human stress detection
Vettivel et al. System for detecting student attention pertaining and alerting
Anumas et al. Driver fatigue monitoring system using video face images & physiological information
CN110600127A (en) Video acquisition and analysis system and method for realizing cognitive disorder screening function by video excitation of facial expressions
CN114098729B (en) Heart interval-based emotion state objective measurement method
CN113362951A (en) Human body infrared thermal structure attendance and health assessment and epidemic prevention early warning system and method
Zhou et al. End-to-end deep learning for stress recognition using remote photoplethysmography
Massoz Non-invasive, automatic, and real-time characterization of drowsiness based on eye closure dynamics

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
TA01 Transfer of patent application right

Effective date of registration: 20220311

Address after: 100101 courtyard 16, lincui Road, Chaoyang District, Beijing

Applicant after: INSTITUTE OF PSYCHOLOGY, CHINESE ACADEMY OF SCIENCES

Address before: 101400 3rd floor, 13 Yanqi street, Yanqi Economic Development Zone, Huairou District, Beijing

Applicant before: Beijing JingZhan Information Technology Co.,Ltd.

TA01 Transfer of patent application right
GR01 Patent grant
GR01 Patent grant