CN116311510A - Emotion detection method and system based on image acquisition - Google Patents
- Publication number
- CN116311510A (application CN202310217661.9A)
- Authority
- CN
- China
- Prior art keywords
- image
- emotion
- head
- face
- vibration
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data
- G06V40/20—Movements or behaviour, e.g. gesture recognition
-
- A—HUMAN NECESSITIES
- A61—MEDICAL OR VETERINARY SCIENCE; HYGIENE
- A61B—DIAGNOSIS; SURGERY; IDENTIFICATION
- A61B5/00—Measuring for diagnostic purposes; Identification of persons
- A61B5/16—Devices for psychotechnics; Testing reaction times ; Devices for evaluating the psychological state
- A61B5/165—Evaluating the state of mind, e.g. depression, anxiety
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data
- G06V40/10—Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
- G06V40/16—Human faces, e.g. facial parts, sketches or expressions
- G06V40/161—Detection; Localisation; Normalisation
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data
- G06V40/10—Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
- G06V40/16—Human faces, e.g. facial parts, sketches or expressions
- G06V40/168—Feature extraction; Face representation
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data
- G06V40/10—Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
- G06V40/16—Human faces, e.g. facial parts, sketches or expressions
- G06V40/174—Facial expression recognition
-
- Y—GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
- Y02—TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
- Y02D—CLIMATE CHANGE MITIGATION TECHNOLOGIES IN INFORMATION AND COMMUNICATION TECHNOLOGIES [ICT], I.E. INFORMATION AND COMMUNICATION TECHNOLOGIES AIMING AT THE REDUCTION OF THEIR OWN ENERGY USE
- Y02D10/00—Energy efficient computing, e.g. low power processors, power management or thermal management
Abstract
The invention discloses an emotion detection method and system based on image acquisition, wherein the emotion detection method comprises the following steps: acquiring a head image of a tested person; judging whether a feature-extraction condition is met; if so, extracting facial feature information from the head image, identifying the facial feature information, and acquiring and binding the tested person's identity information; within a preset time period, performing multiple rounds of analysis on the head vestibular image to obtain basic data at a plurality of different time points, and displaying the basic data as vibration images to obtain vibration images at the different time points; obtaining the head vibration frequency and head vibration amplitude at the different time points from the vibration images, and analyzing them to obtain a plurality of potential emotion indexes at the different time points; generating an emotion-characteristic norm model of the tested person from the potential emotion indexes to reflect the tested person's emotion; and obtaining the emotion-characteristic norm models of a plurality of tested persons, comparing and analyzing the group emotion, and screening out negative-emotion outliers.
Description
Technical Field
The invention relates to the technical field of emotion detection, in particular to an emotion detection method and system based on image acquisition.
Background
In real work and life, a person's emotion fluctuates under stress as the surrounding environment changes. Emotional fluctuation alters a person's reactions to events, so fluctuating emotion often interferes when a person's true psychological state must be judged in a particular environment; extreme emotional fluctuation, in particular, is usually accompanied by destructive negative effects.
Most current views regard emotion as "a natural, instinctive mental state, derived from one's circumstances, mood, or relationships with other people." This view ignores the driving force behind all positive, negative or neutral motivation, which is vital information for an agent that wants to recognize and understand emotion. Detecting and distinguishing emotions is very complex, and several decades ago emotion began to attract attention as a very important complement to modern technology. Imagine a world in which a machine can accurately sense a human's emotion and infer the human's needs: such computation would allow the machine to predict the human's further behavior and its consequences, thereby averting more serious situations.
Emotion detection carries great weight in further research on human-computer interaction, and the complexity of emotion makes the task of information collection harder. Existing emotion detection methods capture emotion through a single channel, such as facial expression alone or voice input alone, but because of the complexity of emotion, detection results obtained through a single channel are inaccurate. Moreover, real work generally relies on group cooperation, so if outliers with negative emotion could be detected within a group and given focused attention and work adjustment, the group's efficiency would improve greatly; yet existing emotion detection methods can only detect the emotion of individuals and cannot detect outliers within a group.
Disclosure of Invention
To overcome the defects of the prior art, the invention provides an emotion detection method and system based on image acquisition, which solve the technical problems that existing emotion detection methods yield inaccurate results and cannot screen out outliers in a population, thereby improving the accuracy of emotion detection results and effectively screening out outliers in a population.
To solve the above problems, the invention adopts the following technical scheme:
An emotion detection method based on image acquisition comprises the following steps:
acquiring a head image, captured by an image acquisition device, of the tested person sitting upright;
judging whether a feature-extraction condition is met according to the position, background color and surface illuminance of the tested person's head in the head-image picture;
if so, extracting facial feature information from the head image, identifying the tested person according to the facial feature information, and acquiring and binding the tested person's identity information;
within a preset time period, performing multiple rounds of analysis on the head vestibular image to obtain basic data at a plurality of different time points, and displaying the basic data at the different time points as vibration images to obtain vibration images at the different time points;
obtaining the head vibration frequency and head vibration amplitude at the different time points from the vibration images, and analyzing the head vibration frequency and head vibration amplitude to obtain a plurality of potential emotion indexes at the different time points;
generating an emotion-characteristic norm model of the tested person from the plurality of potential emotion indexes at the different time points, and reflecting the tested person's emotion through the norm model;
obtaining the emotion-characteristic norm models of a plurality of tested persons, comparing and analyzing the group emotion, and screening out negative-emotion outliers;
wherein generating the norm model comprises obtaining the fluctuation amplitude of the same index at the different time points and generating the emotion-characteristic norm model of the tested person from the fluctuation amplitudes together with the plurality of potential emotion indexes at the different time points.
In a preferred embodiment of the present invention, judging whether the feature-extraction condition is met comprises:
judging whether the tested person's head is at the center of the head-image picture;
judging whether the background color behind the tested person's head in the head-image picture is a solid color;
judging whether the head-capture distance of the tested person is within a preset distance range;
judging whether the surface illuminance of the tested person's face in the head-image picture is within a preset surface-illuminance range;
and, if all of the above conditions are satisfied, considering the feature-extraction condition met.
As a preferred embodiment of the present invention, after the tested person is identified according to the facial feature information, the method comprises:
judging whether the tested person is a registered person; if so, automatically extracting the person's information from a registration information base and binding it;
if not, prompting the tested person to register, and binding according to the person's information entered during registration;
after binding is completed, subsequently detected data are attributed to the corresponding bound person.
In a preferred embodiment of the present invention, judging whether the tested person is a registered person comprises:
obtaining a preset face recognition model by principal component analysis, and training the preset face recognition model on face recognition training samples to obtain a registered-person recognition model;
and identifying the facial feature information with the registered-person recognition model, determining from the recognition result whether a matching person exists in the registration information base, and, if so, considering the tested person a registered person.
In a preferred embodiment of the present invention, obtaining the registered-person recognition model comprises:
performing geometric normalization on the face images in the registration information base to obtain normalized face images, and stacking each normalized face image row-by-row into a column vector to form the group {Z_1, Z_2, Z_3, …} and the corresponding face matrix;
performing a K-L transform on the face matrix to obtain the average registered face, as shown in formula 1:
μ_Z = (1/H)·Σ_{i=1}^{H} Z_i (1);
where H is the number of normalized face images and Z_i is the corresponding face-image column vector in the group;
obtaining the covariance matrix of all face recognition training samples from the average registered face, as shown in formula 2:
C = (1/H)·Σ_{i=1}^{H} (Z_i − μ_Z)(Z_i − μ_Z)^T (2);
where T denotes the matrix transpose and μ_Z is the average registered face;
acquiring the difference vector between each face matrix Z_i and the average registered face μ_Z, as shown in formula 3:
E = (E_1, …, E_H) (3);
where E_i = Z_i − μ_Z;
orthonormalizing the difference vectors, i.e. taking the orthonormal eigenvectors u_k of the covariance matrix C as the basis of a feature subspace, gives formula 4:
C·u_k = λ_k·u_k, P = (u_1, …, u_{H−1}) (4);
projecting all face recognition training sample images into the feature subspace P to obtain the registered-person recognition model formed by the coordinate coefficients of each training sample image in P, as shown in formula 5:
y_i = P^T(Z_i − μ_Z) = P^T·E_i (i = 1, 2, …, H) (5);
where P ∈ R^{J×(H−1)}; Z_i, μ_Z, E_i ∈ R^J; y_i ∈ R^{H−1}; each normalized face image consists of h × j pixels, with J = h × j.
As a preferred embodiment of the present invention, identifying the facial feature information with the registered-person recognition model comprises:
projecting the face image in the facial feature information into the feature subspace P to obtain the corresponding coordinate coefficient, as shown in formula 6:
y_test = P^T(Z_test − μ_Z) (6);
where Z_test is the face image in the facial feature information;
and, according to the coordinate coefficient, determining in the registered-person recognition model whether a face image satisfying the minimum distance exists, using a Euclidean-distance objective function, as shown in formula 7; if such an image exists, the tested person is considered a registered person:
i* = argmin_i ‖y_test − y_i‖ (7);
in a preferred embodiment of the present invention, when obtaining vibration images at different time points, the method includes:
analyzing the motion data of each point of the head of the testee in the head vestibular image by using a vibration image technology in a preset time period, and determining the amplitude parameter of the head vibration and the frequency parameter of the head vibration at each point;
and obtaining vibration images at different time points according to the amplitude parameter of the head vibration and the frequency parameter of the head vibration.
In a preferred embodiment of the present invention, obtaining the vibration images at different time points further comprises:
obtaining the head-vibration amplitude image from the head-vibration amplitude parameter, as shown in formula 8:
A_{x,y} = (Σ_{i=1}^{M} |O_{x,y,i} − O_{x,y,(i+1)}|) / M (8);
obtaining the head-vibration frequency image from the head-vibration frequency parameter, as shown in formula 9:
where x, y are the coordinates of a point, O_{x,y,i} is the signal amplitude at point (x, y) in the i-th frame, O_{x,y,(i+1)} is the signal amplitude at point (x, y) in the (i+1)-th frame, M is the number of frames over which the amplitude component of the head vibration image is averaged, and G_in is the signal vibration frequency in the head vestibular image;
wherein the vibration image comprises the head-vibration amplitude image and the head-vibration frequency image.
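The amplitude computation of formula 8 can be sketched directly from its variable definitions: the per-pixel mean of absolute inter-frame differences over M frames. Since the source does not reproduce formula 9, the frequency image below uses a per-pixel FFT peak as one plausible reading, not the patent's actual formula; frames are assumed to be grey-scale NumPy arrays.

```python
import numpy as np

def vibration_amplitude_image(frames):
    """Formula 8: per-pixel mean absolute inter-frame difference.

    frames: array of shape (M+1, H, W), M+1 consecutive grey frames.
    Returns A[x, y] = (1/M) * sum_i |O_{x,y,i} - O_{x,y,i+1}|.
    """
    frames = np.asarray(frames, dtype=float)
    diffs = np.abs(np.diff(frames, axis=0))  # |O_{x,y,i+1} - O_{x,y,i}|
    return diffs.mean(axis=0)                # average over the M differences

def vibration_frequency_image(frames, fps):
    """Assumed stand-in for formula 9: dominant temporal frequency per
    pixel, taken as the peak of the mean-removed FFT magnitude."""
    frames = np.asarray(frames, dtype=float)
    spectrum = np.abs(np.fft.rfft(frames - frames.mean(axis=0), axis=0))
    freqs = np.fft.rfftfreq(frames.shape[0], d=1.0 / fps)
    return freqs[spectrum.argmax(axis=0)]
```

A perfectly still head gives a zero amplitude image; a pixel flickering every frame gives amplitude equal to the flicker step.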
As a preferred embodiment of the present invention, the plurality of potential emotion indexes at different time points comprise a negative-emotion index, a positive-emotion index, a physiological index, a response-sensitivity index and a response-degree index at the different time points.
An emotion detection system based on image acquisition comprises:
a video acquisition module for acquiring the head image, captured by the image acquisition device, of the tested person sitting upright;
an image analysis module for judging whether the feature-extraction condition is met according to the position, background color and surface illuminance of the tested person's head in the head-image picture; if so, extracting facial feature information from the head image, identifying the tested person according to the facial feature information, and acquiring and binding the tested person's identity information; and, within the preset time period, performing multiple rounds of analysis on the head vestibular image to obtain basic data at a plurality of different time points and displaying the basic data as vibration images to obtain vibration images at the different time points;
an emotion analysis module for obtaining the head vibration frequency and head vibration amplitude at the different time points from the vibration images, and analyzing the head vibration frequency and head vibration amplitude to obtain a plurality of potential emotion indexes at the different time points;
and a data integration module for generating the emotion-characteristic norm model of the tested person from the plurality of potential emotion indexes at the different time points, reflecting the tested person's emotion through the norm model, obtaining the norm models of a plurality of tested persons, comparing and analyzing the group emotion, and screening out negative-emotion outliers;
wherein the norm model is generated by obtaining the fluctuation amplitude of the same index at the different time points and combining the fluctuation amplitudes with the plurality of potential emotion indexes at the different time points.
Compared with the prior art, the invention has the following beneficial effects:
(1) On the basis of the real-time, objective personal emotion detection that vibration images provide, the invention introduces a tested-person management function, so that multiple measurement results of different tested persons can be distinguished and collected. Multiple measurements of a single tested person are compared with one another, and single measurements of different tested persons are compared with each other, so that an effective emotion file is built for the group in real time through multi-dimensional data comparison. This helps a manager screen out the persons in the group whose emotion is most negative, pre-judge the trend of individual emotion, and apply focused attention and work adjustment to the relevant persons, truly realizing safety management of group emotion;
(2) The invention also uses an image analysis method and environment compensation equipment to assist in obtaining high-quality head vibration images during emotion detection, improving the accuracy of the detection results.
The invention is described in further detail below with reference to the drawings and the detailed description.
Drawings
FIG. 1 is a step diagram of an emotion detection method based on image acquisition according to an embodiment of the present invention;
FIG. 2 is a composite score trend graph of an embodiment of the present invention;
FIG. 3 is a personal mood profile and personal mood variation profile of an embodiment of the present invention;
FIG. 4 is a personal emotion graph of an embodiment of the present invention;
FIG. 5 is a graph of personal emotion measurements of an embodiment of the present invention;
FIG. 6 is a personal emotion index radar graph of an embodiment of the present invention;
FIG. 7 is a population emotion profile of an embodiment of the present invention.
Detailed Description
The emotion detection method based on image acquisition provided by the invention, as shown in FIG. 1, comprises the following steps:
Step S1: acquiring a head image, captured by an image acquisition device, of the tested person sitting upright;
Step S2: judging whether the feature-extraction condition is met according to the position, background color and surface illuminance of the tested person's head in the head-image picture;
Step S3: if so, extracting facial feature information from the head image, identifying the tested person according to the facial feature information, and acquiring and binding the tested person's identity information;
Step S4: within a preset time period, performing multiple rounds of analysis on the head vestibular image to obtain basic data at a plurality of different time points, and displaying the basic data as vibration images to obtain vibration images at the different time points;
Step S5: obtaining the head vibration frequency and head vibration amplitude at the different time points from the vibration images, and analyzing the head vibration frequency and head vibration amplitude to obtain a plurality of potential emotion indexes at the different time points;
Step S6: generating an emotion-characteristic norm model of the tested person from the plurality of potential emotion indexes at the different time points, and reflecting the tested person's emotion through the norm model;
Step S7: obtaining the emotion-characteristic norm models of a plurality of tested persons, comparing and analyzing the group emotion, and screening out negative-emotion outliers;
wherein the norm model is generated by obtaining the fluctuation amplitude of the same index at the different time points and combining the fluctuation amplitudes with the plurality of potential emotion indexes at the different time points.
In step S1, when the head image is captured, the tested person must sit upright without leaning on the seat back or supporting the head with the arms, and the tested person's face should be kept level with the image acquisition device.
In step S2, judging whether the feature-extraction condition is met comprises:
judging whether the tested person's head is at the center of the head-image picture;
judging whether the background color behind the tested person's head in the head-image picture is a solid color;
judging whether the head-capture distance of the tested person is within the preset distance range;
judging whether the surface illuminance of the tested person's face in the head-image picture is within the preset surface-illuminance range;
and, if all of the above conditions are satisfied, considering the feature-extraction condition met.
Specifically, keeping the tested person's head at the center ensures that the image acquisition device captures the vestibular features of the tested person's head fully and completely, effectively avoiding inaccurate detection results caused by features missed during capture.
Specifically, a solid background color behind the tested person's head ensures that the image foreground contains only the tested person's head; if the background is cluttered, it can be screened off with an environment compensation device (a background curtain). When the head-capture distance is outside the preset range, the zoom function of the video acquisition module can be used to meet the detection requirement. The surface illuminance of the tested person's face should be between 400 lx and 800 lx; when the facial illuminance is detected to be insufficient, the video acquisition module or the environment compensation device (a fill light) can be adjusted directly so that the illuminance meets the requirement. The image acquisition device should remain stable while capturing the head image, the image should not shake, and the surroundings should remain quiet and free of interference throughout detection.
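The four checks of step S2 can be sketched as a single predicate. Only the 400-800 lx illuminance band comes from the text above; the centering tolerance, the background-variance cut-off standing in for "solid color", and the distance range are made-up example thresholds.

```python
def meets_extraction_conditions(head_box, frame_size, bg_variance,
                                distance_cm, face_lux):
    """Sketch of step S2. head_box is (x, y, w, h) of the detected head;
    all thresholds except the 400-800 lx band are illustrative."""
    fw, fh = frame_size
    cx = head_box[0] + head_box[2] / 2           # head-box center x
    cy = head_box[1] + head_box[3] / 2           # head-box center y
    centered = abs(cx - fw / 2) < fw * 0.1 and abs(cy - fh / 2) < fh * 0.1
    solid_background = bg_variance < 50.0        # low color variance ~ solid
    in_range = 40 <= distance_cm <= 120          # assumed capture distance, cm
    lit = 400 <= face_lux <= 800                 # illuminance band from text
    return centered and solid_background and in_range and lit
```

A head box centered in a 640x480 frame with a plain background, 60 cm distance and 600 lx passes; dropping the illuminance to 300 lx fails the check.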
In step S3, after the tested person is identified according to the facial feature information, the method comprises:
judging whether the tested person is a registered person; if so, automatically extracting the person's information from the registration information base and binding it;
if not, prompting the tested person to register, and binding according to the person's information entered during registration;
after binding is completed, subsequently detected data are attributed to the corresponding bound person.
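The binding logic of step S3 can be sketched as a small registry: a recognized identity is bound to stored person information, an unrecognized one triggers registration, and later detections are collected under the bound identity. All class and method names here are illustrative, not from the patent.

```python
class Registry:
    """Toy stand-in for the registration information base in step S3."""

    def __init__(self):
        self.people = {}    # identity key -> person information
        self.records = {}   # identity key -> collected detection data

    def bind(self, identity, info=None):
        # Registered person: bind directly. Unregistered: require info,
        # standing in for "prompting the tested person to register".
        if identity not in self.people:
            if info is None:
                raise KeyError("unregistered: prompt the subject to register")
            self.people[identity] = info
        return identity

    def collect(self, identity, measurement):
        # Attribute subsequently detected data to the bound person
        self.records.setdefault(identity, []).append(measurement)
```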
Further, judging whether the tested person is a registered person comprises:
obtaining a preset face recognition model by principal component analysis, and training the preset face recognition model on face recognition training samples to obtain the registered-person recognition model;
and identifying the facial feature information with the registered-person recognition model, determining from the recognition result whether a matching person exists in the registration information base, and, if so, considering the tested person a registered person.
Further, when the registered person identification model is obtained, the method includes:
performing geometric normalization processing on the face images in a registered information base to obtain normalized face images, and storing each normalized face image by rows as a column vector in the group {Z_1, Z_2, Z_3, …} to obtain the corresponding face matrix;
carrying out K-L transformation on the face matrix to obtain the average registered face, as shown in formula 1:
μ_Z = (1/H)·Σ_{i=1}^{H} Z_i (1);
wherein H is the number of normalized face images, h is the number of row pixel points of each normalized face image, and Z_i is the corresponding face image in the column vector group;
obtaining the covariance matrix of all face recognition training samples from the average registered face, as shown in formula 2:
C = (1/H)·Σ_{i=1}^{H} (Z_i − μ_Z)(Z_i − μ_Z)^T (2);
wherein T denotes the matrix transpose and μ_Z is the average registered face;
acquiring the difference vector between each face matrix Z_i and the average registered face μ_Z, as shown in formula 3:
E = (E_1, …, E_n) (3);
wherein E_i = Z_i − μ_Z;
performing orthonormalization on the difference vectors to obtain the feature subspace P, as shown in formula 4:
projecting all face recognition training sample images into the feature subspace P to obtain the registrant recognition model formed by the coordinate coefficients of each face recognition training sample image in the feature subspace P, as shown in formula 5:
y_i = P^T(Z_i − μ_Z) = P^T·E_i (i = 1, 2, …, H) (5);
wherein P ∈ R^{J×(H−1)}, Z_i, μ_Z, E_i ∈ R^J, and y_i ∈ R^{H−1}.
Each normalized face image is composed of h × j pixel points.
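The eigenface-style construction of formulas (1)-(5) can be sketched with NumPy as follows. The random matrix stands in for a real registered-face database, and obtaining the orthonormal subspace P via SVD of the difference vectors is an assumption, since formula 4 is not reproduced in the text.

```python
import numpy as np

# Sketch of the PCA training steps: average face (1), difference vectors (3),
# orthonormal subspace P (assumed: left singular vectors of E), and the
# coordinate coefficients y_i (5). Dimensions follow the text: H images of
# h x j = J pixels, P in R^{J x (H-1)}.
rng = np.random.default_rng(0)
H, h, j = 6, 8, 8                 # H face images of h x j pixels (toy sizes)
J = h * j
Z = rng.random((J, H))            # column i is the i-th normalized face image

mu_Z = Z.mean(axis=1, keepdims=True)          # average registered face (1)
E = Z - mu_Z                                  # difference vectors (3)
# Orthonormal basis spanning the difference vectors; keep H-1 components.
U, s, _ = np.linalg.svd(E, full_matrices=False)
P = U[:, :H - 1]                              # feature subspace, J x (H-1)
Y = P.T @ E                                   # coordinate coefficients (5)
```

`Y` holds one coefficient column per registered face; together with `P` and `mu_Z` it forms the recognition model used at matching time.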
Further, when the face feature information is identified by using the registrant identification model, the method includes:
projecting the face image in the face feature information into the feature subspace P to obtain the corresponding coordinate coefficient, as shown in formula 6:
y_test = P^T(Z_test − μ_Z) (6);
wherein Z_test is the face image in the face feature information;
according to the coordinate coefficient, determining in the registered person identification model whether a face image satisfying the minimum distance exists, using a Euclidean-distance objective function; if such an image exists, the tested person is considered a registered person, as shown in formula 7:
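A minimal sketch of the matching step (formulas 6 and 7), assuming the subspace `P`, average face `mu_Z` and gallery coefficients `Y` come from the training stage; the acceptance threshold is hypothetical, as the patent does not state one.

```python
import numpy as np

# Project a probe face into the subspace (6) and accept the nearest gallery
# face by Euclidean distance (7) when it falls below a threshold.
rng = np.random.default_rng(1)
J, H = 64, 6
P = np.linalg.qr(rng.random((J, H - 1)))[0]    # stand-in orthonormal subspace
mu_Z = rng.random((J, 1))                      # stand-in average face
Z = rng.random((J, H))                         # stand-in registered faces
Y = P.T @ (Z - mu_Z)                           # gallery coefficients

def match(z_test, threshold=10.0):
    y_test = P.T @ (z_test - mu_Z)             # formula (6)
    d = np.linalg.norm(Y - y_test, axis=0)     # Euclidean distances (7)
    i = int(np.argmin(d))
    return (i, float(d[i])) if d[i] <= threshold else (None, float(d[i]))

# A probe identical to registered face 2 matches it at distance ~0.
idx, dist = match(Z[:, [2]])
```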
in the step S4, when obtaining the vibration image at different time points, the method includes:
in a preset time period, analyzing the motion data of each point of the head of the tested person in the head vestibular image by using a vibration image technology, and determining the amplitude parameter of the head vibration and the frequency parameter of the head vibration at each point;
and obtaining vibration images at different time points according to the amplitude parameter of the head vibration and the frequency parameter of the head vibration.
Further, when obtaining the vibration image at different time points, the method further comprises:
obtaining a head vibration amplitude image according to the amplitude parameter of the head vibration, as shown in formula 8:
A_{x,y} = (1/M)·Σ_{i=1}^{M} |O_{x,y,(i+1)} − O_{x,y,i}| (8);
obtaining a head vibration frequency image according to the frequency parameter of the head vibration, as shown in formula 9:
wherein x, y are the coordinates of a point, O_{x,y,i} is the signal amplitude at point (x, y) in the i-th frame, O_{x,y,(i+1)} is the signal amplitude at point (x, y) in the (i+1)-th frame, M is the number of frames over which the amplitude components of the head vibration image are averaged, and G_in is the signal vibration frequency in the head vestibular image;
the vibration image comprises the head vibration amplitude image and the head vibration frequency image.
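A sketch of computing the two vibration images from a short frame sequence. The amplitude follows the averaged inter-frame difference of formula 8; the frequency estimate (the fraction of changing frames scaled by the input rate G_in) is an assumed stand-in, since formula 9 is not reproduced in the text.

```python
import numpy as np

def vibration_images(frames, g_in):
    """frames: sequence of M+1 grayscale frames; g_in: input frame rate."""
    frames = np.asarray(frames, dtype=float)   # shape (M+1, height, width)
    diff = np.diff(frames, axis=0)             # O_{x,y,i+1} - O_{x,y,i}
    m = diff.shape[0]
    amplitude = np.abs(diff).sum(axis=0) / m   # amplitude component (8)
    # Hypothetical frequency estimate: fraction of frames whose per-pixel
    # difference exceeds a small threshold, scaled by the input frequency.
    frequency = (np.abs(diff) > 1e-6).sum(axis=0) / m * g_in
    return amplitude, frequency

# Two pixels: one static, one alternating between 0 and 2 on every frame.
frames = [[[0, 0]], [[0, 2]], [[0, 0]], [[0, 2]], [[0, 0]]]
amp, freq = vibration_images(frames, g_in=25)
```

The static pixel yields zero amplitude and frequency; the alternating pixel shows the maximum values, mirroring how a vibrating head region lights up in the vibration image.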
In the step S4, the vibration image is generated on the basis of the Vestibular Emotional Reflex (VER) theory, which holds that there is a direct connection between a person's emotions and body movements: when a person's emotions fluctuate, an uncontrollable stress response of the body accompanies them, causing micro-vibrations of the head and neck. When a person is anxious, the vestibular emotional reflex, which cannot be consciously controlled, restores the balance required by the conservation of the body's thermodynamic energy through fine vibrations (physical work) of the head and body. The changes in reflected light produced by these skin micro-vibrations can be observed and recorded by a modern video acquisition module to generate vibration images.
A vibration image is an image representing the spatial and temporal parameters of an object's motion and vibration; each vibration image is computed from the average rate of change of the video image at every point over a period of time, and is an informational and probabilistic display of the thermodynamic processes of a person in a quasi-static equilibrium state. The width of the vibration image represents the vibration amplitude: the wider the image, the larger the amplitude, and the narrower, the smaller. Color represents frequency: the lower the frequency, the colder the hue, i.e. the vibration image tends toward blue and purple; the higher the frequency, the warmer the hue, i.e. the image tends toward yellow and red.
The vibration image can reveal real physical and psycho-physiological phenomena; after a large number of human experiments, comparisons and statistical analyses, emotional states such as tension and stress can be measured quantitatively, and the vibration pattern generated by imaging the vibration rate of skin pixels establishes a correlation map of a person's latent emotional reflexes. According to VER theory, when a person is calm and emotionally stable, the vibration frequency of the head and face lies between 1 and 5 Hz, and the image output by the emotion analysis module displays blue tones. If aggressiveness is strong or the person is anxious and stressed, the vibration frequency rises to between 5 and 10 Hz, the output image displays yellow-to-red tones, the vibration frequency switches rapidly, and the amplitude also increases.
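The frequency-to-color convention described above can be sketched as a simple lookup; the exact palette boundaries are illustrative, not taken from the patent.

```python
# Map a head-vibration frequency (Hz) to the coarse hue described in the
# text: cool hues for the calm 1-5 Hz band, warm hues for the agitated
# 5-10 Hz band. The labels and cut-offs are an illustrative convention.

def frequency_to_hue(freq_hz):
    """Return a coarse color label for a head-vibration frequency in Hz."""
    if freq_hz <= 5:
        return "blue"        # calm, emotionally stable state
    if freq_hz <= 10:
        return "yellow-red"  # aggression, anxiety or stress
    return "red"             # above the range discussed in the text
```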
In the step S5, when obtaining a plurality of potential emotion indexes at different time points, the method includes:
and obtaining the negative emotion index, the positive emotion index, the physiological index, the response sensitivity index and the response degree index at different time points.
Specifically, the measurement ranges of all indexes lie within the interval 0-100. The reference ranges of the indexes are defined on the basis of big-data sample statistics; a measured value within the reference range generally indicates a normal state. A single index, however, does not represent the overall state, and a comprehensive judgment must be made according to the correlations among the indexes.
Further, when obtaining the negative emotion indexes at different time points, the method comprises the following steps:
obtaining the aggression feature of the tested person, which represents the degree of impulsiveness, irritability and quickness to anger of the tested person; the higher its value, the more obvious the aggressive traits of the tested person, as shown in formula 10:
wherein G_k is the maximum frequency count in the frequency-distribution density histogram, G_o is the count of the o-th frequency in the histogram obtained over N frames, G_oj is the vibration-image processing frequency, and j is the number of frames among the N whose frame difference exceeds a threshold;
obtaining the stress feature of the tested person, which represents the degree of pressure the tested person bears; the higher its value, the greater the pressure borne by the tested person, as shown in formula 11:
wherein the quantities in formula 11 are, respectively: the total vibration-frequency amplitude of the i-th line on the left side of the object; the total vibration-frequency amplitude of the i-th line on the right side of the object; the maximum of these two totals; the maximum vibration frequency of the i-th line on the left side; the maximum vibration frequency of the i-th line on the right side; the maximum of those two values; and n, the effective number of lines in the object image;
obtaining the anxiety feature of the tested person, which represents a state of tension, worry, unease or apprehension; the higher its value, the greater the degree of anxiety of the tested person, as shown in formula 12:
wherein Q_i(g) is the spectral power distribution of the vibration-image frequency and g_max is the maximum frequency in the vibration-image frequency-distribution spectrum;
obtaining the suspicious feature of the tested person, defined as the mean of the sum of the aggression feature, the stress feature and the anxiety feature, and used to represent the level of negative emotion of the tested person, as shown in formula 13:
R4 = (R1 + R2 + R3) / 3 (13);
wherein R1 is the aggression feature, R2 is the stress feature, and R3 is the anxiety feature;
wherein the negative emotion index comprises the aggression feature, the stress feature, the anxiety feature and the suspicious feature.
Specifically, the stress may be mental stress, life stress, work or study stress, illness-related stress, and the like.
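Formula 13 above reduces to a plain average of the three negative-emotion features; a direct sketch:

```python
# The suspicious feature R4 is defined in the text as the mean of the
# aggression (R1), stress (R2) and anxiety (R3) features, each on the
# 0-100 scale used by all indexes.

def suspicious_feature(r1, r2, r3):
    """R4 = (R1 + R2 + R3) / 3."""
    return (r1 + r2 + r3) / 3

r4 = suspicious_feature(30.0, 45.0, 60.0)
```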
Further, when obtaining the positive emotion indicators at different time points, the method includes:
obtaining the balance feature of the tested person, which represents the state of equilibrium of the tested person; a low value indicates that the tested person may suffer from dizziness or poor limb coordination, as shown in formula 14:
R5 = Na = (100 − 2·B_s) % (14);
wherein B_s is the sum of the variation values of the emotion parameters;
obtaining the confidence feature of the tested person, which represents the level of self-confidence of the tested person; a low value indicates a lack of self-confidence, while a high value indicates strong self-confidence, as shown in formula 15:
wherein |E_li − E_ri| is the difference between the mean vibration amplitudes on the left and right sides of each line of the vibration image, |V_li − V_ri| is the difference between the maximum frequencies of the vibration amplitudes on the left and right sides of each line, and M is the number of frames in the processing procedure;
obtaining the vitality feature of the tested person, which represents the state of vitality of the tested person; the lower its value, the more insufficient the vitality of the tested person, as shown in formula 16:
wherein Z is the maximum count value in the frequency histogram, ω is the standard deviation of the vibration-image frequency calculated from the frequency histogram, and G_ps is the maximum value of the vibration-image input frequency;
obtaining the self-regulation feature of the tested person, which represents the capacity of the tested person to regulate emotion and control behaviour and speech; a low value indicates insufficient self-regulation capacity, while an excessively high value indicates a tendency toward obsessive-compulsive behaviour, as shown in formula 17:
wherein R5 is the mean of the balance feature during the measurement, dR5 is the variation range of the balance feature, R6 is the mean of the confidence feature during the measurement, and dR6 is the variation range of the confidence feature;
wherein the positive emotion indicators include balance characteristics, confidence characteristics, vitality characteristics, and self-regulating characteristics.
Specifically, the dizziness or poor limb coordination of the tested person may have physiological or psychological causes and can be judged in combination with the other features. A low vitality feature, for instance, may be caused by fatigue, diet or alcohol; excessively high anxiety and vitality features may be caused by environmental factors such as emergencies. Insufficient vitality may result from illness, diet and sleep, or from the environment or some event affecting the tested person.
Further, when obtaining the physiological indexes at different time points, the method comprises the following steps:
obtaining the inhibition feature of the tested person, which represents the degree of inhibition of the tested person; the higher its value, the more the tested person is in a self-suppressing state, with low, pessimistic mood and a tendency toward depression, as shown in formula 18:
wherein G_1 is the frequency variation of the vibration image, Y_m is the average period of the vibration-image frequency change, and Y is the vibration-image measurement period;
obtaining the neuroticism feature of the tested person, which represents the degree of sensitivity of the tested person; a high value indicates a neurotic tendency, while a low value indicates a broad-minded, unconstrained disposition, as shown in formula 19:
R10 = Mt = 10·ω(R9) (19);
wherein ω(R9) is the standard deviation of the inhibition feature;
wherein the physiological index comprises the inhibition feature and the neuroticism feature.
Further, obtaining the response sensitivity index and the response degree index at different time points includes:
the response sensitivity index and the response degree index take 0 as the boundary: when they are greater than 0, the tested person is in a physiologically healthy state; when they are less than 0, the tested person is in a sub-healthy state caused by fatigue and reacts sluggishly.
In the step S5, when obtaining a plurality of potential emotion indexes at different time points, the method further includes:
and obtaining fluctuation amplitudes of the same index at different time points, and generating emotion characteristic normal models of the testee according to the fluctuation amplitudes and a plurality of potential emotion indexes at different time points.
Specifically, the emotion trait norms include a composite score trend, an emotion profile, an emotion variation profile, a personality classification, a personal emotion measurement result, and a personal emotion index radar map.
The composite score trend shows the person's composite score over time in the form of a line chart; the emotional fluctuations of the person can easily be seen from the color of the line nodes and the regions in which they lie, as shown in fig. 2.
The emotion profile is a distribution of the averages of 10 emotion parameters. Positive emotion comprises the average (in %) of 4 parameters: balance, confidence, vitality and self-regulation capacity; negative emotion comprises the average (in %) of 4 parameters: aggression, stress, anxiety and suspicion; the physiological condition comprises the average (in %) of 2 parameters: inhibition and neuroticism. In general, positive emotion of about 50%, negative emotion of about 25% and a physiological condition of about 25% represent a relatively ideal psychological and physiological state, as shown in fig. 3.
The emotion variation profile is a predictive pie chart calculated from the combined variability of the 10 parameters, where variability refers to the fluctuation amplitude of the same index within a single measurement. It represents the trend of emotional change over a period of time after the test (about two weeks, assuming no emergencies). Normally the physiological condition accounts for 50%, positive emotion for 25% and negative emotion for 25%; the larger the share of a portion, the larger its variation, i.e. the greater the instability, as shown in fig. 3.
The personality classification computes a rough personality profile of the tested person from the detection records within the detection time range; the personality is interpreted according to the quadrant of the emotion coordinates in the coordinate axes, as shown in fig. 4.
The results of the personal emotion measurements are shown in particular in fig. 5.
A personal emotion index radar map may be drawn from the personal emotion measurements within the statistical interval, as shown in fig. 6 in particular.
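The variability used above, i.e. the fluctuation amplitude of the same index across time points, can be sketched as follows. Defining fluctuation as max minus min and bundling it with the per-index mean is an assumption about the norm-model structure, which the text does not spell out.

```python
# Build a minimal stand-in for the emotion characteristic normal model:
# for each potential emotion index, keep its values over the time points,
# their mean, and the fluctuation amplitude (assumed here as max - min).

def norm_model(indicator_series):
    """indicator_series: dict mapping index name -> values over time."""
    return {
        name: {
            "values": vals,
            "mean": sum(vals) / len(vals),
            "fluctuation": max(vals) - min(vals),
        }
        for name, vals in indicator_series.items()
    }

model = norm_model({"anxiety": [20, 35, 25], "vitality": [60, 58, 64]})
```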
In the step S7, the data integration module performs a comparative analysis of the group emotion to obtain a group emotion distribution map, as shown in fig. 7. By comparing the emotion distributions of the detected persons, the emotional state of each person can be seen intuitively: the large dot represents the emotion of the currently selected person, and the small dots represent the emotions of the other persons. If the dot of a selected person is clearly separated from the rest and approaches the "negative" region in the lower right corner, that person's emotional state is poor and requires close attention. From the quadrant in which a person's emotion falls, the stability and introversion/extroversion of the person's emotion can be seen.
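A sketch of flagging negative-emotion outliers in a group, assuming each tested person is summarized by a single negative-emotion score; the mean-plus-1.5-standard-deviations cut-off is a hypothetical convention, not the patent's method.

```python
import math

# Flag group members whose negative-emotion score is unusually high, i.e.
# points drifting toward the "negative" corner of the distribution map.

def negative_outliers(subjects, k=1.5):
    """subjects: dict mapping person name -> negative-emotion score (0-100)."""
    scores = list(subjects.values())
    mean = sum(scores) / len(scores)
    var = sum((s - mean) ** 2 for s in scores) / len(scores)
    cutoff = mean + k * math.sqrt(var)     # assumed k-sigma convention
    return sorted(name for name, s in subjects.items() if s > cutoff)

flagged = negative_outliers({"a": 20, "b": 25, "c": 22, "d": 70})
```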
The emotion detection system based on image acquisition provided by the invention comprises:
and the video acquisition module: used for acquiring the head image of the tested person, captured by the image acquisition device while the tested person sits upright;
and an image analysis module: judging whether the feature extraction condition is met according to the position, background color and surface illuminance of the head of the tested person in the head image picture; if yes, extracting face characteristic information from the head image, identifying the tested person according to the face characteristic information, acquiring identity information of the tested person and binding the identity information; in a preset time period, carrying out multiple analysis processing on the head vestibular image to obtain basic data at a plurality of different time points, and respectively displaying the basic data at the plurality of different time points in a vibration image mode to obtain vibration images at the different time points;
emotion analysis module: the method comprises the steps of acquiring head vibration frequency and head vibration amplitude at different time points through vibration images at different time points, and analyzing the head vibration frequency and the head vibration amplitude to acquire a plurality of potential emotion indexes at different time points;
and a data integration module: generating emotion characteristic normal models of the testee according to a plurality of potential emotion indexes at different time points, and reflecting the emotion of the testee through the emotion characteristic normal models; obtaining emotion characteristic normal models of a plurality of testees, comparing and analyzing the group emotion, and checking out negative emotion outliers;
and obtaining fluctuation amplitudes of the same index at different time points, and generating emotion characteristic normal models of the testee according to the fluctuation amplitudes and a plurality of potential emotion indexes at different time points.
Compared with the prior art, the invention has the beneficial effects that:
(1) On the basis of the real-time and objective personal emotion detection results provided by vibration images, the invention introduces a tested-person management function, realizing the separation and aggregation of multiple measurement results for different tested persons. It creatively compares multiple measurement results of a single tested person as well as single measurement results across different tested persons, so that an effective emotion file is established for a group in real time through multi-dimensional data comparison. This helps a manager identify the persons in the group with the most negative emotions, anticipate the trend of an individual's emotions, and apply focused attention and work adjustments to the relevant persons, truly achieving safe management of group emotion;
(2) The invention also adopts an image analysis method and environment compensation equipment to assist in obtaining high-quality head vibration images during emotion detection, improving the accuracy of the detection results.
The above embodiments are only preferred embodiments of the present invention, and the scope of the present invention is not limited thereto, but any insubstantial changes and substitutions made by those skilled in the art on the basis of the present invention are intended to be within the scope of the present invention as claimed.
Claims (10)
1. An emotion detection method based on image acquisition is characterized by comprising the following steps:
acquiring a head image of the tested person while sitting upright, captured by an image acquisition device;
judging whether a feature extraction condition is met according to the position, background color and surface illuminance of the head of the tested person in the head image picture;
if yes, extracting face characteristic information from the head image, identifying the tested person according to the face characteristic information, acquiring identity information of the tested person and binding the identity information;
in a preset time period, carrying out multiple analysis processing on the head vestibular image to obtain basic data at a plurality of different time points, and respectively displaying the basic data at the plurality of different time points in a vibration image mode to obtain vibration images at the different time points;
acquiring head vibration frequency and head vibration amplitude at different time points through the vibration images at different time points, and analyzing the head vibration frequency and the head vibration amplitude to acquire a plurality of potential emotion indexes at different time points;
generating emotion characteristic normal models of the testee according to the multiple potential emotion indexes at different time points, and reflecting the emotion of the testee through the emotion characteristic normal models;
obtaining emotion characteristic normal models of a plurality of testees, comparing and analyzing the group emotion, and checking out negative emotion outliers;
and obtaining fluctuation amplitudes of the same index at different time points, and generating a emotion characteristic normal model of the tested person according to the fluctuation amplitudes and the multiple potential emotion indexes at different time points.
2. The image-acquisition-based emotion detection method according to claim 1, characterized by comprising, when judging whether or not a feature extraction condition is satisfied:
judging whether the head of the tested person in the head image picture is at the center position or not;
judging whether the background color of the head of the tested person in the head image picture is pure color or not;
judging whether the head acquisition distance of the tested person is within a preset distance range value or not;
judging whether the surface illuminance of the face of the detected person in the head image picture is within a preset surface illuminance range value or not;
if the above-mentioned judging conditions are all satisfied, the feature extraction condition is considered to be satisfied.
3. The image-acquisition-based emotion detection method according to claim 1, characterized by comprising, after identifying the subject from the face feature information:
judging whether the tested person is a registered person, if so, automatically extracting personnel information from a registered information base, and binding;
if not, prompting the testee to register, and binding according to personnel information input during registration;
after binding is completed, the subsequently detected data are collected to corresponding binding personnel.
4. The image-acquisition-based emotion detection method according to claim 3, characterized by comprising, in judging whether or not the subject is a registered person:
obtaining a preset face recognition model by adopting a principal component analysis method, and training the preset face recognition model through a face recognition training sample to obtain a registrant recognition model;
and identifying the face characteristic information by using the registered person identification model, determining whether matched persons exist in the registered information base according to an identification result, and if so, considering the detected person as the registered person.
5. The emotion detection method based on image acquisition according to claim 4, characterized by comprising, when obtaining a registered person identification model:
performing geometric normalization processing on the face images in the registered information base to obtain normalized face images, and storing each normalized face image by rows as a column vector in the group {Z_1, Z_2, Z_3, …} to obtain the corresponding face matrix;
performing K-L transformation on the face matrix to obtain an average registered face, as shown in formula 1:
μ_Z = (1/H)·Σ_{i=1}^{H} Z_i (1);
wherein H is the number of normalized face images, h is the number of row pixel points of each normalized face image, and Z_i is the corresponding face image in the column vector group;
obtaining the covariance matrix of all face recognition training samples according to the average registered face, as shown in formula 2:
C = (1/H)·Σ_{i=1}^{H} (Z_i − μ_Z)(Z_i − μ_Z)^T (2);
wherein T denotes the matrix transpose and μ_Z is the average registered face;
acquiring the difference vector between each face matrix Z_i and the average registered face μ_Z, as shown in formula 3:
E = (E_1, …, E_n) (3);
wherein E_i = Z_i − μ_Z;
performing orthonormalization on the difference vectors to obtain the feature subspace P, as shown in formula 4:
projecting all face recognition training sample images into the feature subspace P to obtain a registrant recognition model formed by the coordinate coefficients of each face recognition training sample image in the feature subspace P, as shown in formula 5:
y_i = P^T(Z_i − μ_Z) = P^T·E_i (i = 1, 2, …, H) (5);
wherein P ∈ R^{J×(H−1)}, Z_i, μ_Z, E_i ∈ R^J, and y_i ∈ R^{H−1};
each normalized face image is composed of h × j pixel points.
6. The image acquisition-based emotion detection method according to claim 5, characterized by comprising, when the face feature information is recognized using the registered person recognition model:
projecting the face image in the face feature information into the feature subspace P to obtain the corresponding coordinate coefficient, as shown in formula 6:
y_test = P^T(Z_test − μ_Z) (6);
wherein Z_test is the face image in the face feature information;
according to the coordinate coefficient, determining in the registered person identification model whether a face image satisfying the minimum distance exists, using a Euclidean-distance objective function; if such an image exists, the tested person is considered a registered person, as shown in formula 7:
7. the emotion detection method based on image acquisition according to claim 1, characterized by comprising, when obtaining vibration images at different time points:
analyzing the motion data of each point of the head of the testee in the head vestibular image by using a vibration image technology in a preset time period, and determining the amplitude parameter of the head vibration and the frequency parameter of the head vibration at each point;
and obtaining vibration images at different time points according to the amplitude parameter of the head vibration and the frequency parameter of the head vibration.
8. The image acquisition-based emotion detection method of claim 7, further comprising, when obtaining vibration images at different time points:
obtaining a head vibration amplitude image according to the amplitude parameter of the head vibration, as shown in formula 8:
A_{x,y} = (1/M)·Σ_{i=1}^{M} |O_{x,y,(i+1)} − O_{x,y,i}| (8);
obtaining a head vibration frequency image according to the frequency parameter of the head vibration, as shown in formula 9:
wherein x, y are the coordinates of a point, O_{x,y,i} is the signal amplitude at point (x, y) in the i-th frame, O_{x,y,(i+1)} is the signal amplitude at point (x, y) in the (i+1)-th frame, M is the number of frames over which the amplitude components of the head vibration image are averaged, and G_in is the signal vibration frequency in the head vestibular image;
wherein the vibration image includes a head vibration amplitude image and a head vibration frequency image.
9. The image acquisition-based emotion detection method according to claim 1, characterized by comprising, when obtaining a plurality of potential emotion indicators at different points in time:
and obtaining the negative emotion index, the positive emotion index, the physiological index, the response sensitivity index and the response degree index at different time points.
10. An emotion detection system based on image acquisition, comprising:
and the video acquisition module: used for acquiring the head image of the tested person, captured by the image acquisition device while the tested person sits upright;
and an image analysis module: judging whether the feature extraction condition is met according to the position, background color and surface illuminance of the head of the tested person in the head image picture; if yes, extracting face characteristic information from the head image, identifying the tested person according to the face characteristic information, acquiring identity information of the tested person and binding the identity information; in a preset time period, carrying out multiple analysis processing on the head vestibular image to obtain basic data at a plurality of different time points, and respectively displaying the basic data at the plurality of different time points in a vibration image mode to obtain vibration images at the different time points;
emotion analysis module: the method comprises the steps of acquiring head vibration frequency and head vibration amplitude at different time points through vibration images at different time points, and analyzing the head vibration frequency and the head vibration amplitude to acquire a plurality of potential emotion indexes at different time points;
a data integration module: used for generating an emotion characteristic normal model of the tested person according to the plurality of potential emotion indexes at different time points, and reflecting the emotion of the tested person through the emotion characteristic normal model; and for obtaining emotion characteristic normal models of a plurality of tested persons, comparing and analyzing the group emotion, and screening out negative-emotion outliers;
wherein fluctuation amplitudes of the same index at different time points are obtained, and the emotion characteristic normal model of the tested person is generated according to the fluctuation amplitudes and the plurality of potential emotion indexes at the different time points.
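The data integration step can be illustrated as follows: a hypothetical normal model stores, per index, the mean level and the fluctuation amplitude over the time points, and group comparison flags subjects whose negative-emotion level deviates strongly from the group mean. Both the model form and the outlier rule are assumptions; the claim states only that the model is built from the indexes and their fluctuation amplitudes.

```python
import numpy as np

def normal_model(index_series):
    """Build a per-index 'normal model' from indexes sampled at different
    time points: mean level plus fluctuation amplitude (max - min).

    index_series: dict mapping index name -> list of values over time.
    The model form is an assumption for illustration.
    """
    return {name: {"mean": float(np.mean(v)),
                   "fluctuation": float(np.max(v) - np.min(v))}
            for name, v in index_series.items()}

def negative_outliers(models, index="negative", z=2.0):
    """Compare one index across a group of subjects' normal models and flag
    subjects whose mean level exceeds the group mean by more than z group
    standard deviations (a hypothetical outlier rule)."""
    means = np.array([m[index]["mean"] for m in models.values()])
    mu, sigma = means.mean(), means.std()
    return [sid for sid, m in models.items()
            if sigma > 0 and (m[index]["mean"] - mu) > z * sigma]
```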
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202310217661.9A CN116311510B (en) | 2023-03-08 | 2023-03-08 | Emotion detection method and system based on image acquisition |
Publications (2)
Publication Number | Publication Date |
---|---|
CN116311510A true CN116311510A (en) | 2023-06-23 |
CN116311510B CN116311510B (en) | 2024-05-31 |
Family
ID=86782851
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202310217661.9A Active CN116311510B (en) | 2023-03-08 | 2023-03-08 | Emotion detection method and system based on image acquisition |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN116311510B (en) |
Citations (11)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
RU2009140207A (en) * | 2009-10-26 | 2011-05-10 | Elsys Multiprofile Enterprise LLC (RU) | METHOD OF OBTAINING INFORMATION ABOUT PSYCHOPHYSIOLOGICAL CONDITION OF A LIVING OBJECT |
WO2012057646A1 (en) * | 2010-10-28 | 2012-05-03 | Elsys Multiprofile Enterprise LLC | Method for obtaining information about the psychophysiological state of a living being |
CN110765838A (en) * | 2019-09-02 | 2020-02-07 | 合肥工业大学 | Real-time dynamic analysis method for facial feature region for emotional state monitoring |
CN111104815A (en) * | 2018-10-25 | 2020-05-05 | 北京入思技术有限公司 | Psychological assessment method and device based on emotion energy perception |
CN111631735A (en) * | 2020-04-26 | 2020-09-08 | 华东师范大学 | Abnormal emotion monitoring and early warning method based on video data vibration frequency |
CN112150759A (en) * | 2020-09-23 | 2020-12-29 | 北京安信智文科技有限公司 | Real-time monitoring and early warning system and method based on video algorithm |
CN112560770A (en) * | 2020-12-25 | 2021-03-26 | 温州晶彩光电有限公司 | Method and system for positioning intelligent colorful lamplight based on face recognition technology |
KR20210045552A (en) * | 2019-10-16 | 2021-04-27 | 임대열 | Method and apparatus for obtaining emotion and physical state information of human using machine learning |
CN113647950A (en) * | 2021-08-23 | 2021-11-16 | 北京图安世纪科技股份有限公司 | Psychological emotion detection method and system |
CN114792553A (en) * | 2021-12-28 | 2022-07-26 | 江苏博子岛智能产业技术研究院有限公司 | Method and system for screening psychological health group of students |
CN115736922A (en) * | 2022-11-16 | 2023-03-07 | 北京数智天安科技有限公司 | Emotion normalization monitoring system and method based on trusted environment |
- 2023-03-08: CN application CN202310217661.9A filed; patent CN116311510B active
Non-Patent Citations (3)
Title |
---|
ASWIN K.M: "HERS: Human emotion recognition system", IEEE Xplore *
LI Bocheng: "Design and application of an affective computing model for automobile drivers based on the vestibular nerve reflex", Shanghai Auto, no. 11 *
GU Hongmei: "Application of a vibraimage technology system in assessing students' classroom emotional state", Journal of People's Public Security University of China (Science and Technology Edition), no. 04 *
Also Published As
Publication number | Publication date |
---|---|
CN116311510B (en) | 2024-05-31 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN109069072B (en) | Fraud detection system and method | |
JP6467966B2 (en) | Health care assistance device and health care assistance method | |
CN108697386B (en) | System and method for detecting physiological state | |
CN107809951B (en) | Psychophysiological detection (lie detection) method and apparatus for distortion using video-based detection of physiological signals | |
JP4401079B2 (en) | Subject behavior analysis | |
KR101689021B1 (en) | System for determining psychological state using sensing device and method thereof | |
CN111803032B (en) | Large-area observation method and system for suspected infection of Xinguan pneumonia | |
KR101500888B1 (en) | Method for obtaining information about the psychophysiological state of a living being | |
KR20160059414A (en) | Method and System for social relationship based on HRC by Micro movement of body | |
CN112957042A (en) | Non-contact target emotion recognition method and system | |
CN116311510B (en) | Emotion detection method and system based on image acquisition | |
CN111803031A (en) | Non-contact type drug addict relapse monitoring method and system | |
JP7306152B2 (en) | Emotion estimation device, emotion estimation method, program, information presentation device, information presentation method, and emotion estimation system | |
Vasavi et al. | Regression modelling for stress detection in humans by assessing most prominent thermal signature | |
Isaeva et al. | Making decisions in intelligent video surveillance systems based on modeling the pupillary response of a person | |
CN106725364B (en) | Controller fatigue detection method and system based on probability statistical method | |
Lamsal et al. | Drowsiness and tiredness detection system by observing the visible properties of human eyes | |
Hong | Classification of Emotional Stress and Physical Stress Using Electro-Optical Imaging Technology | |
Thannoon et al. | A survey on deceptive detection systems and technologies | |
Hajare et al. | Analyzing the Biosignal to Make Fatigue Measurement as a Parameter for Mood Detection | |
WO2020044249A1 (en) | Method and device for evaluating brain fatigue by using contactless imaging system | |
JP2024007407A (en) | Information processing device | |
Bai et al. | Real-time heart rate detection based on body surface video data | |
CN118098581A (en) | Emotion state monitoring method and system | |
AYYAPAN et al. | Drowsiness State Detection of Driver Using Eyelid Movement |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||