CN116311510B - Emotion detection method and system based on image acquisition - Google Patents

Emotion detection method and system based on image acquisition

Info

Publication number
CN116311510B
Authority
CN
China
Prior art keywords
image
emotion
head
face
vibration
Prior art date
Legal status
Active
Application number
CN202310217661.9A
Other languages
Chinese (zh)
Other versions
CN116311510A (en)
Inventor
聂鹏
张起豪
刘娟
刘邵宾
Current Assignee
Guangdong Zhaobang Intelligent Polytron Technologies Inc
Original Assignee
Guangdong Zhaobang Intelligent Polytron Technologies Inc
Priority date
Filing date
Publication date
Application filed by Guangdong Zhaobang Intelligent Polytron Technologies Inc
Priority to CN202310217661.9A
Publication of CN116311510A
Application granted
Publication of CN116311510B

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/20 Movements or behaviour, e.g. gesture recognition
    • A HUMAN NECESSITIES
    • A61 MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B5/00 Measuring for diagnostic purposes; Identification of persons
    • A61B5/16 Devices for psychotechnics; Testing reaction times; Devices for evaluating the psychological state
    • A61B5/165 Evaluating the state of mind, e.g. depression, anxiety
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10 Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16 Human faces, e.g. facial parts, sketches or expressions
    • G06V40/161 Detection; Localisation; Normalisation
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10 Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16 Human faces, e.g. facial parts, sketches or expressions
    • G06V40/168 Feature extraction; Face representation
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10 Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16 Human faces, e.g. facial parts, sketches or expressions
    • G06V40/174 Facial expression recognition
    • Y GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02 TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02D CLIMATE CHANGE MITIGATION TECHNOLOGIES IN INFORMATION AND COMMUNICATION TECHNOLOGIES [ICT], I.E. INFORMATION AND COMMUNICATION TECHNOLOGIES AIMING AT THE REDUCTION OF THEIR OWN ENERGY USE
    • Y02D10/00 Energy efficient computing, e.g. low power processors, power management or thermal management


Abstract

The invention discloses an emotion detection method and system based on image acquisition, wherein the emotion detection method comprises the following steps: acquiring a head image of a tested person; judging whether a feature extraction condition is met; if yes, extracting face characteristic information from the head image, identifying the face characteristic information, and acquiring and binding the identity information of the tested person; within a preset time period, performing multiple rounds of analysis on the head vestibular image to obtain basic data at a plurality of different time points, and displaying the basic data as vibration images to obtain vibration images at the different time points; acquiring the head vibration frequency and head vibration amplitude at the different time points from the vibration images, and analyzing them to obtain a plurality of potential emotion indexes at the different time points; generating an emotion characteristic normal model of the tested person according to the potential emotion indexes to reflect the emotion of the tested person; and obtaining the emotion characteristic normal models of a plurality of tested persons, comparing and analyzing the group emotion, and screening out negative-emotion outliers.

Description

Emotion detection method and system based on image acquisition
Technical Field
The invention relates to the technical field of emotion detection, in particular to an emotion detection method and system based on image acquisition.
Background
In everyday work and life, a person's emotion fluctuates under stress as the surrounding environment changes. Emotional fluctuations color a person's reactions to events, so they interfere whenever the true psychological state of a person has to be judged in a specific environment; extreme emotional fluctuations in particular are often accompanied by destructive negative effects.
The prevailing view regards emotion as "a natural, instinctive mental state derived from one's circumstances, mood, or relationships with others." This view ignores the driving force behind every positive, negative or neutral motivation, which is essential information for an agent that wants to recognize and understand emotions. Detecting and distinguishing emotions is highly complex, and over the past few decades emotion has attracted attention as an important complement to modern technology. If a machine could accurately sense a person's emotion and infer the person's needs, such computation would allow the machine to predict the person's further behavior and its consequences, and thus avoid more serious situations.
Emotion detection carries great weight in further research on human-computer interaction, and the complexity of emotion makes the information-gathering task harder. Existing emotion detection methods capture emotion through a single channel, such as facial expression alone or voice input alone; because of the complexity of emotion, the results obtained from a single channel are not accurate. Moreover, real work usually relies on group cooperation. If outliers with negative emotion could be picked out of the group so that they receive focused attention and work adjustment, the group's efficiency would improve greatly; however, existing emotion detection methods can only detect the emotion of individuals and cannot screen out outliers within a group.
Disclosure of Invention
In order to overcome the defects of the prior art, the invention provides an emotion detection method and system based on image acquisition, which solve the technical problems that the detection results of existing emotion detection methods are inaccurate and that outliers in a group cannot be screened out, thereby improving the accuracy of emotion detection results and effectively screening out outliers in a group.
In order to solve the problems, the technical scheme adopted by the invention is as follows:
an emotion detection method based on image acquisition comprises the following steps:
Acquiring a head image of a tested person, collected by an image acquisition device while the tested person sits upright;
Judging whether a feature extraction condition is met according to the position, background color and surface illuminance of the head of the tested person in the head image picture;
If yes, extracting face characteristic information from the head image, identifying the tested person according to the face characteristic information, and acquiring and binding the identity information of the tested person;
Within a preset time period, performing multiple rounds of analysis on the head vestibular image to obtain basic data at a plurality of different time points, and displaying the basic data at the different time points as vibration images to obtain vibration images at the different time points;
Acquiring the head vibration frequency and head vibration amplitude at the different time points from the vibration images at the different time points, and analyzing the head vibration frequency and head vibration amplitude to obtain a plurality of potential emotion indexes at the different time points;
Generating an emotion characteristic normal model of the tested person according to the plurality of potential emotion indexes at the different time points, and reflecting the emotion of the tested person through the emotion characteristic normal model;
Obtaining the emotion characteristic normal models of a plurality of tested persons, comparing and analyzing the group emotion, and screening out negative-emotion outliers;
And obtaining fluctuation amplitudes of the same index at the different time points, and generating the emotion characteristic normal model of the tested person according to the fluctuation amplitudes and the plurality of potential emotion indexes at the different time points.
In a preferred embodiment of the present invention, when judging whether or not a feature extraction condition is satisfied, the method includes:
judging whether the head of the tested person in the head image picture is at the center position or not;
judging whether the background color of the head of the tested person in the head image picture is pure color or not;
Judging whether the head acquisition distance of the tested person is within a preset distance range value or not;
judging whether the surface illuminance of the face of the detected person in the head image picture is within a preset surface illuminance range value or not;
If the above-mentioned judging conditions are all satisfied, the feature extraction condition is considered to be satisfied.
As a preferred embodiment of the present invention, after identifying the subject according to the face feature information, the method includes:
Judging whether the tested person is a registered person, if so, automatically extracting personnel information from a registered information base, and binding;
if not, prompting the testee to register, and binding according to personnel information input during registration;
after binding is completed, the subsequently detected data are collected to corresponding binding personnel.
In a preferred embodiment of the present invention, when judging whether the subject is a registered person, the method includes:
Obtaining a preset face recognition model by adopting a principal component analysis method, and training the preset face recognition model through a face recognition training sample to obtain a registrant recognition model;
And identifying the face characteristic information by using the registered person identification model, determining whether matched persons exist in the registered information base according to an identification result, and if so, considering the detected person as the registered person.
In a preferred embodiment of the present invention, when obtaining the registered person identification model, the method includes:
carrying out geometric normalization processing on the face images in the registered information base to obtain standardized face images, and storing each standardized face image by rows into a column vector group {Z_1, Z_2, Z_3, ...} to obtain a corresponding face matrix;
performing a K-L transform on the face matrix to obtain the average registered face, as shown in formula 1:
where h is the number of row pixel points and j is the number of column pixel points of the standardized face images, and Z_i is the corresponding face image in the column vector group;
obtaining the covariance matrix of all face recognition training samples from the average registered face, as shown in formula 2:
where T denotes matrix transposition and μ_Z is the average registered face;
obtaining the difference vector between each face matrix Z_i and the average registered face μ_Z, as shown in formula 3:
E = (E_1, ..., E_n) (3);
where E_i = Z_i - μ_Z;
orthonormalizing the difference vectors to obtain formula 4:
projecting all face recognition training sample images into the feature subspace P to obtain a registrant recognition model formed by the coordinate coefficients of each face recognition training sample image in the feature subspace P, as shown in formula 5:
y_i = P^T(Z_i - μ_Z) = P^T E_i (i = 1, 2, ..., h) (5);
where P ∈ R^(j×(h-1)), Z_i - μ_Z = E_i ∈ R^j, and y_i ∈ R^(h-1);
each standardized face image is composed of h × j pixel points.
As a preferred embodiment of the present invention, when the face feature information is identified using the registrant identification model, the method includes:
projecting the face image in the face feature information into the feature subspace P to obtain the corresponding coordinate coefficient, as shown in formula 6:
y_test = P^T(Z_test - μ_Z) (6);
where Z_test is the face image in the face feature information;
according to the coordinate coefficient, determining in the registered person identification model whether a face image satisfying the minimum Euclidean distance exists by solving a Euclidean-distance objective function; if so, the tested person is considered a registered person, as shown in formula 7:
in a preferred embodiment of the present invention, when obtaining vibration images at different time points, the method includes:
Analyzing the motion data of each point of the head of the testee in the head vestibular image by using a vibration image technology in a preset time period, and determining the amplitude parameter of the head vibration and the frequency parameter of the head vibration at each point;
and obtaining vibration images at different time points according to the amplitude parameter of the head vibration and the frequency parameter of the head vibration.
In a preferred embodiment of the present invention, when obtaining vibration images at different time points, the method further comprises:
obtaining a head vibration amplitude image according to the amplitude parameter of the head vibration, wherein the head vibration amplitude image is specifically shown in a formula 8:
obtaining a head vibration frequency image according to the frequency parameter of the head vibration, wherein the head vibration frequency image is specifically shown in a formula 9:
where x, y are the coordinates of a point, O_(x,y,i) is the signal amplitude at point (x, y) in the i-th frame, O_(x,y,i+1) is the signal amplitude at point (x, y) in the (i+1)-th frame, M is the number of frames over which the amplitude component of the head vibration image is averaged, and G_in is the signal vibration frequency in the head vestibular image;
Wherein the vibration image includes a head vibration amplitude image and a head vibration frequency image.
As a preferred embodiment of the present invention, when obtaining a plurality of potential emotion indexes at different time points, it includes:
And obtaining the negative emotion index, the positive emotion index, the physiological index, the response sensitivity index and the response degree index at different time points.
An emotion detection system based on image acquisition, comprising:
A video acquisition module: used for acquiring the head image of the tested person, collected by an image acquisition device while the tested person sits upright;
And an image analysis module: judging whether the feature extraction condition is met according to the position, background color and surface illuminance of the head of the tested person in the head image picture; if yes, extracting face characteristic information from the head image, identifying the tested person according to the face characteristic information, acquiring identity information of the tested person and binding the identity information; in a preset time period, carrying out multiple analysis processing on the head vestibular image to obtain basic data at a plurality of different time points, and respectively displaying the basic data at the plurality of different time points in a vibration image mode to obtain vibration images at the different time points;
emotion analysis module: the method comprises the steps of acquiring head vibration frequency and head vibration amplitude at different time points through vibration images at different time points, and analyzing the head vibration frequency and the head vibration amplitude to acquire a plurality of potential emotion indexes at different time points;
And a data integration module: generating an emotion characteristic normal model of the tested person according to the plurality of potential emotion indexes at different time points, and reflecting the emotion of the tested person through the emotion characteristic normal model; obtaining the emotion characteristic normal models of a plurality of tested persons, comparing and analyzing the group emotion, and screening out negative-emotion outliers;
And obtaining fluctuation amplitudes of the same index at different time points, and generating a emotion characteristic normal model of the tested person according to the fluctuation amplitudes and the multiple potential emotion indexes at different time points.
Compared with the prior art, the invention has the beneficial effects that:
(1) On the basis that the vibration image gives real-time and objective personal emotion detection results, the invention introduces a tested-person management function, so that multiple measurement results of different tested persons can be distinguished and collected. Multiple measurements of a single tested person are compared with one another, and single measurements of different tested persons are compared with one another, so that an effective emotion file is established for the group in real time through data comparison across multiple dimensions. This helps a manager screen out the persons in the group whose emotion is most negative, pre-judge the trend of individual emotion, and apply focused attention and work adjustment to the relevant persons, thereby truly realizing safety management of group emotion;
(2) The invention also uses an image analysis method and environment compensation equipment to assist in obtaining high-quality head vibration images during emotion detection, which improves the accuracy of the detection results.
The invention is described in further detail below with reference to the drawings and the detailed description.
Drawings
FIG. 1 is a step diagram of an emotion detection method based on image acquisition according to an embodiment of the present invention;
FIG. 2 is a composite score trend graph of an embodiment of the present invention;
FIG. 3 is a personal mood profile and personal mood variation profile in accordance with an embodiment of the present invention;
FIG. 4 is a personal emotion graph of an embodiment of the present invention;
FIG. 5 is a graph of personal emotion measurements for an embodiment of the present invention;
FIG. 6 is a personal emotion index radar graph of an embodiment of the present invention;
FIG. 7 is a population emotion profile of an embodiment of the present invention.
Detailed Description
The emotion detection method based on image acquisition provided by the invention, as shown in figure 1, comprises the following steps:
Step S1: acquiring a head image of a tested person, collected by an image acquisition device while the tested person sits upright;
Step S2: judging whether a feature extraction condition is met according to the position, background color and surface illuminance of the head of the tested person in the head image picture;
Step S3: if yes, extracting face characteristic information from the head image, identifying the tested person according to the face characteristic information, and acquiring and binding the identity information of the tested person;
Step S4: within a preset time period, performing multiple rounds of analysis on the head vestibular image to obtain basic data at a plurality of different time points, and displaying the basic data at the different time points as vibration images to obtain vibration images at the different time points;
Step S5: acquiring the head vibration frequency and head vibration amplitude at the different time points from the vibration images at the different time points, and analyzing the head vibration frequency and head vibration amplitude to obtain a plurality of potential emotion indexes at the different time points;
Step S6: generating an emotion characteristic normal model of the tested person according to the plurality of potential emotion indexes at the different time points, and reflecting the emotion of the tested person through the emotion characteristic normal model;
Step S7: obtaining the emotion characteristic normal models of a plurality of tested persons, comparing and analyzing the group emotion, and screening out negative-emotion outliers;
The method further comprises obtaining the fluctuation amplitudes of the same index at the different time points, and generating the emotion characteristic normal model of the tested person according to the fluctuation amplitudes and the plurality of potential emotion indexes at the different time points.
In the above step S1, when the head image is acquired, the tested person needs to sit upright, must not lean against the seat back or support the head with the arms, and should keep the face level with the image acquisition device.
In the step S2, when judging whether the feature extraction condition is satisfied, the method includes:
Judging whether the head of the tested person in the head image picture is at the center position or not;
judging whether the background color of the head of the tested person in the head image picture is a solid color or not;
Judging whether the head acquisition distance of the tested person is within a preset distance range value;
judging whether the surface illuminance of the face of the tested person in the head image picture is within a preset surface illuminance range value;
If the above-mentioned judging conditions are all satisfied, the feature extraction condition is considered to be satisfied.
Specifically, keeping the head of the tested person at the center of the picture allows the image acquisition device to capture the vestibular characteristics of the head comprehensively, effectively avoiding inaccurate detection results caused by features missed during acquisition.
Specifically, the background color behind the head of the tested person is kept a solid color so that the image foreground contains only the head of the tested person; if the background environment is cluttered, it can be shielded with an environment compensation device (a background curtain). When the acquisition distance to the head of the tested person is not within the preset distance range value, the zoom function of the video acquisition module can be controlled to meet the requirements of the detection environment. The surface illuminance of the face of the tested person should be between 400 lx and 800 lx; when the facial surface illuminance is detected to be insufficient, the video acquisition module or the environment compensation device (a supplementary light) can be adjusted directly so that the surface illuminance meets the requirement. The image acquisition device should remain stable while acquiring the head image, the image should not shake, and the surroundings should remain quiet and free of interference during detection.
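The checks of step S2 can be combined into a single gating routine before feature extraction. The sketch below is a minimal illustration of such a routine and is not taken from the patent: the centering tolerance, the background-uniformity threshold, the face-size proxy for the acquisition distance, and the way the illuminance estimate is obtained are all assumptions; only the 400-800 lx illuminance range comes from the description above.

```python
import numpy as np

def feature_extraction_conditions_met(frame, face_box, lux_estimate,
                                      center_tol=0.15,
                                      bg_std_max=12.0,
                                      face_height_range=(0.25, 0.60),
                                      lux_range=(400.0, 800.0)):
    """frame: HxWx3 array; face_box: (x, y, w, h) from any face detector;
    lux_estimate: facial surface illuminance from a light meter or a calibration step."""
    h, w = frame.shape[:2]
    x, y, fw, fh = face_box

    # 1. Head roughly centered in the picture.
    cx, cy = x + fw / 2.0, y + fh / 2.0
    centered = (abs(cx - w / 2.0) < center_tol * w and
                abs(cy - h / 2.0) < center_tol * h)

    # 2. Background close to a solid color: low per-channel spread outside the face box.
    mask = np.ones((h, w), dtype=bool)
    mask[y:y + fh, x:x + fw] = False
    background_solid = frame[mask].std(axis=0).max() < bg_std_max

    # 3. Acquisition distance inside the preset range, using relative face height as a proxy.
    distance_ok = face_height_range[0] < fh / h < face_height_range[1]

    # 4. Facial surface illuminance inside the 400-800 lx range stated in the description.
    illuminance_ok = lux_range[0] <= lux_estimate <= lux_range[1]

    return centered and background_solid and distance_ok and illuminance_ok
```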
In the step S3, after identifying the subject according to the face feature information, the method includes:
judging whether the tested person is a registered person, if so, automatically extracting personnel information from the registered information, and binding;
if not, prompting the testee to register, and binding according to personnel information input during registration;
after binding is completed, the subsequently detected data are collected to corresponding binding personnel.
Further, when judging whether the subject is a registered person, the method includes:
obtaining a preset face recognition model by adopting a principal component analysis method, and training the preset face recognition model through a face recognition training sample to obtain a registered person recognition model;
And identifying the face characteristic information by using a registered person identification model, determining whether matched persons exist in a registered information base according to an identification result, and if so, considering the detected person as the registered person.
Further, when the registered person identification model is obtained, the method includes:
carrying out geometric normalization processing on the face images in the registered information base to obtain standardized face images, and storing each standardized face image by rows into a column vector group {Z_1, Z_2, Z_3, ...} to obtain a corresponding face matrix;
performing a K-L transform on the face matrix to obtain the average registered face, as shown in formula 1:
where h is the number of row pixel points and j is the number of column pixel points of the standardized face images, and Z_i is the corresponding face image in the column vector group;
obtaining the covariance matrix of all face recognition training samples from the average registered face, as shown in formula 2:
where T denotes matrix transposition and μ_Z is the average registered face;
obtaining the difference vector between each face matrix Z_i and the average registered face μ_Z, as shown in formula 3:
E = (E_1, ..., E_n) (3);
where E_i = Z_i - μ_Z;
orthonormalizing the difference vectors to obtain formula 4:
projecting all face recognition training sample images into the feature subspace P to obtain a registrant recognition model formed by the coordinate coefficients of each face recognition training sample image in the feature subspace P, as shown in formula 5:
y_i = P^T(Z_i - μ_Z) = P^T E_i (i = 1, 2, ..., h) (5);
where P ∈ R^(j×(h-1)), Z_i - μ_Z = E_i ∈ R^j, and y_i ∈ R^(h-1);
each standardized face image is composed of h × j pixel points.
Further, when the face feature information is identified by using the registrant identification model, the method includes:
The face image in the face feature information is projected into the feature subspace P to obtain the corresponding coordinate coefficient, as shown in formula 6:
y_test = P^T(Z_test - μ_Z) (6);
where Z_test is the face image in the face feature information;
According to the coordinate coefficient, it is determined in the registered person identification model whether a face image satisfying the minimum Euclidean distance exists by solving a Euclidean-distance objective function; if so, the person to be detected is considered a registered person, as shown in formula 7:
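As an illustration of the eigenface-style recognition described by formulas 1 to 7, the following sketch trains a registrant model and matches a probe face by Euclidean distance. It is a minimal NumPy rendering under assumptions: the SVD shortcut replaces the explicit covariance eigendecomposition of formula 2, and the function names and the optional distance threshold are illustrative rather than the patent's.

```python
import numpy as np

def train_registrant_model(face_matrix):
    """face_matrix: (num_pixels, num_faces) array; each standardized face stored as a column."""
    mu = face_matrix.mean(axis=1, keepdims=True)      # formula 1: average registered face
    diffs = face_matrix - mu                           # formula 3: difference vectors E_i
    # Orthonormal basis of the difference vectors (formula 4); the left singular vectors
    # are equivalent to the eigenvectors of the covariance matrix in formula 2.
    basis, _, _ = np.linalg.svd(diffs, full_matrices=False)
    coords = basis.T @ diffs                           # formula 5: training coordinates y_i
    return mu, basis, coords

def identify(face_vector, mu, basis, coords, labels, max_dist=None):
    """Return the matching label, or None when no registered face is close enough."""
    y_test = basis.T @ (face_vector[:, None] - mu)     # formula 6: projection of the probe face
    dists = np.linalg.norm(coords - y_test, axis=0)    # formula 7: Euclidean distances
    best = int(np.argmin(dists))
    if max_dist is not None and dists[best] > max_dist:
        return None                                    # treated as an unregistered person
    return labels[best]
```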
In the step S4, when obtaining the vibration image at different time points, the method includes:
In a preset time period, analyzing the motion data of each point of the head of the tested person in the head vestibular image by using a vibration image technology, and determining the amplitude parameter of the head vibration and the frequency parameter of the head vibration at each point;
And obtaining vibration images at different time points according to the amplitude parameter of the head vibration and the frequency parameter of the head vibration.
Further, when obtaining the vibration image at different time points, the method further comprises:
obtaining a head vibration amplitude image according to the amplitude parameter of the head vibration, wherein the head vibration amplitude image is specifically shown in a formula 8:
obtaining a head vibration frequency image according to the frequency parameter of the head vibration, specifically as shown in a formula 9:
where x, y are the coordinates of a point, O_(x,y,i) is the signal amplitude at point (x, y) in the i-th frame, O_(x,y,i+1) is the signal amplitude at point (x, y) in the (i+1)-th frame, M is the number of frames over which the amplitude component of the head vibration image is averaged, and G_in is the signal vibration frequency in the head vestibular image;
the vibration image comprises a head vibration amplitude image and a head vibration frequency image.
In step S4, the vibration image is generated on the basis of Vestibular Emotional Reflex (VER) theory, which holds that there is a direct connection between a person's emotion and body movement: when a person's emotion fluctuates, it is accompanied by uncontrollable autonomic stress that causes micro-vibrations of the head and neck. When a person is anxious, the vestibular emotional reflex, which cannot be consciously controlled, restores the balance required by the conservation of the body's thermodynamic energy through fine vibrations (physical work) of the head and body. The changes in light spots produced by these micro-vibrations of the skin can be observed and recorded by a modern video acquisition module to generate vibration images.
The vibration image is an image representing the spatial and temporal parameters of the motion and vibration of an object; it is computed as the average rate of change of the video image at each point over a period of time, and is an informational and probabilistic display of the thermodynamic processes of a person in static equilibrium. The width of the vibration image represents the vibration amplitude: the wider the image, the larger the amplitude, and the narrower, the smaller. Color represents frequency: the lower the frequency, the colder the hue, i.e. the vibration image tends toward blue and purple; the higher the frequency, the warmer the hue, i.e. the image tends toward yellow and red.
The vibration image reflects real physical and psycho-physiological phenomena. After a large number of human experiments, comparisons and statistical analyses, emotional states such as tension and stress can be measured quantitatively: the vibration pattern is generated by measuring the vibration rate of the skin pixels in the image, so that a correlation map of a person's latent emotional reflexes is established. VER theory shows that when a person is tired or the emotion is stable and calm, the vibration frequency of the head and face lies between 1 and 5 Hz, and the image output by the emotion analysis module appears blue. If aggressiveness is strong or the person is anxious and under stress, the vibration frequency rises to between 5 and 10 Hz, the image output by the emotion analysis module shifts to the yellow-red range, the vibration frequency switches rapidly, and the amplitude also increases.
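Formulas 8 and 9 themselves are not reproduced in the text above, so the following sketch only follows the stated idea: the amplitude map averages inter-frame signal changes at each pixel over M frame pairs, and the frequency map is a rough per-pixel estimate of how often the change exceeds a threshold, scaled by the frame rate. The threshold-based frequency estimate in particular is an assumption, not the patent's exact computation.

```python
import numpy as np

def vibration_images(frames, fps, change_threshold=2.0):
    """frames: (M + 1, H, W) grayscale stack covering the analysis window."""
    frames = frames.astype(np.float32)
    diffs = np.abs(np.diff(frames, axis=0))   # |O_(x,y,i+1) - O_(x,y,i)| for each of M frame pairs

    # Amplitude image: inter-frame signal change averaged over the M frame pairs.
    amplitude = diffs.mean(axis=0)

    # Frequency image: fraction of frame pairs whose change exceeds a threshold,
    # scaled by the frame rate; two threshold crossings are taken as one oscillation.
    frequency = (diffs > change_threshold).mean(axis=0) * fps / 2.0

    return amplitude, frequency
```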
In the step S5, when obtaining a plurality of potential emotion indexes at different time points, the method includes:
And obtaining the negative emotion index, the positive emotion index, the physiological index, the response sensitivity index and the response degree index at different time points.
Specifically, the measurement range of every index lies within the interval 0-100, and the reference range of each index is defined on the basis of statistics over large data samples; a measurement value that falls within the reference range generally indicates a normal state. A single index, however, does not represent the whole state, and a comprehensive judgment must be made according to the correlations among the indexes.
Further, when obtaining the negative emotion indexes at different time points, the method comprises the following steps:
Obtaining the aggression feature of the tested person, wherein the aggression feature represents the bluntness, irritability and agitation of the tested person; the higher its value, the more pronounced the aggressive tendency of the tested person, as shown in formula 10:
where G_k is the maximum frequency in the frequency-distribution density histogram, G_o is the count of the o-th frequency in the frequency-distribution density histogram obtained over N frames, G_oj is the vibration-image processing frequency, and j is the number of frames among the N frames whose frame difference exceeds a threshold value;
Obtaining the stress feature of the tested person, wherein the stress feature represents the degree of pressure borne by the tested person; the higher its value, the greater the pressure on the tested person, as shown in formula 11:
where, for the i-th line of the object image, the summed terms denote the total vibration-frequency amplitude on the left and right sides of the object and their maximum, together with the maximum vibration frequency on the left and right sides of the object and their maximum, and n is the number of effective lines of the object image;
Obtaining the anxiety feature of the tested person, wherein the anxiety feature represents a state of tension, worry, unease or apprehension; the higher its value, the greater the degree of anxiety of the tested person, as shown in formula 12:
where Q_i(g) is the spectral power distribution of the vibration-image frequency, and g_max is the maximum frequency in the spectrum of the vibration-image frequency distribution;
Obtaining the suspicion feature of the tested person, wherein the suspicion feature is defined as the average of the sum of the aggression, stress and anxiety features and is used to represent the level of negative emotion of the tested person, as shown in formula 13:
where R1 is the aggression feature, R2 the stress feature and R3 the anxiety feature;
wherein the negative emotion indexes include the aggression feature, the stress feature, the anxiety feature and the suspicion feature.
Specifically, the stress may be mental stress, life stress, work or study stress, illness stress, and the like.
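Of the negative emotion features, only the suspicion feature has a fully stated definition above (the mean of the aggression, stress and anxiety features); the sketch below implements that mean, plus a frequency-distribution histogram helper of the kind formulas 10 and 12 operate on. The bin count and the assumption that R1-R3 are already on the 0-100 scale are illustrative.

```python
import numpy as np

def frequency_histogram(frequency_image, bins=50):
    """Frequency-distribution density histogram of a vibration-frequency image,
    the kind of statistic the aggression and anxiety features are built from."""
    counts, edges = np.histogram(frequency_image.ravel(), bins=bins, density=True)
    return counts, edges

def suspicion_feature(r1_aggression, r2_stress, r3_anxiety):
    # Formula 13: the suspicion feature is the mean of the three negative features.
    return (r1_aggression + r2_stress + r3_anxiety) / 3.0
```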
Further, when obtaining the positive emotion indicators at different time points, the method includes:
Obtaining the balance feature of the tested person, wherein the balance feature represents the balance state of the tested person; a low value indicates that the tested person may suffer from dizziness or poor limb coordination, as shown in formula 14:
R5 = Na = (100 - 2B_s)% (14);
where B_s is the sum of the variation values of the emotion parameters;
Obtaining the confidence feature of the tested person, wherein the confidence feature represents the confidence level of the tested person; a low value indicates a lack of self-confidence, while a high value indicates excessive self-confidence, as shown in formula 15:
where |E_li - E_ri| is the difference between the average vibration amplitudes on the left and right sides of each line of the vibration image, |V_li - V_ri| is the difference between the maximum frequencies of the vibration amplitudes on the left and right sides of each line of the vibration image, and M is the number of frames processed;
Obtaining the vitality feature of the tested person, wherein the vitality feature represents the vitality state of the tested person; the lower its value, the less vitality the tested person has, as shown in formula 16:
where Z is the maximum count value in the frequency histogram, ω is the standard deviation of the vibration-image frequency calculated from the frequency histogram, and G_ps is the maximum value of the vibration-image input frequency;
Obtaining the self-regulation feature of the tested person, wherein the self-regulation feature represents the ability of the tested person to regulate emotion and control his or her own behavior and speech; the lower its value, the weaker the self-regulation ability, while a very high value indicates an obsessive-compulsive tendency, as shown in formula 17:
where R5 is the average value of the balance feature during the measurement, dR5 is the variation range of the balance feature, R6 is the average value of the confidence feature during the measurement, and dR6 is the variation range of the confidence feature;
wherein the positive emotion indexes include the balance feature, the confidence feature, the vitality feature and the self-regulation feature.
Specifically, dizziness or poor limb coordination of the tested person may have a physiological or psychological origin and can be judged in combination with other features. A low vitality feature may be caused by fatigue, diet, alcohol consumption and similar factors; anxiety and vitality features that are too high may be caused by environmental factors such as emergencies. Insufficient vitality of the tested person may result from illness, diet or sleep, or from the environment or some event the tested person is involved in.
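Formula 14 is the one positive-emotion formula given in full above: R5 = (100 - 2B_s)%, with B_s the sum of the variation values of the emotion parameters. The sketch below assumes each variation value is the range of that parameter over the measurement window, which the text does not specify, and clamps the result at 0.

```python
import numpy as np

def balance_feature(parameter_series):
    """parameter_series: dict mapping emotion-parameter name -> 1-D array of values
    sampled over the measurement window."""
    # B_s: sum of per-parameter variation values (here taken as the peak-to-peak range).
    b_s = sum(float(np.ptp(values)) for values in parameter_series.values())
    return max(0.0, 100.0 - 2.0 * b_s)   # formula 14: R5 = (100 - 2*B_s)%
```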
Further, when obtaining the physiological indexes at different time points, the method comprises the following steps:
Obtaining the inhibition feature of the tested person, wherein the inhibition feature represents the degree to which the tested person suppresses himself or herself; the higher its value, the more the tested person is in a self-suppressed, pessimistic and low-spirited state with a tendency toward depression, as shown in formula 18:
where G_1 is the frequency variation of the vibration image, Y_m is the average period of the frequency variation of the vibration image, and Y is the measurement period of the vibration image;
Obtaining the neuroticism feature of the tested person, wherein the neuroticism feature represents the sensitivity of the tested person; the higher its value, the stronger the neurotic tendency, while a lower value indicates that the tested person is broad-minded and unconstrained, as shown in formula 19:
R10 = Mt = 10ω(R9) (19);
where ω(R9) is the standard deviation of the inhibition feature;
wherein the physiological indexes include the inhibition feature and the neuroticism feature.
Further, when obtaining the response sensitivity index and the response degree index at different time points, the method includes:
The response sensitivity index and the response degree index take 0 as the boundary: when they are greater than 0, the tested person is in a physiologically healthy state; when they are less than 0, the tested person is in a sub-healthy, fatigued state and reacts slowly.
In the step S5, when obtaining a plurality of potential emotion indexes at different time points, the method further includes:
And obtaining fluctuation amplitudes of the same index at different time points, and generating emotion characteristic normal models of the testee according to the fluctuation amplitudes and a plurality of potential emotion indexes at different time points.
Specifically, the emotion trait norms include a composite score trend, an emotion profile, an emotion variation profile, a personality classification, a personal emotion measurement result, and a personal emotion index radar map.
The composite score trend presents the person's composite scores as a line graph; the person's emotional fluctuation can easily be seen from the color of the line nodes and the region in which they lie, as shown in fig. 2.
The emotion profile is a distribution of the averages of the 10 emotion parameters: positive emotion is the average (in %) of the 4 parameters balance, confidence, vitality and self-regulation; negative emotion is the average (in %) of the 4 parameters aggression, stress, anxiety and suspicion; and the physiological condition is the average (in %) of the 2 parameters inhibition and neuroticism. In general, positive emotion of about 50%, negative emotion of about 25% and a physiological condition of about 25% constitute a fairly ideal psychological and physiological state, as shown in fig. 3.
The emotion variation distribution map is a predictive pie chart calculated from the combined variability of the 10 parameters, where variability means the fluctuation amplitude of the same index within a single measurement. The variation map represents the expected trend of emotional change over a period after the test (assuming no emergencies, roughly the next two weeks). Normally, the physiological condition accounts for 50%, positive emotion for 25% and negative emotion for 25%; the larger the share of a portion, the larger its variation, i.e. the greater its instability, as shown in fig. 3.
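A small sketch of how the emotion profile and the variation profile of fig. 3 can be assembled: the ten parameters are grouped into positive (4), negative (4) and physiological (2) sets as described above, and each slice is that group's share of the total. The grouping follows the text; normalizing the three group means to percentages, and reusing the same grouping for the variability values, are assumptions.

```python
import numpy as np

POSITIVE = ("balance", "confidence", "vitality", "self_regulation")
NEGATIVE = ("aggression", "stress", "anxiety", "suspicion")
PHYSIOLOGICAL = ("inhibition", "neuroticism")

def _group_mean(values_by_name, names):
    return float(np.mean([values_by_name[n] for n in names]))

def emotion_profile(mean_by_name):
    """mean_by_name: parameter name -> mean value (0-100) over one measurement."""
    raw = {"positive": _group_mean(mean_by_name, POSITIVE),
           "negative": _group_mean(mean_by_name, NEGATIVE),
           "physiological": _group_mean(mean_by_name, PHYSIOLOGICAL)}
    total = sum(raw.values())
    return {group: 100.0 * value / total for group, value in raw.items()}

def variation_profile(range_by_name):
    """range_by_name: parameter name -> fluctuation amplitude within one measurement."""
    return emotion_profile(range_by_name)   # same grouping, applied to the variability values
```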
The personality classification roughly estimates the character of the tested person from the detection records within the detection time range, and the character is interpreted according to the quadrant in which the emotion coordinates fall, as shown in fig. 4.
The results of the personal emotion measurements are shown in particular in fig. 5.
A personal emotion index radar map may be drawn from the personal emotion measurements within the statistical interval, as shown in fig. 6 in particular.
In step S7, the data integration module performs a comparative analysis of the group emotion to obtain a group emotion distribution map, as shown in fig. 7. Comparing the emotion distributions of the detected persons shows their emotional states intuitively: the large dot represents the emotion of the currently selected person, and the small dots represent the emotions of the other persons. If the selected person's point is clearly separated from the group and approaches the 'negative' region in the lower-right corner, that person's emotional state is poor and needs close attention. From the quadrant in which a person's emotion falls, the stability and introversion/extroversion of the person's emotion can also be seen.
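The text states only that the norm models of several tested persons are compared to screen out negative-emotion outliers; it does not give a concrete rule. The sketch below is therefore purely illustrative: each person is reduced to a single negativity score (negative minus positive share of the profile) and persons whose score is far above the group mean are flagged. The score definition and the z-score threshold are assumptions.

```python
import numpy as np

def screen_negative_outliers(profiles, z_threshold=2.0):
    """profiles: dict person_id -> dict with 'positive' and 'negative' percentages
    (e.g. the output of emotion_profile above)."""
    ids = list(profiles)
    negativity = np.array([profiles[i]["negative"] - profiles[i]["positive"] for i in ids])
    z_scores = (negativity - negativity.mean()) / (negativity.std() + 1e-9)
    return [pid for pid, z in zip(ids, z_scores) if z > z_threshold]
```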
The emotion detection system based on image acquisition provided by the invention comprises:
A video acquisition module: used for acquiring the head image of the tested person, collected by an image acquisition device while the tested person sits upright;
And an image analysis module: judging whether the feature extraction condition is met according to the position, background color and surface illuminance of the head of the tested person in the head image picture; if yes, extracting face characteristic information from the head image, identifying the tested person according to the face characteristic information, acquiring identity information of the tested person and binding the identity information; in a preset time period, carrying out multiple analysis processing on the head vestibular image to obtain basic data at a plurality of different time points, and respectively displaying the basic data at the plurality of different time points in a vibration image mode to obtain vibration images at the different time points;
Emotion analysis module: the method comprises the steps of acquiring head vibration frequency and head vibration amplitude at different time points through vibration images at different time points, and analyzing the head vibration frequency and the head vibration amplitude to acquire a plurality of potential emotion indexes at different time points;
And a data integration module: generating an emotion characteristic normal model of the tested person according to the plurality of potential emotion indexes at different time points, and reflecting the emotion of the tested person through the emotion characteristic normal model; obtaining the emotion characteristic normal models of a plurality of tested persons, comparing and analyzing the group emotion, and screening out negative-emotion outliers;
And obtaining fluctuation amplitudes of the same index at different time points, and generating emotion characteristic normal models of the testee according to the fluctuation amplitudes and a plurality of potential emotion indexes at different time points.
Compared with the prior art, the invention has the beneficial effects that:
(1) On the basis that the vibration image gives real-time and objective personal emotion detection results, the invention introduces a tested-person management function, so that multiple measurement results of different tested persons can be distinguished and collected. Multiple measurements of a single tested person are compared with one another, and single measurements of different tested persons are compared with one another, so that an effective emotion file is established for the group in real time through data comparison across multiple dimensions. This helps a manager screen out the persons in the group whose emotion is most negative, pre-judge the trend of individual emotion, and apply focused attention and work adjustment to the relevant persons, thereby truly realizing safety management of group emotion;
(2) The invention also uses an image analysis method and environment compensation equipment to assist in obtaining high-quality head vibration images during emotion detection, which improves the accuracy of the detection results.
The above embodiments are only preferred embodiments of the present invention, and the scope of the present invention is not limited thereto, but any insubstantial changes and substitutions made by those skilled in the art on the basis of the present invention are intended to be within the scope of the present invention as claimed.

Claims (10)

1. An emotion detection method based on image acquisition is characterized by comprising the following steps:
Acquiring a head image of a tested person, collected by an image acquisition device while the tested person sits upright;
Judging whether a feature extraction condition is met according to the position, background color and surface illuminance of the head of the tested person in the head image picture;
if yes, extracting face characteristic information from the head image, identifying the tested person according to the face characteristic information, acquiring identity information of the tested person and binding the identity information;
In a preset time period, carrying out multiple analysis processing on the head vestibular image to obtain basic data at a plurality of different time points, and respectively displaying the basic data at the plurality of different time points in a vibration image mode to obtain vibration images at the different time points;
Acquiring head vibration frequency and head vibration amplitude at different time points through the vibration images at different time points, and analyzing the head vibration frequency and the head vibration amplitude to acquire a plurality of potential emotion indexes at different time points;
Generating emotion characteristic normal models of the testee according to the multiple potential emotion indexes at different time points, and reflecting the emotion of the testee through the emotion characteristic normal models;
Obtaining emotion characteristic normal models of a plurality of tested persons, comparing and analyzing the group emotion, and screening out negative-emotion outliers;
And obtaining fluctuation amplitudes of the same index at different time points, and generating an emotion characteristic normal model of the tested person according to the fluctuation amplitudes and the multiple potential emotion indexes at different time points.
2. The image-acquisition-based emotion detection method according to claim 1, characterized by comprising, when judging whether or not a feature extraction condition is satisfied:
judging whether the head of the tested person in the head image picture is at the center position or not;
judging whether the background color of the head of the tested person in the head image picture is pure color or not;
Judging whether the head acquisition distance of the tested person is within a preset distance range value or not;
judging whether the surface illuminance of the face of the detected person in the head image picture is within a preset surface illuminance range value or not;
If the above-mentioned judging conditions are all satisfied, the feature extraction condition is considered to be satisfied.
3. The image-acquisition-based emotion detection method according to claim 1, characterized by comprising, after identifying the subject from the face feature information:
Judging whether the tested person is a registered person, if so, automatically extracting personnel information from a registered information base, and binding;
if not, prompting the testee to register, and binding according to personnel information input during registration;
after binding is completed, the subsequently detected data are collected to corresponding binding personnel.
4. The image-acquisition-based emotion detection method according to claim 3, characterized by comprising, in judging whether or not the subject is a registered person:
Obtaining a preset face recognition model by adopting a principal component analysis method, and training the preset face recognition model through a face recognition training sample to obtain a registrant recognition model;
And identifying the face characteristic information by using the registered person identification model, determining whether matched persons exist in the registered information base according to an identification result, and if so, considering the detected person as the registered person.
5. The emotion detection method based on image acquisition according to claim 4, characterized by comprising, when obtaining a registered person identification model:
carrying out geometric normalization processing on the face images in the registered information base to obtain standardized face images, and storing each standardized face image by rows into a column vector group {Z_1, Z_2, Z_3, ...} to obtain a corresponding face matrix;
performing a K-L transform on the face matrix to obtain the average registered face, as shown in formula 1:
where h is the number of row pixel points and j is the number of column pixel points of the standardized face images, and Z_i is the corresponding face image in the column vector group;
obtaining the covariance matrix of all face recognition training samples from the average registered face, as shown in formula 2:
where T denotes matrix transposition and μ_Z is the average registered face;
obtaining the difference vector between each face matrix Z_i and the average registered face μ_Z, as shown in formula 3:
E = (E_1, ..., E_n) (3);
where E_i = Z_i - μ_Z;
orthonormalizing the difference vectors to obtain formula 4:
projecting all face recognition training sample images into the feature subspace P to obtain a registrant recognition model formed by the coordinate coefficients of each face recognition training sample image in the feature subspace P, as shown in formula 5:
y_i = P^T(Z_i - μ_Z) = P^T E_i (i = 1, 2, ..., h) (5);
where P ∈ R^(j×(h-1)), Z_i - μ_Z = E_i ∈ R^j, and y_i ∈ R^(h-1);
each standardized face image is composed of h × j pixel points.
6. The image acquisition-based emotion detection method according to claim 5, characterized by comprising, when the face feature information is recognized using the registered person recognition model:
projecting the face image in the face feature information into the feature subspace P to obtain a corresponding coordinate coefficient, as shown in formula 6:
y_test = P^T(Z_test - μ_Z) (6);
where Z_test is the face image in the face feature information;
according to the coordinate coefficient, determining in the registered person identification model whether a face image satisfying the minimum Euclidean distance exists by solving a Euclidean-distance objective function; if so, considering the tested person a registered person, as shown in formula 7:
7. The emotion detection method based on image acquisition according to claim 1, characterized in that obtaining the vibration images at different time points comprises:
analyzing, within a preset time period, the motion data of each point of the head of the tested person in the head vestibular image by using vibration image technology, and determining the amplitude parameter of the head vibration and the frequency parameter of the head vibration at each point;
and obtaining vibration images at different time points according to the amplitude parameter of the head vibration and the frequency parameter of the head vibration.
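One plausible reading of claim 7, sketched below, is that the different time points correspond to consecutive windows of the frame sequence, each reduced to a per-point vibration record (for example the amplitude and frequency images of claim 8, sketched after that claim). The window size, the record layout and the function names are assumptions, not the patent's stated procedure.

```python
import numpy as np

def vibration_images_at_time_points(frames, window, reduce_fn):
    """frames: (N, H, W) grayscale head images over the preset time period.
    window: number of frames assigned to each time point.
    reduce_fn: maps a (window, H, W) chunk to per-point vibration data,
    e.g. the amplitude/frequency images sketched after claim 8."""
    frames = np.asarray(frames, dtype=float)
    records = []
    for start in range(0, frames.shape[0] - window + 1, window):
        chunk = frames[start:start + window]
        records.append({"start_frame": start, "vibration_image": reduce_fn(chunk)})
    return records
```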
8. The emotion detection method based on image acquisition according to claim 7, characterized by further comprising, when obtaining the vibration images at different time points:
obtaining a head vibration amplitude image A according to the amplitude parameter of the head vibration, as specifically shown in formula 8:
A_(x,y) = (1/M)·Σ_{i=1}^{M} |O_(x,y,i) − O_(x,y,(i+1))|  (8);
obtaining a head vibration frequency image according to the frequency parameter of the head vibration, as specifically shown in formula 9:
wherein x, y are the coordinates of a point, O_(x,y,i) is the signal amplitude of point (x, y) in the i-th frame, O_(x,y,(i+1)) is the signal amplitude of point (x, y) in the (i+1)-th frame, M is the number of frames used for averaging the amplitude components of the head vibration image, and G_in is the signal vibration frequency in the head vestibular image;
Wherein the vibration image includes a head vibration amplitude image and a head vibration frequency image.
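A sketch of the per-window computation: the amplitude image follows the averaged inter-frame difference of formula 8, while the body of formula 9 is not preserved in this text, so the frequency image below is estimated from sign changes of the inter-frame signal and is purely an assumption. This function can serve as the reduce_fn in the windowing sketch given after claim 7.

```python
import numpy as np

def head_vibration_images(frames, fps=25.0):
    """frames: (M+1, H, W) grayscale head images for one time point.
    Returns the head vibration amplitude image (formula 8) and an assumed
    head vibration frequency image."""
    frames = np.asarray(frames, dtype=float)
    diffs = frames[1:] - frames[:-1]                    # O_(x,y,i+1) - O_(x,y,i) at every point
    M = diffs.shape[0]
    amplitude_image = np.abs(diffs).sum(axis=0) / M     # formula 8: averaged amplitude component

    # Assumed frequency image: count sign changes of the inter-frame signal at
    # each point and convert the count to Hz using the capture frame rate.
    sign_changes = (np.diff(np.sign(diffs), axis=0) != 0).sum(axis=0)
    frequency_image = sign_changes * fps / (2.0 * M)
    return amplitude_image, frequency_image
```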
9. The emotion detection method based on image acquisition according to claim 1, characterized in that obtaining the plurality of potential emotion indexes at different time points comprises:
obtaining a negative emotion index, a positive emotion index, a physiological index, a response sensitivity index and a response degree index at different time points.
10. An emotion detection system based on image acquisition, comprising:
a video acquisition module, configured to acquire the head image of the tested person, sitting upright, captured by the image acquisition device;
an image analysis module, configured to: judge whether the feature extraction condition is met according to the position, background color and surface illuminance of the head of the tested person in the head image picture; if so, extract face feature information from the head image, identify the tested person according to the face feature information, and acquire and bind the identity information of the tested person; and, within a preset time period, perform multiple analysis processes on the head vestibular image to obtain basic data at a plurality of different time points, and display the basic data at the plurality of different time points in the form of vibration images, respectively, to obtain vibration images at different time points;
an emotion analysis module, configured to acquire the head vibration frequency and the head vibration amplitude at different time points from the vibration images at different time points, and analyze the head vibration frequency and the head vibration amplitude to obtain a plurality of potential emotion indexes at different time points;
a data integration module, configured to generate an emotion characteristic normal model of the tested person according to the plurality of potential emotion indexes at different time points, and reflect the emotion of the tested person through the emotion characteristic normal model; obtain the emotion characteristic normal models of a plurality of tested persons, compare and analyze the group emotion, and screen out negative emotion outliers;
and obtain the fluctuation amplitude of the same index at different time points, and generate the emotion characteristic normal model of the tested person according to the fluctuation amplitude and the plurality of potential emotion indexes at different time points.
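Finally, a loose sketch of what the data integration module could do: summarize each person's potential emotion indexes into a normal model of per-index mean and fluctuation amplitude, then flag negative emotion outliers against the group. The mean/peak-to-peak summary and the z-score rule are assumptions; the patent does not state how the normal model or the outlier check is computed.

```python
import numpy as np

def emotion_normal_model(index_series):
    """index_series: dict mapping an index name ('negative', 'positive',
    'physiological', ...) to its values at different time points."""
    return {name: {"mean": float(np.mean(values)),
                   "fluctuation": float(np.ptp(values))}   # fluctuation amplitude (max - min)
            for name, values in index_series.items()}

def negative_emotion_outliers(models, key="negative", z_threshold=2.0):
    """models: dict mapping person_id to a normal model.  Flags people whose mean
    negative emotion index deviates strongly from the group (assumed z-score rule)."""
    means = {pid: model[key]["mean"] for pid, model in models.items()}
    values = np.array(list(means.values()), dtype=float)
    mu, sigma = values.mean(), values.std()
    if sigma == 0.0:
        return []
    return [pid for pid, m in means.items() if (m - mu) / sigma > z_threshold]
```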
CN202310217661.9A 2023-03-08 2023-03-08 Emotion detection method and system based on image acquisition Active CN116311510B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202310217661.9A CN116311510B (en) 2023-03-08 2023-03-08 Emotion detection method and system based on image acquisition

Publications (2)

Publication Number Publication Date
CN116311510A (en) 2023-06-23
CN116311510B (en) 2024-05-31

Family

ID=86782851

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202310217661.9A Active CN116311510B (en) 2023-03-08 2023-03-08 Emotion detection method and system based on image acquisition

Country Status (1)

Country Link
CN (1) CN116311510B (en)

Patent Citations (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
RU2009140207A (en) * 2009-10-26 2011-05-10 Многопрофильное предприятие ООО "Элсис" (RU) METHOD OF OBTAINING INFORMATION ABOUT PSYCHOPHYSIOLOGICAL CONDITION OF A LIVING OBJECT
WO2012057646A1 (en) * 2010-10-28 2012-05-03 Общество С Ограниченной Ответственностью "Многопрофильное Предприятие "Элсис" Method for obtaining information about the psychophysiological state of a living being
CN111104815A (en) * 2018-10-25 2020-05-05 北京入思技术有限公司 Psychological assessment method and device based on emotion energy perception
CN110765838A (en) * 2019-09-02 2020-02-07 合肥工业大学 Real-time dynamic analysis method for facial feature region for emotional state monitoring
KR20210045552A (en) * 2019-10-16 2021-04-27 임대열 Method and apparatus for obtaining emotion and physical state information of human using machine learning
CN111631735A (en) * 2020-04-26 2020-09-08 华东师范大学 Abnormal emotion monitoring and early warning method based on video data vibration frequency
CN112150759A (en) * 2020-09-23 2020-12-29 北京安信智文科技有限公司 Real-time monitoring and early warning system and method based on video algorithm
CN112560770A (en) * 2020-12-25 2021-03-26 温州晶彩光电有限公司 Method and system for positioning intelligent colorful lamplight based on face recognition technology
CN113647950A (en) * 2021-08-23 2021-11-16 北京图安世纪科技股份有限公司 Psychological emotion detection method and system
CN114792553A (en) * 2021-12-28 2022-07-26 江苏博子岛智能产业技术研究院有限公司 Method and system for screening psychological health group of students
CN115736922A (en) * 2022-11-16 2023-03-07 北京数智天安科技有限公司 Emotion normalization monitoring system and method based on trusted environment

Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
Aswin K. M. HERS: Human emotion recognition system. IEEE Xplore, 2017, full text. *
Li Bocheng. Design and application of an affective computing model for automobile drivers based on the vestibular nerve reflex. Shanghai Auto, 2022, No. 11, full text. *
Gu Hongmei. Application of the vibration image technology system in assessing students' emotional state in class. Journal of People's Public Security University of China (Natural Science Edition), 2019, No. 04, full text. *

Also Published As

Publication number Publication date
CN116311510A (en) 2023-06-23

Similar Documents

Publication Publication Date Title
CN109069072B (en) Fraud detection system and method
US11497423B2 (en) System and method for detecting physiological state
JP4401079B2 (en) Subject behavior analysis
WO2016129193A1 (en) Health management assist device and health management assist method
CN107809951B (en) Psychophysiological detection (lie detection) method and apparatus for distortion using video-based detection of physiological signals
KR101722708B1 (en) Method and System for social relationship based on HRC by Micro movement of body
KR101689021B1 (en) System for determining psychological state using sensing device and method thereof
CN111803032B (en) Large-area observation method and system for suspected infection of Xinguan pneumonia
JP2017093760A (en) Device and method for measuring periodic variation interlocking with heart beat
KR101500888B1 (en) Method for obtaining information about the psychophysiological state of a living being
CN111887867A (en) Method and system for analyzing character formation based on expression recognition and psychological test
CN112957042A (en) Non-contact target emotion recognition method and system
CN116311510B (en) Emotion detection method and system based on image acquisition
CN111803031A (en) Non-contact type drug addict relapse monitoring method and system
Purtov et al. Remote photoplethysmography application to the analysis of time-frequency changes of human heart rate variability
Vasavi et al. Regression modelling for stress detection in humans by assessing most prominent thermal signature
Isaeva et al. Making decisions in intelligent video surveillance systems based on modeling the pupillary response of a person
Lamsal et al. Drowsiness and tiredness detection system by observing the visible properties of human eyes
CN106725364B (en) Controller fatigue detection method and system based on probability statistical method
KR102314153B1 (en) Method and Apparatus for Determining Synchronization of Social Relations based on personality
Thannoon et al. A survey on deceptive detection systems and technologies
Hajare et al. Analyzing the Biosignal to Make Fatigue Measurement as a Parameter for Mood Detection
WO2020044249A1 (en) Method and device for evaluating brain fatigue by using contactless imaging system
JP2024007407A (en) Information processing device
CN118098581A (en) Emotion state monitoring method and system

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant