CN117153403A - Mental health evaluation method based on micro-expressions and physical indexes - Google Patents
- Publication number
- CN117153403A (application number CN202311174763.3A)
- Authority
- CN
- China
- Prior art keywords
- user
- data
- expression
- micro
- change
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
- G16H50/30—ICT for calculating health indices; for individual health risk assessment
- A61B3/113—Objective instruments for determining or recording eye movement
- A61B5/0059; A61B5/0077—Diagnosis using light; devices for viewing the surface of the body, e.g. camera
- A61B5/0205—Simultaneously evaluating cardiovascular and other body conditions, e.g. heart and respiratory condition
- A61B5/021—Measuring pressure in heart or blood vessels
- A61B5/02438—Detecting pulse rate or heart rate with portable devices, e.g. worn by the patient
- A61B5/0261—Measuring blood flow using optical means, e.g. infrared light
- A61B5/0816—Measuring respiratory frequency
- A61B5/14551—Measuring blood gases in vivo using optical sensors
- A61B5/163—Evaluating the psychological state by tracking eye movement, gaze, or pupil change
- A61B5/165—Evaluating the state of mind, e.g. depression, anxiety
- A61B5/316—Bioelectric signal modalities
- A61B5/4803—Speech analysis specially adapted for diagnostic purposes
- A61B5/681—Wristwatch-type sensing devices
- A61B5/7235—Details of waveform analysis
- A61B5/7264—Classification of physiological signals or data, e.g. using neural networks
- G06N3/0442—Recurrent networks characterised by memory or gating, e.g. LSTM or GRU
- G06N3/0464—Convolutional networks [CNN, ConvNet]
- G06N3/084—Learning by backpropagation, e.g. using gradient descent
- G06V10/44—Local feature extraction by analysis of parts of the pattern
- G06V10/62—Image or video features relating to a temporal dimension; pattern tracking
- G06V10/765—Classification using rules for partitioning the feature space
- G06V10/82—Image or video recognition using neural networks
- G06V20/46—Extracting features from video content, e.g. key frames
- G16H10/60—Handling patient-specific data, e.g. electronic patient records
- G16H20/70—ICT for mental therapies, e.g. psychological therapy
- G16H50/70—Mining of medical data
- Y02A90/10—ICT supporting adaptation to climate change
Abstract
The application discloses a mental health evaluation method based on micro-expressions and body indexes. First, the user's micro-expression data are acquired: the user's expression is filmed, 20-40 facial points are taken from the image, and models are built from these points, including a CNN model based on facial key points and an ELRCN model based on optical-flow image features. At the same time, a continuously updated open-source expression library is built, and the overall data are corrected against the individual's actual expressions. The method integrates the user's micro-expression data, facial color data, eye movement data and physiological data and assigns them different weights in the psychological assessment, increasing its diversity, accuracy and comprehensiveness. A comprehensive psychological dynamic change chart is drawn from the composite score, visualizing the user's emotional fluctuation.
Description
Technical Field
The application relates to the technical field of micro-expression recognition, and in particular to a mental health evaluation method based on micro-expressions and body indexes.
Background
A micro-expression is a special kind of facial expression. Compared with ordinary expressions, micro-expressions are short in duration (usually only 1/25 s to 1/3 s), low in action intensity, and hard to detect, and they are difficult to suppress or fake. Whereas an ordinary expression can generally be analyzed from a single image, a micro-expression must usually be analyzed in video. Because micro-expressions occur spontaneously and unconsciously, are hard to disguise, and are usually directly linked to genuine emotion, they are reliable for emotion analysis and have broad application prospects. On the other hand, manual micro-expression recognition is difficult, training human raters is costly and their success rate is low, so micro-expressions need to be recognized automatically by computer.
Existing recognition schemes have the following shortcomings:
1. Patent document US07623687B2 discloses three-dimensional face recognition: "An apparatus for obtaining three-dimensional data of a geometry for matching, and in particular for face matching, comprises a three-dimensional scanner for obtaining three-dimensional topographical data of a body, a triangulation instrument for receiving or forming a triangulated manifold from said data, a geodesic converter for converting the triangulated manifold into a series of geodesic distances between manifold points, and a multi-dimensional scaler for forming a low-dimensional Euclidean representation of the series of geodesic distances, producing a bending-invariant representation of the geometry. In one variation, matching is performed by using the eigenvalues of the representation as coordinates in a feature space. Different poses or presentations of the same face tend to form clusters in the feature space, which allows matching. The device preferably uses the fast marching method on the triangulated domain to obtain the geodesic distances";
2. Patent document US09208375B2 discloses a face recognition mechanism: "The present disclosure relates to a face recognition method and apparatus, and a computer-readable recording medium for performing the method. According to some aspects of the disclosure, the face recognition method includes: (a) a key-point setting step of setting key points at designated positions on an input face image; (b) a key-point descriptor extraction step of extracting a descriptor for each key point; and (c) a matching step of determining whether the input face image matches a previously stored face image, by comparing, for each first key point obtained from the input face image, the descriptors of the second key points of the stored face image that lie within a designated area around the corresponding first key point";
3. Patent document US08224042B2 discloses automatic face recognition: "In a first exemplary embodiment, a method for automatic face recognition includes several acts. First, a face pattern and two eye patterns are detected. Then the face pattern is normalized. Next, the normalized face pattern is converted into a normalized face feature vector of Gabor feature representations. A differential image vector is then calculated and projected onto a lower-dimensional intra-subject subspace extracted from a pre-collected training face database. A squaring function is applied to each component of the projection, and a weighted sum of the squared projections is computed. The previous four acts are repeated for each normalized gallery image feature vector. Finally, the face pattern in the digital image is recognized as belonging to the gallery image with the highest calculated weighted sum, provided that this weighted sum is above a predetermined threshold";
4. Patent document US07430315B2 discloses a face recognition system: "The face detection system and method attempt to classify test images before performing all kernel evaluations. Many sub-images are not faces and should be relatively easy to reject, so the SVM classifier attempts to discard non-face images using as few kernel evaluations as possible, via cascaded SVM classification. In the first stage, a score is calculated from the first two support vectors and compared to a threshold. If the score is below the threshold, the sub-image is classified as not a face. If it is above the threshold, the cascaded SVM classification continues to apply progressively more complex decision rules, doubling the number of kernel evaluations each time, and classifies the image as non-face (terminating the process) as soon as the test image fails one of the decision rules. Finally, if the sub-image satisfies all intermediate decision rules and the point is reached where all support vectors must be considered, the original decision function is applied. Satisfying this final rule, and all intervening rules, is the only way for a test image to receive a positive (face) classification";
In summary, most existing micro-expression recognition technology is inaccurate, and facial key-point recognition schemes are not precise enough. In addition, facial micro-expressions appear and vanish quickly, so a high-definition camera is needed and the footage must be evaluated frame by frame. Moreover, micro-expression recognition has so far been used only to monitor changes in the user's emotions; in daily life, however, emotional changes often signal changes in physical and mental health, so the value of micro-expressions lies not only in monitoring emotional change but also in indirectly monitoring the user's mental health.
Disclosure of Invention
The application aims to provide a mental health evaluation method based on micro-expressions and physical indexes that solves the problems identified in the background section above.
In order to achieve the above aim, the application provides the following technical solution, a mental health evaluation method based on micro-expressions and body indexes: step one, acquire the user's micro-expression data; film the user's expressions, take 20-40 points in the image, and build models from these points, including a CNN model based on facial key points and an ELRCN model based on optical-flow image features. At the same time, build a continuously updated open-source expression library, and correct the overall data against the individual's actual expressions;
Step two, acquire the user's facial color data, monitor facial blood flow velocity and blood component changes, and build a chart relating facial color to blood flow velocity and component change;
Step three, acquire the user's eye movement data using non-infrared RGBD (color plus depth) gaze tracking, combining an offline-built personalized three-dimensional face model with real-time gaze estimation by a deep convolutional neural network based on eye appearance;
Step four, acquire the user's physiological data;
Step five, comprehensively associate the data;
Step six, dynamically evaluate the user's mental health: comprehensively analyze the data to obtain the user's mental health index, draw a comprehensive psychological dynamic change chart from the composite score (Z), and judge the degree of mental health.
Preferably, in step one, 10-30 key frames are extracted from the filmed video; the variation of the user's expression is obtained across these key frames, facial data points are marked in each of them, and a change-trend chart of the facial points for a given expression is derived from how those points move.
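The key-frame sampling and facial-point trend described above can be sketched as follows. The patent does not fix a sampling rule, so uniform spacing and mean landmark displacement are illustrative assumptions:

```python
def select_key_frames(num_frames, num_keys=20):
    """Pick `num_keys` evenly spaced frame indices from a clip of
    `num_frames` frames (one simple way to realize the 10-30
    key-frame sampling; uniform spacing is an assumption)."""
    if num_keys >= num_frames:
        return list(range(num_frames))
    step = (num_frames - 1) / (num_keys - 1)
    return [round(i * step) for i in range(num_keys)]

def landmark_trend(key_frame_points):
    """Given landmark coordinates [(x, y), ...] for each key frame,
    return each frame's mean displacement from the first key frame --
    a 1-D 'change trend' of the facial points."""
    base = key_frame_points[0]
    trend = []
    for pts in key_frame_points:
        d = sum(((x - bx) ** 2 + (y - by) ** 2) ** 0.5
                for (x, y), (bx, by) in zip(pts, base)) / len(pts)
        trend.append(d)
    return trend
```

For a 100-frame clip, `select_key_frames(100, 20)` yields 20 indices from 0 to 99, and feeding the per-frame landmarks to `landmark_trend` gives the displacement curve plotted in the trend chart.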
Preferably, the video clips are shot with a high-speed camera at 200-400 FPS, and the facial resolution of the clips can reach roughly 280 x 340 to 460 x 750 pixels. The CASME II dataset labels micro-expressions with 5 categories: Happiness, Disgust, Surprise, Repression and Others. In addition, the onset frame, apex frame and offset frame of each micro-expression must be annotated in the dataset.
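A minimal record for one CASME II-style annotation, holding the class label and the onset/apex/offset frames named above (the dictionary layout is an assumption for illustration; the dataset itself distributes annotations as a spreadsheet):

```python
# The five CASME II micro-expression classes named in the text.
CASME2_CLASSES = {"happiness", "disgust", "surprise", "repression", "others"}

def make_annotation(label, onset, apex, offset):
    """Validate and return one clip annotation; frame indices must
    satisfy onset <= apex <= offset."""
    label = label.lower()
    if label not in CASME2_CLASSES:
        raise ValueError(f"unknown class: {label}")
    if not (onset <= apex <= offset):
        raise ValueError("expected onset <= apex <= offset")
    return {"label": label, "onset": onset, "apex": apex, "offset": offset}
```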
Preferably, in step one, when the user's expressions are collected, short videos of 10-15 s are played for the user; the video content covers daily life, entertainment, sports, music, food, fashion and animation, and the user's expressions in reaction to the content are captured continuously by the camera.
Preferably, in step two, facial color is related to hemoglobin concentration: changes in blood flow and blood composition alter the hemoglobin concentration and thereby the skin color. In anger, for example, action units AU4+AU5+AU7+AU24 move, blood flows faster and at a higher rate through the vessels, hemoglobin concentration rises, and the face reddens. The facial vessels include the supratrochlear artery, the supraorbital artery, the sentinel vein, the superior palpebral vein, the lateral nasal artery and the inferior labial vein; 3-5 of these vessels and their branches are selected, and their blood flow rate and the hemoglobin content of the blood within them are monitored.
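As a toy illustration of the color-to-hemoglobin relationship above, a per-region "redness index" can serve as a crude proxy. This ratio is purely an assumption for illustration; real analysis would require calibrated spectral models of skin reflectance:

```python
def redness_index(roi_pixels):
    """Crude per-region 'redness' proxy: mean R over mean (R+G+B).
    A rising value is treated as a rough stand-in for the increased
    facial hemoglobin concentration described above (a simplifying
    assumption, not the patent's actual measurement)."""
    r = sum(p[0] for p in roi_pixels)
    total = sum(p[0] + p[1] + p[2] for p in roi_pixels)
    return r / total if total else 0.0
```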
Preferably, when eye movement data are collected, an eye tracker running at 30-300 Hz continuously records the trajectory of the eyeball; each data point is identified by a time tag and an (x, y) coordinate and is sent to the database of an analysis program running on a computer connected to the eye tracker. The raw data are filtered with three fixation filtering algorithms (ClearView, Tobii, I-VT) to obtain the fixation-point data.
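Of the three filters, I-VT (velocity-threshold identification) is straightforward to sketch: samples whose point-to-point velocity stays below a threshold are merged into fixations. The threshold and minimum-duration defaults below are common illustrative values, not figures from the patent:

```python
def ivt_fixations(samples, hz, max_vel=30.0, min_dur_s=0.06):
    """I-VT fixation filter sketch.  `samples` is [(x, y), ...] in
    degrees at `hz` Hz; runs of low-velocity samples become fixations,
    returned as (centroid_x, centroid_y, duration_s) tuples."""
    def flush(run, out):
        if run and len(run) / hz >= min_dur_s:
            cx = sum(p[0] for p in run) / len(run)
            cy = sum(p[1] for p in run) / len(run)
            out.append((cx, cy, len(run) / hz))
    fixations, run = [], []
    for i, (x, y) in enumerate(samples):
        if i == 0:
            v = 0.0
        else:
            px, py = samples[i - 1]
            v = ((x - px) ** 2 + (y - py) ** 2) ** 0.5 * hz  # deg/s
        if v <= max_vel:
            run.append((x, y))
        else:                      # saccade sample: close the run
            flush(run, fixations)
            run = []
    flush(run, fixations)
    return fixations
```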
Preferably, step two further includes judging the context: a speech recognition system based on a bidirectional long short-term memory network (BiLSTM) is used, which can model the long-range correlations of speech and thereby improve recognition accuracy, and the user's emotion is judged from the recognized context and the user's speech.
Preferably, in step four, a multifunctional wristband is used together with the vital-sign tests: a non-invasive physiological signal detection method based on remote photoplethysmographic imaging obtains the changes in the user's heart rate, blood pressure and blood oxygen concentration, and an association between these vital signs and emotion changes is established; in addition, the model is corrected with auxiliary monitoring of the skin conductance level, respiratory rate and cardiovascular sympathetic activation indexes.
Preferably, in step four, the multifunctional wristband acquires user data once every 10-30 s, and the user's blood pressure is detected with a blood pressure detector once every 20-40 s.
Preferably, in step five, the four kinds of data, namely the user's micro-expressions (W*), facial color (M*), eye movement (Y*) and physiological indexes (S*), are combined, and each index is scored with a different weight (a1, a2, a3, a4) to obtain a composite score Z, where Z = a1·W* + a2·M* + a3·Y* + a4·S*.
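The composite score follows directly from the formula; the sketch below uses illustrative placeholder weights, since the application does not fix the values of a1-a4.

```python
def composite_score(W, M, Y, S, weights=(0.4, 0.2, 0.2, 0.2)):
    """Z = a1*W* + a2*M* + a3*Y* + a4*S*.

    W, M, Y, S: micro-expression, facial-color, eye-movement and
    physiological index scores. weights: (a1, a2, a3, a4), assumed to
    sum to 1 so Z stays on the same scale as its inputs.
    """
    a1, a2, a3, a4 = weights
    return a1 * W + a2 * M + a3 * Y + a4 * S
```

With equal weights the composite score reduces to the plain average of the four index scores.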
Compared with the prior art, the application has the beneficial effects that:
1. According to the application, the user's micro-expression data, facial color data, eye movement data and physiological data are integrated, and the psychological assessment is weighted with different weights, which increases the plurality, accuracy and comprehensiveness of the psychological assessment;
2. According to the composite score, a comprehensive psychological dynamic change chart is drawn, so that emotion fluctuation is visualized;
3. The application can improve the recognition accuracy of micro-expressions through the recognition and judgment of a plurality of key frames;
4. The application comprehensively judges the psychological health condition of the user through the recognition of micro-expressions and the detection of human vital signs, so that the user can conveniently monitor his or her own physical state.
Drawings
FIG. 1 is a diagram of the composite score change of the present application;
FIG. 2 is a graph showing the overall score change according to the present application;
FIG. 3 is a diagram of the facial recognition points under different emotion changes.
Detailed Description
The following description of the embodiments of the present application will be made clearly and completely with reference to the accompanying drawings, in which it is apparent that the embodiments described are only some embodiments of the present application, but not all embodiments. All other embodiments, which can be made by those skilled in the art based on the embodiments of the application without making any inventive effort, are intended to be within the scope of the application.
Example 1: referring to FIG. 1, FIG. 2 and FIG. 3, the mental health evaluation method based on micro-expressions and physical indexes includes the following steps: step one, obtaining the user's micro-expression data: the user's expression is photographed, 20 landmark points are taken in the image and a model is established from these points, a CNN model being built with the facial key-point method and an ELRCN model with the optical-flow image feature method, among others; meanwhile, a real-time, richly populated open-source expression library is established, and the overall data are corrected according to the individual's actual expressions;
step two, obtaining facial color data of a user, monitoring facial blood flow velocity and component change, and establishing a relation chart of color and facial blood flow velocity and component change according to the facial color data;
step three, obtaining the user's eye movement data with a non-infrared RGBD (red, green, blue, depth)-based eye-gaze tracking method, using an offline-built personalized three-dimensional face model and a real-time gaze estimation method based on a deep convolutional neural network over eye appearance;
step four, acquiring physiological data of a user;
fifthly, comprehensively associating the data;
step six, dynamically evaluating the psychological health condition of the user, comprehensively analyzing to obtain psychological health indexes of the user, drawing a comprehensive psychological dynamic change chart according to the comprehensive score (Z), and judging the psychological health degree.
In the first step, 10 key frames are extracted from the shot video, the change condition of the expression of the user is obtained through the key frames, the facial data points are marked in the key frames, and a facial point change trend chart of a certain expression of the user is obtained according to the change of the facial points.
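Extracting the 10 key frames can be sketched as even sampling over the clip; this is an assumed, illustrative selection strategy (the application does not specify how the key frames are chosen), using index arithmetic only.

```python
def key_frame_indices(total_frames, n_key=10):
    """Evenly spaced key-frame indices across a clip of total_frames frames.

    Always includes the first and last frame so the onset and offset of the
    expression are covered (illustrative choice, not fixed by the application).
    """
    if n_key >= total_frames:
        return list(range(total_frames))
    step = (total_frames - 1) / (n_key - 1)
    return [round(i * step) for i in range(n_key)]
```

The marked facial data points at these indices then give the per-frame samples from which the facial-point change trend chart is drawn.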
When shooting with a high-speed camera at 200-400 FPS, the facial resolution of the video clip can reach about 280 x 340 pixels. The CASME II dataset labels the micro-expressions into 5 categories, namely Happiness, Disgust, Surprise, Repression and Others; in addition, the onset point (Onset), peak point (Apex) and end point (Offset) of each micro-expression need to be annotated in the dataset.
In step one, when the user's expression is collected, a short video of 10 s is played for the user; the video content covers life, entertainment, sports, music, food, fashion and animation, and the user's expression in response to the video content is continuously collected by the camera.
In step two, the facial color is related to the hemoglobin concentration: changes in blood flow and blood composition alter the hemoglobin concentration and thus the skin color. The blood flow in the vessels speeds up, the flow rate increases, the hemoglobin concentration rises, and the face turns red. The facial vessels include the supratrochlear artery and vein, the supraorbital artery and vein, the sentinel vein, the superior palpebral vein, and the lateral nasal artery and vein; the blood flow rate of 3-5 of these vessels and their branches is selected for monitoring, and the hemoglobin content of the blood in these vessels is monitored at the same time.
When the eye movement data are collected, a 120 Hz eye tracker continuously records the eyeball rotation track; each data point is recorded as a time stamp with (x, y) coordinates and sent to the database of an analysis software program running on a computer connected to the eye tracker, and the raw data are filtered with three gaze-point filtering algorithms (ClearView, Tobii, I-VT) to obtain the gaze-point data.
Step two further includes judging the context: a speech recognition system based on a bidirectional long short-term memory network (BiLSTM) is used, which can model the long-range correlations of speech and thereby improve recognition accuracy, and the user's emotion is judged from the recognized context and the user's speech.
In step four, a non-invasive physiological signal detection method based on remote photoplethysmographic imaging obtains the changes in the user's vital signs, namely heart rate, blood pressure and blood oxygen concentration, and an association between these vital signs and emotion changes is established; in addition, the model is corrected with auxiliary monitoring of the skin conductance level, respiratory rate and cardiovascular sympathetic activation indexes.
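The heart-rate part of the remote photoplethysmography step can be sketched as peak counting on the mean green-channel signal of the face region. This is an assumed simplification: real rPPG pipelines band-pass filter the signal and typically use spectral analysis rather than raw peak counting.

```python
def estimate_heart_rate(green_means, fps):
    """Rough rPPG sketch: detrend the per-frame mean green values,
    count positive local maxima (candidate heartbeats), convert to BPM.

    green_means: one mean green value per video frame; fps: frame rate.
    """
    mean = sum(green_means) / len(green_means)
    sig = [g - mean for g in green_means]  # remove the DC component
    peaks = sum(
        1 for i in range(1, len(sig) - 1)
        if sig[i] > sig[i - 1] and sig[i] >= sig[i + 1] and sig[i] > 0
    )
    duration_s = len(sig) / fps
    return peaks * 60.0 / duration_s
```

On a clean pulsatile signal each positive peak corresponds to one heartbeat, so the count over the clip duration yields beats per minute.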
In step four, the multifunctional wristband acquires user data once every 10 s and detects the user's blood pressure with a blood pressure detector once every 20 s. In step five, the four kinds of data, namely the user's micro-expressions (W*), facial color (M*), eye movement (Y*) and physiological indexes (S*), are combined, and each index is scored with a different weight (a1, a2, a3, a4) to obtain a composite score Z, where Z = a1·W* + a2·M* + a3·Y* + a4·S*.
Example 2: referring to FIG. 1, FIG. 2 and FIG. 3, the mental health evaluation method based on micro-expressions and physical indexes includes the following steps: step one, obtaining the user's micro-expression data: the user's expression is photographed, 25 landmark points are taken in the image and a model is established from these points, a CNN model being built with the facial key-point method and an ELRCN model with the optical-flow image feature method, among others; meanwhile, a real-time, richly populated open-source expression library is established, and the overall data are corrected according to the individual's actual expressions;
step two, obtaining facial color data of a user, monitoring facial blood flow velocity and component change, and establishing a relation chart of color and facial blood flow velocity and component change according to the facial color data;
step three, obtaining the user's eye movement data with a non-infrared RGBD (red, green, blue, depth)-based eye-gaze tracking method, using an offline-built personalized three-dimensional face model and a real-time gaze estimation method based on a deep convolutional neural network over eye appearance;
step four, acquiring physiological data of a user;
fifthly, comprehensively associating the data;
step six, dynamically evaluating the psychological health condition of the user, comprehensively analyzing to obtain psychological health indexes of the user, drawing a comprehensive psychological dynamic change chart according to the comprehensive score (Z), and judging the psychological health degree.
In the first step, 25 key frames are extracted from the shot video, the change condition of the expression of the user is obtained through the key frames, the facial data points are marked in the key frames, and a facial point change trend chart of a certain expression of the user is obtained according to the change of the facial points.
When shooting with a 144 FPS high-speed camera, the facial resolution of the video clip can reach about 280 x 340 to 460 x 750 pixels. The CASME II dataset labels the micro-expressions into 5 categories, namely Happiness, Disgust, Surprise, Repression and Others; in addition, the onset point (Onset), peak point (Apex) and end point (Offset) of each micro-expression need to be annotated in the dataset.
In step one, when the user's expression is collected, a short video of 12 s is played for the user; the video content covers life, entertainment, sports, music, food, fashion and animation, and the user's expression in response to the video content is continuously collected by the camera.
In step two, the facial color is related to the hemoglobin concentration: changes in blood flow and blood composition alter the hemoglobin concentration and thus the skin color. When anger occurs, action units AU4+AU5+AU7+AU24 move, the blood flow in the vessels speeds up, the flow rate increases, the hemoglobin concentration rises, and the face turns red. The facial vessels include the supratrochlear artery and vein, the supraorbital artery and vein, the sentinel vein, the superior palpebral vein, the lateral nasal artery and vein, and the inferior labial artery and vein; the blood flow rate of 5 of these vessels and their branches is selected for monitoring, and the hemoglobin content of the blood in these vessels is monitored at the same time.
When the eye movement data are collected, an eye tracker of 30-300 Hz continuously records the eyeball rotation track; each data point is recorded as a time stamp with (x, y) coordinates and sent to the database of an analysis software program running on a computer connected to the eye tracker, and the raw data are filtered with three gaze-point filtering algorithms (ClearView, Tobii, I-VT) to obtain the gaze-point data.
Step two further includes judging the context: a speech recognition system based on a bidirectional long short-term memory network (BiLSTM) is used, which can model the long-range correlations of speech and thereby improve recognition accuracy, and the user's emotion is judged from the recognized context and the user's speech.
In step four, a non-invasive physiological signal detection method based on remote photoplethysmographic imaging obtains the changes in the user's vital signs, namely heart rate, blood pressure and blood oxygen concentration, and an association between these vital signs and emotion changes is established; in addition, the model is corrected with auxiliary monitoring of the skin conductance level, respiratory rate and cardiovascular sympathetic activation indexes.
In step four, the multifunctional wristband acquires user data once every 20 s and detects the user's blood pressure with a blood pressure detector once every 23 s. In step five, the four kinds of data, namely the user's micro-expressions (W*), facial color (M*), eye movement (Y*) and physiological indexes (S*), are combined, and each index is scored with a different weight (a1, a2, a3, a4) to obtain a composite score Z, where Z = a1·W* + a2·M* + a3·Y* + a4·S*.
Example 3: referring to FIG. 1, FIG. 2 and FIG. 3, the mental health evaluation method based on micro-expressions and physical indexes includes the following steps: step one, obtaining the user's micro-expression data: the user's expression is photographed, 35 landmark points are taken in the image and a model is established from these points, a CNN model being built with the facial key-point method and an ELRCN model with the optical-flow image feature method, among others; meanwhile, a real-time, richly populated open-source expression library is established, and the overall data are corrected according to the individual's actual expressions;
step two, obtaining facial color data of a user, monitoring facial blood flow velocity and component change, and establishing a relation chart of color and facial blood flow velocity and component change according to the facial color data;
step three, obtaining the user's eye movement data with a non-infrared RGBD (red, green, blue, depth)-based eye-gaze tracking method, using an offline-built personalized three-dimensional face model and a real-time gaze estimation method based on a deep convolutional neural network over eye appearance;
step four, acquiring physiological data of a user;
fifthly, comprehensively associating the data;
step six, dynamically evaluating the psychological health condition of the user, comprehensively analyzing to obtain psychological health indexes of the user, drawing a comprehensive psychological dynamic change chart according to the comprehensive score (Z), and judging the psychological health degree.
In the first step, 25 key frames are extracted from the shot video, the change condition of the expression of the user is obtained through the key frames, the facial data points are marked in the key frames, and a facial point change trend chart of a certain expression of the user is obtained according to the change of the facial points.
When shooting with a high-speed camera at 350 FPS, the facial resolution of the video clip can reach about 460 x 750 pixels. The CASME II dataset labels the micro-expressions into 5 categories, namely Happiness, Disgust, Surprise, Repression and Others; in addition, the onset point (Onset), peak point (Apex) and end point (Offset) of each micro-expression need to be annotated in the dataset.
In step one, when the user's expression is collected, a short video of 15 s is played for the user; the video content covers life, entertainment, sports, music, food, fashion and animation, and the user's expression in response to the video content is continuously collected by the camera.
In step two, the facial color is related to the hemoglobin concentration: changes in blood flow and blood composition alter the hemoglobin concentration and thus the skin color. The blood flow in the vessels speeds up, the flow rate increases, the hemoglobin concentration rises, and the face turns red. The facial vessels include the supratrochlear artery and vein, the supraorbital artery and vein, the sentinel vein, the superior palpebral vein, and the lateral nasal artery and vein; the blood flow rate of 3-5 of these vessels and their branches is selected for monitoring, and the hemoglobin content of the blood in these vessels is monitored at the same time.
When the eye movement data are collected, a 165 Hz eye tracker continuously records the eyeball rotation track; each data point is recorded as a time stamp with (x, y) coordinates and sent to the database of an analysis software program running on a computer connected to the eye tracker, and the raw data are filtered with three gaze-point filtering algorithms (ClearView, Tobii, I-VT) to obtain the gaze-point data.
Step two further includes judging the context: a speech recognition system based on a bidirectional long short-term memory network (BiLSTM) is used, which can model the long-range correlations of speech and thereby improve recognition accuracy, and the user's emotion is judged from the recognized context and the user's speech.
In step four, a non-invasive physiological signal detection method based on remote photoplethysmographic imaging obtains the changes in the user's vital signs, namely heart rate, blood pressure and blood oxygen concentration, and an association between these vital signs and emotion changes is established; in addition, the model is corrected with auxiliary monitoring of the skin conductance level, respiratory rate and cardiovascular sympathetic activation indexes.
In step four, the multifunctional wristband acquires user data once every 10-30 s and detects the user's blood pressure with a blood pressure detector once every 20-40 s. In step five, the four kinds of data, namely the user's micro-expressions (W*), facial color (M*), eye movement (Y*) and physiological indexes (S*), are combined, and each index is scored with a different weight (a1, a2, a3, a4) to obtain a composite score Z, where Z = a1·W* + a2·M* + a3·Y* + a4·S*.
Example 4: referring to FIG. 1, FIG. 2 and FIG. 3, the mental health evaluation method based on micro-expressions and physical indexes includes the following steps: step one, obtaining the user's micro-expression data: the user's expression is photographed, 35 landmark points are taken in the image and a model is established from these points, a CNN model being built with the facial key-point method and an ELRCN model with the optical-flow image feature method, among others; meanwhile, a real-time, richly populated open-source expression library is established, and the overall data are corrected according to the individual's actual expressions;
step two, obtaining facial color data of a user, monitoring facial blood flow velocity and component change, and establishing a relation chart of color and facial blood flow velocity and component change according to the facial color data;
step three, obtaining the user's eye movement data with a non-infrared RGBD (red, green, blue, depth)-based eye-gaze tracking method, using an offline-built personalized three-dimensional face model and a real-time gaze estimation method based on a deep convolutional neural network over eye appearance;
step four, acquiring physiological data of a user;
fifthly, comprehensively associating the data;
step six, dynamically evaluating the psychological health condition of the user, comprehensively analyzing to obtain psychological health indexes of the user, drawing a comprehensive psychological dynamic change chart according to the comprehensive score (Z), and judging the psychological health degree.
In the first step, 25 key frames are extracted from the shot video, the change condition of the expression of the user is obtained through the key frames, the facial data points are marked in the key frames, and a facial point change trend chart of a certain expression of the user is obtained according to the change of the facial points.
When shooting with a high-speed camera at 200-400 FPS, the facial resolution of the video clip can reach about 460 x 750 pixels. The CASME II dataset labels the micro-expressions into 5 categories, namely Happiness, Disgust, Surprise, Repression and Others; in addition, the onset point (Onset), peak point (Apex) and end point (Offset) of each micro-expression need to be annotated in the dataset.
In step one, when the user's expression is collected, a short video of 15 s is played for the user; the video content covers life, entertainment, sports, music, food, fashion and animation, and the user's expression in response to the video content is continuously collected by the camera.
In step two, the facial color is related to the hemoglobin concentration: changes in blood flow and blood composition alter the hemoglobin concentration and thus the skin color. When anger occurs, action units AU4+AU5+AU7+AU24 move, the blood flow in the vessels speeds up, the flow rate increases, the hemoglobin concentration rises, and the face turns red. The facial vessels include the supratrochlear artery and vein, the supraorbital artery and vein, the sentinel vein, the superior palpebral vein, the lateral nasal artery and vein, and the inferior labial artery and vein; the blood flow rate of 3 of these vessels and their branches is selected for monitoring, and the hemoglobin content of the blood in these vessels is monitored at the same time.
When the eye movement data are collected, a 244 Hz eye tracker continuously records the eyeball rotation track; each data point is recorded as a time stamp with (x, y) coordinates and sent to the database of an analysis software program running on a computer connected to the eye tracker, and the raw data are filtered with three gaze-point filtering algorithms (ClearView, Tobii, I-VT) to obtain the gaze-point data.
Step two further includes judging the context: a speech recognition system based on a bidirectional long short-term memory network (BiLSTM) is used, which can model the long-range correlations of speech and thereby improve recognition accuracy, and the user's emotion is judged from the recognized context and the user's speech.
In step four, a non-invasive physiological signal detection method based on remote photoplethysmographic imaging obtains the changes in the user's vital signs, namely heart rate, blood pressure and blood oxygen concentration, and an association between these vital signs and emotion changes is established; in addition, the model is corrected with auxiliary monitoring of the skin conductance level, respiratory rate and cardiovascular sympathetic activation indexes.
In step four, the multifunctional wristband acquires user data once every 27 s and detects the user's blood pressure with a blood pressure detector once every 25 s. In step five, the four kinds of data, namely the user's micro-expressions (W*), facial color (M*), eye movement (Y*) and physiological indexes (S*), are combined, and each index is scored with a different weight (a1, a2, a3, a4) to obtain a composite score Z, where Z = a1·W* + a2·M* + a3·Y* + a4·S*.
Example 5: referring to FIG. 1, FIG. 2 and FIG. 3, the mental health evaluation method based on micro-expressions and physical indexes includes the following steps: step one, obtaining the user's micro-expression data: the user's expression is photographed, 36 landmark points are taken in the image and a model is established from these points, a CNN model being built with the facial key-point method and an ELRCN model with the optical-flow image feature method, among others; meanwhile, a real-time, richly populated open-source expression library is established, and the overall data are corrected according to the individual's actual expressions;
step two, obtaining facial color data of a user, monitoring facial blood flow velocity and component change, and establishing a relation chart of color and facial blood flow velocity and component change according to the facial color data;
step three, obtaining the user's eye movement data with a non-infrared RGBD (red, green, blue, depth)-based eye-gaze tracking method, using an offline-built personalized three-dimensional face model and a real-time gaze estimation method based on a deep convolutional neural network over eye appearance;
step four, acquiring physiological data of a user;
fifthly, comprehensively associating the data;
step six, dynamically evaluating the psychological health condition of the user, comprehensively analyzing to obtain psychological health indexes of the user, drawing a comprehensive psychological dynamic change chart according to the comprehensive score (Z), and judging the psychological health degree.
In the first step, 16 key frames are extracted from the shot video, the change condition of the expression of the user is obtained through the key frames, the facial data points are marked in the key frames, and a facial point change trend chart of a certain expression of the user is obtained according to the change of the facial points.
When shooting with a high-speed camera at 200-400 FPS, the facial resolution of the video clip can reach about 280 x 340 pixels. The CASME II dataset labels the micro-expressions into 5 categories, namely Happiness, Disgust, Surprise, Repression and Others; in addition, the onset point (Onset), peak point (Apex) and end point (Offset) of each micro-expression need to be annotated in the dataset.
In step one, when the user's expression is collected, a short video of 14 s is played for the user; the video content covers life, entertainment, sports, music, food, fashion and animation, and the user's expression in response to the video content is continuously collected by the camera.
In step two, the facial color is related to the hemoglobin concentration: changes in blood flow and blood composition alter the hemoglobin concentration and thus the skin color. When anger occurs, action units AU4+AU5+AU7+AU24 move, the blood flow in the vessels speeds up, the flow rate increases, the hemoglobin concentration rises, and the face turns red. The facial vessels include the supratrochlear artery and vein, the supraorbital artery and vein, the sentinel vein, the superior palpebral vein, the lateral nasal artery and vein, and the inferior labial artery and vein; the blood flow rate of 4 of these vessels and their branches is selected for monitoring, and the hemoglobin content of the blood in these vessels is monitored at the same time.
When the eye movement data are collected, a 280 Hz eye tracker continuously records the eyeball rotation track; each data point is recorded as a time stamp with (x, y) coordinates and sent to the database of an analysis software program running on a computer connected to the eye tracker, and the raw data are filtered with three gaze-point filtering algorithms (ClearView, Tobii, I-VT) to obtain the gaze-point data.
Step two further includes judging the context: a speech recognition system based on a bidirectional long short-term memory network (BiLSTM) is used, which can model the long-range correlations of speech and thereby improve recognition accuracy, and the user's emotion is judged from the recognized context and the user's speech.
In step four, a non-invasive physiological signal detection method based on remote photoplethysmographic imaging obtains the changes in the user's vital signs, namely heart rate, blood pressure and blood oxygen concentration, and an association between these vital signs and emotion changes is established; in addition, the model is corrected with auxiliary monitoring of the skin conductance level, respiratory rate and cardiovascular sympathetic activation indexes.
In step four, the multifunctional wristband acquires user data once every 27 s and detects the user's blood pressure with a blood pressure detector once every 20-40 s. In step five, the four kinds of data, namely the user's micro-expressions (W*), facial color (M*), eye movement (Y*) and physiological indexes (S*), are combined, and each index is scored with a different weight (a1, a2, a3, a4) to obtain a composite score Z, where Z = a1·W* + a2·M* + a3·Y* + a4·S*.
It will be evident to those skilled in the art that the application is not limited to the details of the foregoing illustrative embodiments, and that the present application may be embodied in other specific forms without departing from the spirit or essential characteristics thereof. The present embodiments are, therefore, to be considered in all respects as illustrative and not restrictive, the scope of the application being indicated by the appended claims rather than by the foregoing description, and all changes which come within the meaning and range of equivalency of the claims are therefore intended to be embraced therein. Any reference sign in a claim should not be construed as limiting the claim concerned.
Claims (10)
1. The psychological health evaluation method based on micro-expressions and physical indexes is characterized by comprising the following steps: step one, acquiring micro-expression data of the user: shooting the user's expressions, taking 20-40 landmark points in the image and building a model from these points, establishing a CNN model by the facial key-point method and an ELRCN model by the optical-flow image feature method, while at the same time building a real-time, rich open-source expression library and performing overall data correction according to the person's actual expression;
step two, acquiring facial color data of the user: monitoring facial blood flow velocity and blood-component change, and establishing from the facial color data a chart relating color to facial blood flow velocity and component change;
step three, acquiring user eye movement data by a non-infrared gaze-tracking method based on RGB-D (red, green, blue, depth) imagery, adopting an offline-built personalized three-dimensional face model and a real-time gaze-estimation method based on a deep convolutional neural network over eye appearance;
step four, acquiring physiological data of a user;
fifthly, comprehensively associating the data;
step six, dynamically evaluating the psychological health condition of the user, comprehensively analyzing to obtain psychological health indexes of the user, drawing a comprehensive psychological dynamic change chart according to the comprehensive score (Z), and judging the psychological health degree.
2. The mental health evaluation method based on micro-expressions and physical indexes according to claim 1, wherein: in the first step, 10-30 key frames are extracted from the shot video, the change in the user's expression is obtained from the key frames, facial data points are marked in the key frames, and a trend chart of facial-point change for a given expression of the user is obtained from the changes of the facial points.
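Selecting 10-30 key frames from a clip can be sketched as even temporal sampling; `key_frame_indices` is a hypothetical helper, not code from the application:

```python
def key_frame_indices(n_frames, n_keys):
    """Evenly spaced indices for sampling n_keys key frames (e.g. 10-30) from
    an n_frames-long clip; first and last frames are always kept."""
    if n_keys >= n_frames:
        return list(range(n_frames))
    step = (n_frames - 1) / (n_keys - 1)
    return [round(i * step) for i in range(n_keys)]

# 10 key frames from a 200-frame clip
idx = key_frame_indices(200, 10)
```

A content-aware variant would instead pick frames at landmark-motion peaks so the onset and apex of a micro-expression are never skipped.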
3. The mental health evaluation method based on micro-expressions and physical indexes as set forth in claim 2, wherein: by shooting with a 200-400 FPS high-speed camera, the facial resolution of the video clip can reach about 280 × 340 pixels, and the CASME II dataset labels micro-expressions in 5 categories, namely Happiness, Disgust, Surprise, Repression, and Others; in addition, the dataset must label the starting point (Onset), peak point (Apex) and ending point (Offset) of each micro-expression.
4. The mental health evaluation method based on micro-expressions and physical indexes according to claim 2, wherein: in the first step, when the user's expressions are collected, a short video of 10-15 s is played to the user, the video content drawing on daily life, entertainment, sports, music, food, fashion and animation, and the user's expressions in response to the video content are collected continuously through the camera.
5. The mental health evaluation method based on micro-expressions and physical indexes according to claim 1, wherein: in the second step, facial color is related to hemoglobin concentration; changes in blood flow and blood composition change the hemoglobin concentration and thus the skin color, so when the blood flow in the vessels speeds up and the flow rate increases, the hemoglobin concentration rises and the face appears redder. The facial veins include the supratrochlear artery and vein, the supraorbital artery and vein, the sentinel vein, the superior palpebral vein, and the lateral nasal artery and vein; the blood flow rate of 3-5 of these veins and their branch vessels is selected for monitoring, together with the hemoglobin content of the blood in those vessels.
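The link stated here between facial blood flow, hemoglobin concentration and redness suggests a simple camera-side proxy. The redness index below is an illustrative assumption, not the monitoring method of the application:

```python
import numpy as np

def redness_index(rgb_roi):
    """Mean red share of a skin ROI. A crude, illustrative proxy: as facial
    blood flow and hemoglobin concentration rise, the red share increases."""
    roi = np.asarray(rgb_roi, dtype=float)
    r, g, b = roi[..., 0], roi[..., 1], roi[..., 2]
    return float(np.mean(r / (r + g + b + 1e-9)))

# Two synthetic 4x4 skin patches (RGB): a paler one and a flushed one
pale = np.full((4, 4, 3), [180.0, 160.0, 150.0])
flushed = np.full((4, 4, 3), [210.0, 150.0, 140.0])
```

Normalizing by the channel sum makes the index roughly robust to overall brightness changes, which matters under uncontrolled lighting.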
6. The mental health evaluation method based on micro-expressions and physical indexes according to claim 1, wherein: when eye movement data are collected, an eye tracker with a sampling frequency of 30-300 Hz continuously records the gaze trajectory; each data point is recorded as a timestamp with (x, y) coordinates and sent to the database of an analysis software program running on a computer connected to the eye tracker. Three gaze-point filtering algorithms (ClearView, Tobii, I-VT) are used to filter the raw data into fixation data; the fixation data are linked with the user's expression data, and a correlation diagram of user expression and eye movement data is established.
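The I-VT algorithm named in this claim is a standard velocity-threshold fixation filter. A minimal sketch follows; the threshold and duration values are illustrative, not parameters from the application:

```python
import math

def ivt_fixations(points, fps, v_thresh, min_dur):
    """I-VT: label samples whose point-to-point velocity is below v_thresh as
    fixation samples, keep runs lasting at least min_dur seconds, and return
    each run's centroid as a fixation."""
    def centroid(run):
        return (sum(p[0] for p in run) / len(run),
                sum(p[1] for p in run) / len(run))

    fixations, run = [], []
    for prev, cur in zip(points, points[1:]):
        v = math.hypot(cur[0] - prev[0], cur[1] - prev[1]) * fps  # units/s
        if v < v_thresh:
            run.append(cur)
        else:                          # saccade: close out the current run
            if len(run) / fps >= min_dur:
                fixations.append(centroid(run))
            run = []
    if len(run) / fps >= min_dur:
        fixations.append(centroid(run))
    return fixations

# A fixation at (100, 100), a saccade, then a fixation at (300, 200), at 60 Hz
pts = [(100, 100)] * 10 + [(300, 200)] * 10
fixes = ivt_fixations(pts, fps=60.0, v_thresh=1000.0, min_dur=0.1)
```

Production filters (e.g. Tobii's) additionally smooth velocities and merge fixations separated by very short gaps, but the classification rule is the same.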
7. The mental health evaluation method based on micro-expressions and physical indexes according to claim 5, wherein: the second step further includes judgment of context: a speech recognition system is used that adopts a bidirectional long short-term memory network (Bi-LSTM, Long Short-Term Memory); this network can model long-range temporal correlations in speech, improving recognition accuracy, and the user's emotion is judged according to the recognized context and the user's language.
8. The mental health evaluation method based on micro-expressions and physical indexes according to claim 1, wherein: in the fourth step, physiological signal detection based on remote photoplethysmography (rPPG) is used to non-invasively obtain changes in the user's heart rate, blood pressure and blood oxygen concentration, and an association between these vital-sign changes and emotional change is established; in addition, the model is corrected by auxiliary monitoring of the skin conductance level, respiratory rate and cardiovascular sympathetic activation index.
9. The mental health evaluation method based on micro-expressions and physical indexes according to claim 7, wherein: in the fourth step, the multifunctional wristband acquires the user's heartbeat data every 10-30 s, the blood detector measures the user's blood pressure every 20-40 s to obtain blood pressure data, and from the heartbeat and blood pressure data and their relation to the user's expressions, the changes in the user's heartbeat and blood pressure under different expressions are obtained.
10. The mental health evaluation method based on micro-expressions and physical indexes according to claim 1, wherein: in step five, the user's micro-expression (W*), facial color (M*), eye movement (Y*) and physiological index (S*) data are combined; each index is weighted with a different weight (a1, a2, a3, a4), and a scoring calculation yields the comprehensive score Z, where Z = a1·W* + a2·M* + a3·Y* + a4·S*.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202311174763.3A CN117153403A (en) | 2023-09-13 | 2023-09-13 | Mental health evaluation method based on micro-expressions and physical indexes |
Publications (1)
Publication Number | Publication Date |
---|---|
CN117153403A true CN117153403A (en) | 2023-12-01 |
Family
ID=88907839
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202311174763.3A Pending CN117153403A (en) | 2023-09-13 | 2023-09-13 | Mental health evaluation method based on micro-expressions and physical indexes |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN117153403A (en) |
Citations (10)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN108937973A (en) * | 2018-06-15 | 2018-12-07 | 四川文理学院 | A kind of robotic diagnostic human body indignation mood method and device |
CN111259895A (en) * | 2020-02-21 | 2020-06-09 | 天津工业大学 | Emotion classification method and system based on facial blood flow distribution |
CN113486744A (en) * | 2021-06-24 | 2021-10-08 | 中国科学院西安光学精密机械研究所 | Student learning state evaluation system and method based on eye movement and facial expression paradigm |
CN113869229A (en) * | 2021-09-29 | 2021-12-31 | 电子科技大学 | Deep learning expression recognition method based on prior attention mechanism guidance |
CN113902774A (en) * | 2021-10-08 | 2022-01-07 | 无锡锡商银行股份有限公司 | Method for detecting facial expression of dense optical flow characteristics in video |
US20220270116A1 (en) * | 2021-02-24 | 2022-08-25 | Neil Fleischer | Methods to identify critical customer experience incidents using remotely captured eye-tracking recording combined with automatic facial emotion detection via mobile phone or webcams. |
CN115937953A (en) * | 2022-12-28 | 2023-04-07 | 中国科学院长春光学精密机械与物理研究所 | Psychological change detection method, device, equipment and storage medium |
CN116098621A (en) * | 2023-02-14 | 2023-05-12 | 平顶山学院 | Emotion face and physiological response recognition method based on attention mechanism |
CN116392122A (en) * | 2023-03-01 | 2023-07-07 | 中国人民解放军海军特色医学中心 | Mental stress level judging method for diver escape training based on micro-expressions |
CN116491944A (en) * | 2023-04-28 | 2023-07-28 | 江苏经贸职业技术学院 | Mental state monitoring and evaluating system based on intelligent bracelet |
Non-Patent Citations (2)
Title |
---|
XU, FENG; ZHANG, JUNPING: "A Survey of Facial Micro-Expression Recognition", Acta Automatica Sinica, no. 03, 15 March 2017 (2017-03-15) * |
XIE, DONGLIANG; XU, YUXIANG: "Micro-Expression Recognition Technology Based on Artificial Intelligence", Technology and Innovation, no. 22, 25 November 2018 (2018-11-25) * |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
US11989340B2 (en) | Systems, methods, apparatuses and devices for detecting facial expression and for tracking movement and location in at least one of a virtual and augmented reality system | |
Zhang | Expression-EEG based collaborative multimodal emotion recognition using deep autoencoder | |
US11195316B2 (en) | System, method and apparatus for detecting facial expression in a virtual reality system | |
US20190025919A1 (en) | System, method and apparatus for detecting facial expression in an augmented reality system | |
Glowinski et al. | Technique for automatic emotion recognition by body gesture analysis | |
KR102277820B1 (en) | The psychological counseling system and the method thereof using the feeling information and response information | |
US20170364732A1 (en) | Eye tracking via patterned contact lenses | |
CN110287825B (en) | Tumble action detection method based on key skeleton point trajectory analysis | |
CN110335266B (en) | Intelligent traditional Chinese medicine visual inspection image processing method and device | |
CN106073793B (en) | Attitude Tracking and recognition methods based on micro-inertia sensor | |
US11328533B1 (en) | System, method and apparatus for detecting facial expression for motion capture | |
KR20060082677A (en) | A biometics system and method using electrocardiogram | |
Liu et al. | Recent advances in biometrics-based user authentication for wearable devices: A contemporary survey | |
WO2022141894A1 (en) | Three-dimensional feature emotion analysis method capable of fusing expression and limb motion | |
CN115101191A (en) | Parkinson disease diagnosis system | |
Mekruksavanich et al. | Badminton activity recognition and player assessment based on motion signals using deep residual network | |
CN109634407A (en) | It is a kind of based on control method multimode man-machine heat transfer agent synchronous acquisition and merged | |
CN117153403A (en) | Mental health evaluation method based on micro-expressions and physical indexes | |
Saeed | A survey of automatic person recognition using eye movements | |
CN219439095U (en) | Intelligent diagnosis equipment for early nerve function evaluation of infants | |
Ahmed et al. | An Improved Deep Learning Approach for Heart Attack Detection from Digital Images | |
Al-Rashid | A Three Steps Eye-Liveness Validation System | |
Wu | Robust Signal Processing Techniques for Wearable Inertial Measurement Unit (IMU) Sensors | |
Angerer | Stress detection using facial expressions from video sequences (Stresserkennung mit Hilfe von Gesichtsausdrücken aus Videosequenzen): by Paul Angerer | |
Garg et al. | Non-Contact Based Method for Heart Rate Estimation and Attention Monitoring |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||