AU2020102556A4 - Psychological state analysis method based on facial micro-expression - Google Patents

Psychological state analysis method based on facial micro-expression

Info

Publication number
AU2020102556A4
Authority
AU
Australia
Prior art keywords
expression
micro
method based
data
psychological state
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Ceased
Application number
AU2020102556A
Inventor
Yuming Ci
Xu MA
Jiaying TU
Weiran Wang
Zhilin Zhao
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Tu Jiaying Miss
Wang Weiran Miss
Zhao Zhilin Miss
Original Assignee
Tu Jiaying Miss
Wang Weiran Miss
Zhao Zhilin Miss
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Tu Jiaying Miss, Wang Weiran Miss, Zhao Zhilin Miss
Priority to AU2020102556A
Application granted
Publication of AU2020102556A4


Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/08 Learning methods
    • G06N3/088 Non-supervised learning, e.g. competitive learning
    • A HUMAN NECESSITIES
    • A61 MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B5/00 Measuring for diagnostic purposes; Identification of persons
    • A61B5/16 Devices for psychotechnics; Testing reaction times; Devices for evaluating the psychological state
    • A61B5/165 Evaluating the state of mind, e.g. depression, anxiety
    • A HUMAN NECESSITIES
    • A61 MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B5/00 Measuring for diagnostic purposes; Identification of persons
    • A61B5/72 Signal processing specially adapted for physiological signals or for diagnostic purposes
    • A61B5/7235 Details of waveform analysis
    • A61B5/7264 Classification of physiological signals or data, e.g. using neural networks, statistical classifiers, expert systems or fuzzy systems
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 Pattern recognition
    • G06F18/20 Analysing
    • G06F18/23 Clustering techniques
    • G06F18/232 Non-hierarchical techniques
    • G06F18/2321 Non-hierarchical techniques using statistics or function optimisation, e.g. modelling of probability density functions
    • G06F18/23213 Non-hierarchical techniques using statistics or function optimisation, e.g. modelling of probability density functions with fixed number of clusters, e.g. K-means clustering
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10 Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16 Human faces, e.g. facial parts, sketches or expressions
    • G06V40/161 Detection; Localisation; Normalisation
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10 Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16 Human faces, e.g. facial parts, sketches or expressions
    • G06V40/174 Facial expression recognition
    • G06V40/176 Dynamic expression

Landscapes

  • Engineering & Computer Science (AREA)
  • Health & Medical Sciences (AREA)
  • Physics & Mathematics (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Theoretical Computer Science (AREA)
  • General Health & Medical Sciences (AREA)
  • Artificial Intelligence (AREA)
  • General Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • Biophysics (AREA)
  • Evolutionary Computation (AREA)
  • Molecular Biology (AREA)
  • Psychiatry (AREA)
  • Biomedical Technology (AREA)
  • Human Computer Interaction (AREA)
  • Mathematical Physics (AREA)
  • Oral & Maxillofacial Surgery (AREA)
  • General Engineering & Computer Science (AREA)
  • Veterinary Medicine (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Public Health (AREA)
  • Animal Behavior & Ethology (AREA)
  • Multimedia (AREA)
  • Pathology (AREA)
  • Heart & Thoracic Surgery (AREA)
  • Medical Informatics (AREA)
  • Surgery (AREA)
  • Signal Processing (AREA)
  • Physiology (AREA)
  • Fuzzy Systems (AREA)
  • Child & Adolescent Psychology (AREA)
  • Developmental Disabilities (AREA)
  • Educational Technology (AREA)
  • Hospice & Palliative Care (AREA)
  • Psychology (AREA)
  • Social Psychology (AREA)
  • Software Systems (AREA)
  • Computing Systems (AREA)
  • Computational Linguistics (AREA)
  • Probability & Statistics with Applications (AREA)

Abstract

This project introduces rotation-based self-supervision and a spatial attention mechanism, further improving micro-expression recognition accuracy on top of an optical-flow method with data augmentation. Specifically, on the MGEC2019 fusion data set, both the unweighted F1 score and the unweighted average recall reach up to 80%.

Description

DESCRIPTION TITLE
Psychological state analysis method based on facial micro-expression
FIELD OF THE INVENTION
The present invention relates to a psychological state analysis method
based on facial micro-expression.
BACKGROUND
Facial expressions clearly reflect changes in a person's psychological state, so their analysis has great potential in personal, commercial and government applications. For example, thousands of people pass through a subway exit every day. Although security staff screen passengers one by one, they cannot reliably identify every high-risk individual who might endanger others. An instrument that analyzes psychological state from facial micro-expressions would therefore be very helpful for security inspection: such equipment could quickly judge from a person's micro-expressions whether they intend an attack, better protecting public safety. Secondly, in education, teachers could use similar equipment to improve classroom efficiency, engaging students according to the psychological states the equipment detects and thereby raising the students' enthusiasm for learning. In addition, in medicine, doctors could use micro-expressions to understand patients' true feelings, tailor treatment to the individual case and improve treatment efficiency.
This research project builds on the mapping between a large set of micro-expressions and mental states. A micro-expression is a form of human behavioral information that can be collected by external devices; an intelligent machine can then infer the mental state through the corresponding mapping. Micro-expressions often reveal genuine emotions that a person is trying to hide. They are a very effective non-verbal cue, providing a window into humans' real emotions and internal emotional processing. For example, fear and worry often appear as flared nostrils and tightened lips, while self-blame and shame appear as drooping mouth corners and a raised chin. Mental state is also related to many other factors, which this project addresses through the algorithms it employs.
The research objective of unsupervised learning is to learn a robust image feature representation from abundant unlabeled data by means of self-supervised labels. One of the major issues in the micro-expression recognition task is that little data is available. Therefore, introducing an unsupervised learning task to strengthen feature extraction plays an auxiliary role in micro-expression recognition. In addition, unsupervised learning introduces no new annotated data and hence no new labeling cost. Rotating an optical-flow image by certain fixed angles does not change its micro-expression category, so a set of fixed rotation angles is designed as a self-supervised label and added to model training. Secondly, because micro-expression recognition depends on specific facial regions such as the eyes and mouth, introducing a spatial attention mechanism that highlights the positions playing a key role in recognition helps the recognition result.
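As a sketch of how such rotation labels can be generated for free, the following illustrative Python function (not taken from the patent; the function name and the 90-degree angle set are assumptions made here) expands one optical-flow image into (image, rotation-label) pairs. Note that a complete treatment would also rotate the flow vectors stored in the channels; this simplified version rotates only the spatial grid.

```python
import numpy as np

# Fixed rotation angles; each index doubles as a free self-supervised label.
ANGLES = [0, 90, 180, 270]

def rotation_self_supervision(flow_image):
    """Expand one image into (rotated image, rotation label) training pairs."""
    pairs = []
    for label, angle in enumerate(ANGLES):
        # np.rot90 rotates the spatial grid in 90-degree steps.
        rotated = np.rot90(flow_image, k=angle // 90, axes=(0, 1))
        pairs.append((rotated, label))
    return pairs
```

A rotation-prediction head trained on these labels shares its feature extractor with the micro-expression classifier, which is how the auxiliary task strengthens feature learning.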
The purpose of this project is to introduce image-rotation self-supervision and an attention mechanism, further improving micro-expression recognition accuracy on top of the optical-flow method based on data augmentation. Specifically, on the MGEC2019 fusion data set, both the unweighted F1 score and the unweighted average recall reached 80%.
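The two reported metrics are plain macro averages: per-class F1 and per-class recall averaged with equal weight regardless of class frequency, which is what "unweighted" means here. The sketch below is a hedged illustration (the function name is chosen here, not taken from the patent).

```python
import numpy as np

def unweighted_scores(y_true, y_pred, classes):
    """Unweighted F1 and unweighted average recall (UAR): per-class
    scores averaged with equal weight, regardless of class frequency."""
    y_true = np.asarray(y_true)
    y_pred = np.asarray(y_pred)
    f1s, recalls = [], []
    for c in classes:
        tp = np.sum((y_pred == c) & (y_true == c))
        fp = np.sum((y_pred == c) & (y_true != c))
        fn = np.sum((y_pred != c) & (y_true == c))
        prec = tp / (tp + fp) if tp + fp > 0 else 0.0
        rec = tp / (tp + fn) if tp + fn > 0 else 0.0
        f1s.append(2 * prec * rec / (prec + rec) if prec + rec > 0 else 0.0)
        recalls.append(rec)
    return float(np.mean(f1s)), float(np.mean(recalls))
```

Unweighted averaging matters for micro-expression benchmarks because the classes are highly imbalanced; a weighted average would be dominated by the majority class.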
SUMMARY
There are two families of approaches to facial expression recognition: image-based methods and spatio-temporal methods. Image-based methods include LBP methods built on hand-crafted local key-point features, and convolutional neural network features based on capsule networks, among others. Spatio-temporal methods include two-time-scale convolutional neural networks, long-term recurrent convolutional networks, and methods based on optical flow. The optical-flow-based methods take the change of the micro-expression as input in the form of optical flow, whereas the image-based methods consider only the content of the image itself, ignoring the change information of the micro-expression. Methods based on optical flow therefore have an advantage over image-based methods. Current optical-flow-based methods mainly comprise better feature extraction methods (such as dual Inception and shallow triple-stream convolutional neural networks) and data augmentation methods (such as small-motion magnification and domain adaptation). Because the amount of micro-expression data is small (only 442 samples in the MGEC2019 integrated data set), the optical-flow method based on data augmentation has achieved good results.
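To make "taking the change of the micro-expression as input in the form of optical flow" concrete, here is a minimal single-patch Lucas-Kanade estimate of a global (dx, dy) motion between two grayscale frames. This is only an illustration of optical flow in general, not the specific flow method used by the patent; real systems compute a dense per-pixel flow field.

```python
import numpy as np

def lucas_kanade_flow(frame1, frame2):
    """Estimate a single (dx, dy) translation between two grayscale frames
    by least squares on the optical-flow constraint Ix*dx + Iy*dy = -It."""
    Ix = np.gradient(frame1, axis=1)   # horizontal spatial gradient
    Iy = np.gradient(frame1, axis=0)   # vertical spatial gradient
    It = frame2 - frame1               # temporal gradient
    A = np.stack([Ix.ravel(), Iy.ravel()], axis=1)
    b = -It.ravel()
    (dx, dy), *_ = np.linalg.lstsq(A, b, rcond=None)
    return float(dx), float(dy)
```

The recovered (dx, dy) field, computed per pixel in practice, is exactly the change signal that the flow-based recognizers feed to the classifier instead of raw frames.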
DESCRIPTION OF PREFERRED EMBODIMENT
A widely used unsupervised learning algorithm is the K-means algorithm, in which K is the number of clusters the user specifies. K-means starts with K random centroids. The algorithm calculates the distance from every point to each centroid, and every point is assigned to the nearest cluster centroid. This process is repeated until the cluster centroids stop changing. The idea of the implementation is: set K cluster centroid points, classify points by the nearest-centroid principle, recalculate the centroids, and repeat until nothing changes.
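The steps above can be sketched directly in NumPy. This is a minimal illustration of the described procedure, not code from the patent; the empty-cluster handling (keeping the old centroid) is an added assumption.

```python
import numpy as np

def kmeans(points, k, iters=100, seed=0):
    """Minimal K-means: start from K random centroids, assign every point
    to its nearest centroid, recompute centroids, repeat until stable."""
    rng = np.random.default_rng(seed)
    centroids = points[rng.choice(len(points), size=k, replace=False)]
    labels = np.zeros(len(points), dtype=int)
    for _ in range(iters):
        # Distance of every point to every centroid, shape (n_points, k).
        dists = np.linalg.norm(points[:, None, :] - centroids[None, :, :], axis=2)
        labels = dists.argmin(axis=1)
        new_centroids = np.array([
            points[labels == j].mean(axis=0) if np.any(labels == j) else centroids[j]
            for j in range(k)
        ])
        if np.allclose(new_centroids, centroids):
            break  # centroids stopped changing
        centroids = new_centroids
    return centroids, labels
```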
The research purpose of unsupervised learning is to make use of abundant unlabeled data to learn a robust image feature representation with the help of self-supervised labels.
The major problem of the micro-expression recognition task is that the data volume is small; introducing unsupervised learning to strengthen feature extraction therefore assists the recognition of micro-expressions. Moreover, unsupervised learning introduces no new labeled data and no new labor cost. Rotating the optical-flow images by fixed angles does not change the category of the micro-expressions, so we can use a set of fixed rotation angles as self-supervised labels and add them to model training. Besides, considering that the recognition of micro-expressions is based on a number of specific areas, such as the mouth and eyes, we introduce a spatial attention mechanism; highlighting the key positions for micro-expression recognition helps improve the recognition results.
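As an illustration of the idea, the sketch below reweights a (H, W, C) feature map with a softmax over spatial positions. A real spatial attention module would be learned end-to-end inside the network; this fixed, parameter-free version (an assumption made here for clarity) only shows how key regions such as the eyes and mouth can be emphasized relative to the rest of the face.

```python
import numpy as np

def spatial_attention(features):
    """Reweight a (H, W, C) feature map with softmax weights computed
    from the channel-averaged activation at each spatial position."""
    pooled = features.mean(axis=-1)          # (H, W) channel average
    weights = np.exp(pooled - pooled.max())  # numerically stable softmax
    weights = weights / weights.sum()        # normalize over all H*W positions
    return features * weights[..., None]     # broadcast weights over channels
```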
Our project introduces rotation-based self-supervision and a spatial attention mechanism, further improving the recognition accuracy of micro-expression recognition on top of the optical-flow method with data augmentation. To be specific, on the MGEC2019 fusion data set, both the unweighted F1 score and the unweighted average recall reach up to 80%.

Claims (1)

CLAIM
1. Psychological state analysis method based on facial micro-expression,
   characterized in that it includes: a widely used unsupervised learning
   algorithm known as the K-means algorithm, in which K is the number of
   clusters the user specifies; the K-means algorithm starts with K random
   centroids; and the algorithm calculates the distance of every point
   from each centroid.

Priority Applications (1)

Application Number Priority Date Filing Date Title
AU2020102556A AU2020102556A4 (en) 2020-10-01 2020-10-01 Psychological state analysis method based on facial micro-expression

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
AU2020102556A AU2020102556A4 (en) 2020-10-01 2020-10-01 Psychological state analysis method based on facial micro-expression

Publications (1)

Publication Number Publication Date
AU2020102556A4 true AU2020102556A4 (en) 2020-11-19

Family

ID=73249749

Family Applications (1)

Application Number Title Priority Date Filing Date
AU2020102556A Ceased AU2020102556A4 (en) 2020-10-01 2020-10-01 Psychological state analysis method based on facial micro-expression

Country Status (1)

Country Link
AU (1) AU2020102556A4 (en)


Cited By (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112690793A (en) * 2020-12-28 2021-04-23 中国人民解放军战略支援部队信息工程大学 Emotion electroencephalogram migration model training method and system and emotion recognition method and equipment
CN112690793B (en) * 2020-12-28 2023-05-16 中国人民解放军战略支援部队信息工程大学 Emotion electroencephalogram migration model training method and system and emotion recognition method and equipment
CN112766159A (en) * 2021-01-20 2021-05-07 重庆邮电大学 Cross-database micro-expression identification method based on multi-feature fusion
CN112800941A (en) * 2021-01-26 2021-05-14 中科人工智能创新技术研究院(青岛)有限公司 Face anti-fraud method and system based on asymmetric auxiliary information embedded network
CN113486863A (en) * 2021-08-20 2021-10-08 西南大学 Expression recognition method and device
CN116311483A (en) * 2023-05-24 2023-06-23 山东科技大学 Micro-expression recognition method based on local facial area reconstruction and memory contrast learning
CN116311483B (en) * 2023-05-24 2023-08-01 山东科技大学 Micro-expression recognition method based on local facial area reconstruction and memory contrast learning
CN116869530A (en) * 2023-05-31 2023-10-13 厦门纳智壳生物科技有限公司 Method and device for detecting mental health of residents by using human body micro-expressions and physiological indexes

Similar Documents

Publication Publication Date Title
AU2020102556A4 (en) Psychological state analysis method based on facial micro-expression
Ye et al. Recognizing american sign language gestures from within continuous videos
Shivashankara et al. American sign language recognition system: an optimal approach
Zheng et al. Recent advances of deep learning for sign language recognition
More et al. Sign language recognition using image processing
Berg et al. How do you tell a blackbird from a crow?
Deshmukh et al. Facial emotion recognition system through machine learning approach
Balasuriya et al. Learning platform for visually impaired children through artificial intelligence and computer vision
Yang et al. Analysis of interaction attitudes using data-driven hand gesture phrases
Singh et al. A Review For Different Sign Language Recognition Systems
Rozaliev et al. Methods and Models for Identifying Human Emotions by Recognition Gestures and Motion
Jain et al. Study for emotion recognition of different age groups students during online class
Dreuw et al. The signspeak project-bridging the gap between signers and speakers
Sandjaja et al. Sign language number recognition
Sun et al. The exploration of facial expression recognition in distance education learning system
Lungociu REAL TIME SIGN LANGUAGE RECOGNITION USING ARTIFICIAL NEURAL NETWORKS.
Dabwan et al. Recognition of American Sign Language Using Deep Convolution Network
Yang et al. Research on multimodal affective computing oriented to online collaborative learning
Moustafa et al. Integrated Mediapipe with a CNN Model for Arabic Sign Language Recognition
Enikeev et al. Russian Fingerspelling Recognition Using Leap Motion Controller
Dembani et al. UNSUPERVISED FACIAL EXPRESSION DETECTION USING GENETIC ALGORITHM.
Vidalón et al. Continuous sign recognition of brazilian sign language in a healthcare setting
Petkar et al. Real Time Sign Language Recognition System for Hearing and Speech Impaired People
Rai et al. Gesture recognition system
Shetty et al. Real-Time Translation of Sign Language for Speech Impaired

Legal Events

Date Code Title Description
FGI Letters patent sealed or granted (innovation patent)
MK22 Patent ceased section 143a(d), or expired - non payment of renewal fee or expiry