CN109034079B - Facial expression recognition method for non-standard posture of human face - Google Patents

Facial expression recognition method for non-standard posture of human face

Info

Publication number
CN109034079B
CN109034079B (application CN201810865356.XA)
Authority
CN
China
Prior art keywords
layer
vector
expression
representing
weight
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201810865356.XA
Other languages
Chinese (zh)
Other versions
CN109034079A (en)
Inventor
李�瑞
王儒敬
宋全军
谢成军
张洁
陈天娇
陈红波
胡海瀛
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Hefei Institutes of Physical Science of CAS
Original Assignee
Hefei Institutes of Physical Science of CAS
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Hefei Institutes of Physical Science of CAS filed Critical Hefei Institutes of Physical Science of CAS
Priority to CN201810865356.XA priority Critical patent/CN109034079B/en
Publication of CN109034079A publication Critical patent/CN109034079A/en
Application granted granted Critical
Publication of CN109034079B publication Critical patent/CN109034079B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Links

Images

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16Human faces, e.g. facial parts, sketches or expressions
    • G06V40/174Facial expression recognition
    • G06V40/175Static expression
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/24Classification techniques
    • G06F18/241Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
    • G06F18/2411Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches based on the proximity to a decision surface, e.g. support vector machines
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/40Extraction of image or video features
    • G06V10/44Local feature extraction by analysis of parts of the pattern, e.g. by detecting edges, contours, loops, corners, strokes or intersections; Connectivity analysis, e.g. of connected components
    • G06V10/443Local feature extraction by analysis of parts of the pattern, e.g. by detecting edges, contours, loops, corners, strokes or intersections; Connectivity analysis, e.g. of connected components by matching or filtering

Abstract

The invention relates to a facial expression recognition method for human faces in non-standard postures, which overcomes the defect of the prior art that expression recognition cannot be carried out when facial expression information is incomplete. The invention comprises the following steps: collecting and preprocessing training images; constructing a classification model; collecting and preprocessing the image to be detected; and recognizing the facial expression. Based on predictive analysis, the method predicts the facial expression even when facial expression information is incomplete, so that expressions can be recognized on faces viewed from different angles.

Description

Facial expression recognition method for non-standard posture of human face
Technical Field
The invention relates to the technical field of image recognition, in particular to a facial expression recognition method for a human face under a non-standard posture.
Background
Computer vision and pattern recognition techniques that allow a computer to judge human facial expressions have become a research focus for many scientists. In medicine, if a computer can effectively analyze a patient's expressions, it can soothe the patient or take further action as the expressions change, relieving the patient's pain both psychologically and physiologically. In daily life, if a computer can effectively identify the joy, anger, sadness and happiness of depressed children, the psychological burden on their families can be greatly relieved.
In the prior art, facial expression recognition can only be performed on a whole face captured in a standard shooting pose: the entire face of the person to be recognized must be directed toward the image pickup device so that complete facial information can be acquired. In practical applications of face recognition in the medical field, however, it is difficult to obtain such comprehensive facial information. Some technicians have proposed that facial expressions could be predicted from partial facial information using predictive analysis, but this has remained at a theoretical stage and the prediction accuracy is very low.
Therefore, how to recognize facial expressions when the human face is in a non-standard posture has become an urgent technical problem to be solved.
Disclosure of Invention
The invention aims to overcome the defect of the prior art that expression recognition cannot be performed when facial expression information is incomplete, and provides a facial expression recognition method for human faces in non-standard postures to solve this problem.
In order to achieve the purpose, the technical scheme of the invention is as follows:
A facial expression recognition method for a human face in a non-standard posture comprises the following steps:
11) collecting and preprocessing training images, namely collecting samples of 7 expressions — happiness, anger, sadness, surprise, disgust, fear and neutral — wherein the samples are facial expressions in non-standard postures and no fewer than 100 samples of each type are used as training images, and performing histogram equalization and normalization on all training images;
12) constructing a classification model, namely extracting the spatial information features of the 7 expressions and constructing the classification model based on these spatial features;
13) collecting and preprocessing the image to be detected, namely capturing with the acquisition device the facial expression to be recognized while the face is in a non-standard posture, and performing histogram equalization and normalization on the image to be detected to generate a test sample;
14) recognizing the facial expression, namely inputting the preprocessed test sample into the spatial feature network model to automatically recognize the facial expression.
The construction of the classification model comprises the following steps:
21) vectorization of the 7-expression training sample images:
F = [f_ij],  i = 1, 2, …, 7;  j = 1, 2, …, n
wherein F represents the spatial information features of the 7-expression training set and f_ij represents the feature vector of the j-th image of the i-th expression; there are n training images for each expression;
22) vectorizing the expression spatial features, wherein the direction of a vector represents the spatial information of the facial expression and the length of the vector represents the category probability of the expression;
23) calculating the weights of the extraction layer of the spatial feature network model,
inputting the vectorized facial expression features into the first convolution layer, applying nonlinear normalization, and performing the convolution operation with an 11 × 11 kernel; inputting the convolved feature vectors into the first spatial feature network with N neuron layers, obtaining the weights of the N neuron layers by iterating a dynamic path planning method, and outputting the spatial feature vectors;
24) calculating the weights of the classification layer of the spatial feature network model,
inputting the spatial feature vectors into the classification layer of the spatial network, which has 7 neuron layers; the weight of each neuron layer is obtained by calculating its loss function, thereby obtaining the classification model.
The vectorization of the expression spatial features comprises the following steps:
31) converting the i-th picture containing expression spatial features into a vector s_i of dimension w × h,
wherein w is the width of the image and h is the height of the image;
32) normalizing the scale of the input vector s_i, which is expressed as follows:
V_i = s_i / sqrt(s_i · s_i)
wherein V_i represents the feature vector of the normalized i-th picture and sqrt represents the square-root function.
The calculation of the weights of the extraction layer of the spatial feature network model comprises the following steps:
41) calculating the correlation coefficient from layer i to layer j:
c_ij = softmax(a_ij) = exp(a_ij + d) / Σ_k exp(a_ik + d)
wherein a_ij represents a constant from the i-th layer to the j-th layer with initial value 0; a_ij changes as the weights are iteratively updated; a_ik represents a constant from the i-th layer to the k-th layer; softmax(·) denotes the softmax function; k denotes the k-th layer network, and d denotes an offset;
42) computing the prediction vector from layer i to layer j:
û_ij = w_ij · v_i
wherein û_ij represents the prediction vector from the i-th layer to the j-th layer; w_ij represents the weight from the i-th layer to the j-th layer; and v_i represents the normalized input vector;
43) computing the activation vector s_j of layer j:
s_j = Σ_i c_ij · û_ij
wherein û_ij represents the prediction vector from the i-th layer to the j-th layer and c_ij represents the correlation coefficient from the i-th layer to the j-th layer;
44) calculating the normalized vector v_j of layer j:
v_j = (||s_j||² / (1 + ||s_j||²)) · (s_j / ||s_j||)
wherein s_j represents the activation vector;
45) repeating steps 41) to 44), continuously adjusting the values of a_ij, c_ij and w_ij until w_ij converges.
The calculation of the weights of the classification layer of the spatial feature network model comprises the following steps:
51) calculating the error between the predicted value and the true value:
E_loss = (1/N) Σ_i (F_i − W_i · V_i)²
wherein E_loss represents the error between the predicted value and the true value; F_i represents the true value of the i-th sub-image; W_i represents the classification weight of the i-th sub-image; V_i represents its feature vector; and N represents the total number of training images;
52) continuously adjusting the value of W_i until the value of E_loss falls to 0.01; the resulting W_i is the weight of the classification layer.
Advantageous effects
Compared with the prior art, the facial expression recognition method for human faces in non-standard postures predicts the facial expression from incomplete facial expression information using predictive analysis, so that expressions can be recognized on faces viewed from different angles.
Preprocessing removes the influence of illumination on facial expression recognition and simplifies the complex environment; the spatial feature model then recognizes the 7 expressions, yielding not only the category of each expression but also its probability, thereby realizing genuine emotion computation. The method can directly recognize facial expressions in non-standard postures at different angles, improving the robustness and accuracy of facial expression recognition.
Drawings
FIG. 1 is a sequence diagram of the method of the present invention;
FIG. 2 is a diagram of the facial expression recognition result of the Gabor feature + SVM classification method;
FIG. 3 is a diagram of the facial expression recognition result of the method of the present invention.
Detailed Description
So that the above-recited features of the present invention can be clearly and readily understood, a more particular description of the invention, briefly summarized above, is given below with reference to embodiments, some of which are illustrated in the appended drawings, wherein:
as shown in fig. 1, the method for recognizing facial expressions under non-standard postures of human faces according to the present invention includes the following steps:
in the first step, the collection and pre-processing of training images. 7 expression samples of happiness, anger, hurry, surprise, aversion, fear and neutrality are collected, the 7 expression samples are facial expressions in non-standard postures, no less than 100 expression samples of each type are used as training images, and histogram equalization and normalization processing are carried out on all the training images.
Second, the construction of the classification model: the spatial information features of the 7 expressions are extracted and the classification model is constructed on these spatial features. Constructing the classification model amounts to training its weights; through this training, the spatial model used for predictive analysis is obtained. The specific steps are as follows:
(1) Vectorization of the 7-expression training sample images:
F = [f_ij],  i = 1, 2, …, 7;  j = 1, 2, …, n
where F represents the spatial information features of the 7-expression training set and f_ij represents the feature vector of the j-th image of the i-th expression; there are n training images for each expression.
(2) Vectorization of the expression spatial features, wherein the direction of the vector represents the spatial information of the facial expression and the length of the vector represents the category probability of the expression. The steps are as follows, with a short code sketch after step A2:
a1, converting the ith picture containing expression space characteristics into a vector s with dimension w multiplied by hi
Where w is the width of the image and h is the height of the image.
A2. Normalize the scale of the input vector s_i, which is expressed as follows:
V_i = s_i / sqrt(s_i · s_i)
where V_i represents the feature vector of the normalized i-th picture and sqrt represents the square-root function.
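The sketch below illustrates steps A1 and A2 in NumPy. The Euclidean-norm denominator is an assumption, since the patent only states that a square-root function is involved.

```python
# Sketch of steps A1-A2; the Euclidean-norm denominator is an assumption -- the
# patent only states that a square-root (sqrt) function is used.
import numpy as np

def vectorize_and_normalize(img):
    s_i = img.reshape(-1).astype(np.float32)        # A1: w*h-dimensional vector s_i
    v_i = s_i / np.sqrt(np.sum(s_i ** 2) + 1e-8)    # A2: scale normalization via sqrt
    return v_i
```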
(3) Calculation of the weights of the extraction layer of the spatial feature network model. The vectorized facial expression features are input into the first convolution layer, nonlinear normalization is applied, and the convolution operation is performed with an 11 × 11 kernel; the convolved feature vectors are then input into the first spatial feature network with N neuron layers, the weights of the N neuron layers are obtained by iterating a dynamic path planning method, and the spatial feature vectors are output.
Here, in order to distinguish the correlation between different expressions and to be able to calculate the probability of each expression, a softmax() function is used to calculate the correlation parameter. The specific steps are as follows, with an illustrative code sketch of the iteration given after step B5:
b1, calculating the correlation coefficient from the i layer to the j layer:
Figure BDA0001750793080000061
wherein, aijRepresents a constant from the i-th layer to the j-th layer, and the initial value is 0; a isijWill change as the weight is iteratively updated, aijRepresents a constant from the i-th layer to the k-th layer;
Figure BDA0001750793080000062
represents the softmax function; k denotes a k-th layer network and d denotes an offset.
B2. Compute the prediction vector from layer i to layer j:
û_ij = w_ij · v_i
where û_ij represents the prediction vector from the i-th layer to the j-th layer; w_ij represents the weight from the i-th layer to the j-th layer; and v_i represents the normalized input vector.
B3. Compute the activation vector s_j of layer j:
s_j = Σ_i c_ij · û_ij
where û_ij represents the prediction vector from the i-th layer to the j-th layer and c_ij represents the correlation coefficient from the i-th layer to the j-th layer.
B4. Compute the normalized vector v_j of layer j:
v_j = (||s_j||² / (1 + ||s_j||²)) · (s_j / ||s_j||)
where s_j represents the activation vector.
B5. Repeat steps B1 to B4, continuously adjusting the values of a_ij, c_ij and w_ij until w_ij converges.
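The iteration in steps B1 to B5 resembles the dynamic routing used in capsule networks. The NumPy sketch below follows that reading and should be taken as an illustrative reconstruction, not the patent's exact procedure: the placement of the offset d, the squashing form of step B4, the agreement-based update of a_ij, and the fixed iteration count are assumptions, and the convolutional front end that produces the input vectors is omitted.

```python
# Illustrative dynamic-routing sketch (capsule-network-style reading of steps B1-B5).
# The offset d, the squash of B4 and the a_ij update rule are assumptions.
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def squash(s):
    """Normalize a layer-j activation so its length stays in (0, 1)."""
    norm2 = np.sum(s ** 2, axis=-1, keepdims=True)
    return (norm2 / (1.0 + norm2)) * s / np.sqrt(norm2 + 1e-8)

def dynamic_routing(v, W, n_iters=3, d=0.0):
    """
    v : (n_i, dim_in)                 normalized input vectors of layer i
    W : (n_i, n_j, dim_out, dim_in)   weights w_ij from layer i to layer j
    Returns the normalized vectors of layer j, shape (n_j, dim_out).
    """
    n_i, n_j = W.shape[0], W.shape[1]
    u_hat = np.einsum('ijkl,il->ijk', W, v)          # B2: prediction vectors u_hat_ij = w_ij @ v_i
    a = np.zeros((n_i, n_j))                         # constants a_ij, initialized to 0
    for _ in range(n_iters):                         # B5: iterate until convergence (fixed count here)
        c = softmax(a + d, axis=1)                   # B1: correlation coefficients c_ij
        s = np.sum(c[..., None] * u_hat, axis=0)     # B3: activation s_j = sum_i c_ij * u_hat_ij
        v_j = squash(s)                              # B4: normalized vector of layer j
        a = a + np.einsum('ijk,jk->ij', u_hat, v_j)  # update a_ij by agreement between u_hat and v_j
    return v_j
```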
(4) Calculation of the weights of the classification layer of the spatial feature network model.
The spatial feature vectors are input into the classification layer of the spatial network, which has 7 neuron layers; the weight of each neuron layer is obtained by calculating its loss function, yielding the classification model. The specific steps are as follows, with a short sketch after step C2:
C1. Calculate the error between the predicted value and the true value:
E_loss = (1/N) Σ_i (F_i − W_i · V_i)²
where E_loss represents the error between the predicted value and the true value; F_i represents the true value of the i-th sub-image; W_i represents the classification weight of the i-th sub-image; V_i represents its feature vector; and N represents the total number of training images.
C2. Continuously adjust the value of W_i until the value of E_loss falls to 0.01; the resulting W_i is the weight of the classification layer.
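A sketch of the classification-layer training in steps C1 and C2 follows, assuming NumPy, a mean-squared error of the form written above, one-hot expression labels, and simple gradient steps on W. The patent fixes only the 0.01 stopping threshold, so the update rule and label encoding are assumptions.

```python
# Illustrative classification-layer training for steps C1-C2; the gradient update
# and one-hot labels are assumptions -- only the 0.01 threshold comes from the patent.
import numpy as np

def train_classification_layer(V, F, lr=0.01, tol=0.01, max_steps=10000):
    """
    V : (N, dim)  feature vectors from the extraction layer
    F : (N, 7)    one-hot true expression labels
    Returns the classification weights W, shape (dim, 7).
    """
    N = V.shape[0]
    W = np.zeros((V.shape[1], F.shape[1]))
    for _ in range(max_steps):
        pred = V @ W                      # predicted values W_i * V_i
        err = pred - F
        e_loss = np.mean(err ** 2)        # C1: error between prediction and truth
        if e_loss <= tol:                 # C2: stop once E_loss reaches 0.01
            break
        W -= lr * (V.T @ err) / N         # adjust W to reduce E_loss
    return W
```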
Third, the collection and preprocessing of the image to be detected.
The facial expression to be recognized, captured by the acquisition device with the face in a non-standard posture, is collected, and histogram equalization and normalization are performed on the picture to be detected to generate a test sample.
Fourth, recognition of the facial expression: the preprocessed test sample is input into the spatial feature network model to automatically recognize the facial expression.
As shown in fig. 2, the result of recognizing a side-face expression with the Gabor feature + SVM classification method is a neutral expression, mainly because that method relies on recognizing the side-face contour, which leads to a high error rate.
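For reference, a rough sketch of a Gabor feature + SVM baseline of the kind compared against is shown below, assuming scikit-image and scikit-learn; the filter frequencies, the pooled statistics, and the SVM settings are illustrative assumptions and are not taken from the cited method.

```python
# Illustrative Gabor-feature + SVM baseline; frequencies and SVM settings are assumptions.
import numpy as np
from skimage.filters import gabor
from sklearn.svm import SVC

def gabor_features(img, frequencies=(0.1, 0.2, 0.4)):
    """Concatenate mean/std of Gabor filter response magnitudes as a simple feature vector."""
    feats = []
    for f in frequencies:
        real, imag = gabor(img, frequency=f)   # Gabor filter response at one frequency
        mag = np.hypot(real, imag)
        feats += [mag.mean(), mag.std()]
    return np.array(feats)

# Usage sketch: X_train is a list of preprocessed face images, y_train the expression labels (0-6).
# clf = SVC(kernel='rbf').fit([gabor_features(x) for x in X_train], y_train)
```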
As shown in fig. 3, the result of recognition by the spatial feature model method of the present invention is a happy expression, mainly because, through the calculation (training) of the weights, the spatial feature model obtains the category of each expression and analyzes its probability, thereby realizing genuine emotion computation.
As shown in Table 1, which compares the data analysis results of the Gabor feature + SVM classification method and the method of the invention, the recognition rate and the accuracy of the invention are higher for the same number of test samples.
TABLE 1: Comparison of expression recognition results of the Gabor feature + SVM classification method and the method of the present invention
(Table 1 is provided as an image in the original publication and is not reproduced here.)
The comparison of the two methods shows that the spatial feature model correctly recognizes facial expressions at different angles and has good robustness.
The foregoing shows and describes the general principles, essential features, and advantages of the invention. It will be understood by those skilled in the art that the invention is not limited to the embodiments described above, which merely illustrate its principles; various changes and modifications may be made without departing from the spirit and scope of the invention, and such changes and modifications fall within the scope of the invention as claimed. The scope of the invention is defined by the appended claims and their equivalents.

Claims (3)

1. A facial expression recognition method for a human face under a non-standard posture is characterized by comprising the following steps:
11) collecting and preprocessing training images, namely collecting samples of 7 expressions — happiness, anger, sadness, surprise, disgust, fear and neutral — wherein the samples are facial expressions in non-standard postures and no fewer than 100 samples of each type are used as training images, and performing histogram equalization and normalization on all training images;
12) constructing a classification model, namely extracting the spatial information features of the 7 expressions and constructing the classification model based on these spatial features; the construction of the classification model comprises the following steps:
121) vectorization of the 7-expression training sample images:
F = [f_ij],  i = 1, 2, …, 7;  j = 1, 2, …, n
wherein F represents the spatial information features of the 7-expression training set and f_ij represents the feature vector of the j-th image of the i-th expression; there are n training images for each expression;
122) vectorizing the expression spatial features, wherein the direction of a vector represents the spatial information of the facial expression and the length of the vector represents the category probability of the expression;
the vectorization processing of the expression space features comprises the following steps:
1221) converting the ith picture containing expression space characteristics into a vector s with dimension of w multiplied by hi
Wherein w is the width of the image and h is the height of the image;
1222) normalizing the scale of the input vector s_i, which is expressed as follows:
V_i = s_i / sqrt(s_i · s_i)
wherein V_i represents the feature vector of the normalized i-th picture and sqrt represents the square-root function;
123) calculating the weights of the extraction layer of the spatial feature network model,
inputting the vectorized facial expression features into the first convolution layer, applying nonlinear normalization, and performing the convolution operation with an 11 × 11 kernel; inputting the convolved feature vectors into the first spatial feature network with N neuron layers, obtaining the weights of the N neuron layers by iterating a dynamic path planning method, and outputting the spatial feature vectors;
124) calculating the weights of the classification layer of the spatial feature network model,
inputting the spatial feature vectors into the classification layer of the spatial network, which has 7 neuron layers; the weight of each neuron layer is obtained by calculating its loss function, thereby obtaining the classification model;
13) collecting and preprocessing the image to be detected, namely capturing with the acquisition device the facial expression to be recognized while the face is in a non-standard posture, and performing histogram equalization and normalization on the image to be detected to generate a test sample;
14) recognizing the facial expression, namely inputting the preprocessed test sample into the spatial feature network model to automatically recognize the facial expression.
2. The method according to claim 1, wherein calculating the weights of the extraction layer of the spatial feature network model comprises the following steps:
21) calculating the correlation coefficient from layer i to layer j:
c_ij = softmax(a_ij) = exp(a_ij + d) / Σ_k exp(a_ik + d)
wherein a_ij represents a constant from the i-th layer to the j-th layer with initial value 0; a_ij changes as the weights are iteratively updated; a_ik represents a constant from the i-th layer to the k-th layer; softmax(·) denotes the softmax function; k denotes the k-th layer network, and d denotes an offset;
22) computing the prediction vector from layer i to layer j:
û_ij = w_ij · v_i
wherein û_ij represents the prediction vector from the i-th layer to the j-th layer; w_ij represents the weight from the i-th layer to the j-th layer; and v_i represents the normalized input vector;
23) computing the activation vector s_j of layer j:
s_j = Σ_i c_ij · û_ij
wherein û_ij represents the prediction vector from the i-th layer to the j-th layer and c_ij represents the correlation coefficient from the i-th layer to the j-th layer;
24) calculating the normalized vector v_j of layer j:
v_j = (||s_j||² / (1 + ||s_j||²)) · (s_j / ||s_j||)
wherein s_j represents the activation vector;
25) repeating steps 21) to 24), continuously adjusting the values of a_ij, c_ij and w_ij until w_ij converges.
3. The method for recognizing facial expressions under non-standard postures as claimed in claim 1, wherein calculating the weights of the classification layer of the spatial feature network model comprises the following steps:
31) calculating the error between the predicted value and the true value:
E_loss = (1/N) Σ_i (F_i − W_i · V_i)²
wherein E_loss represents the error between the predicted value and the true value; F_i represents the true value of the i-th sub-image; W_i represents the classification weight of the i-th sub-image; V_i represents its feature vector; and N represents the total number of training images;
32) continuously adjusting the value of W_i until the value of E_loss falls to 0.01; the resulting W_i is the weight of the classification layer.
CN201810865356.XA 2018-08-01 2018-08-01 Facial expression recognition method for non-standard posture of human face Active CN109034079B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201810865356.XA CN109034079B (en) 2018-08-01 2018-08-01 Facial expression recognition method for non-standard posture of human face

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201810865356.XA CN109034079B (en) 2018-08-01 2018-08-01 Facial expression recognition method for non-standard posture of human face

Publications (2)

Publication Number Publication Date
CN109034079A CN109034079A (en) 2018-12-18
CN109034079B true CN109034079B (en) 2022-03-11

Family

ID=64648506

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201810865356.XA Active CN109034079B (en) 2018-08-01 2018-08-01 Facial expression recognition method for non-standard posture of human face

Country Status (1)

Country Link
CN (1) CN109034079B (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111128369A (en) * 2019-11-18 2020-05-08 创新工场(北京)企业管理股份有限公司 Method and device for evaluating Parkinson's disease condition of patient

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105956570A (en) * 2016-05-11 2016-09-21 电子科技大学 Lip characteristic and deep learning based smiling face recognition method
CN107742117A (en) * 2017-11-15 2018-02-27 北京工业大学 A kind of facial expression recognizing method based on end to end model

Family Cites Families (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR101877981B1 (en) * 2011-12-21 2018-07-12 한국전자통신연구원 System for recognizing disguised face using gabor feature and svm classifier and method thereof
CN104077579B (en) * 2014-07-14 2017-07-04 上海工程技术大学 Facial expression recognition method based on expert system
CN105654049B (en) * 2015-12-29 2019-08-16 中国科学院深圳先进技术研究院 The method and device of facial expression recognition
KR102221118B1 (en) * 2016-02-16 2021-02-26 삼성전자주식회사 Method for extracting feature of image to recognize object
CN105825183B (en) * 2016-03-14 2019-02-12 合肥工业大学 Facial expression recognizing method based on partial occlusion image
CN107871098B (en) * 2016-09-23 2021-04-13 北京眼神科技有限公司 Method and device for acquiring human face characteristic points
CN106682616B (en) * 2016-12-28 2020-04-21 南京邮电大学 Method for recognizing neonatal pain expression based on two-channel feature deep learning
CN107316059B (en) * 2017-06-16 2020-07-28 陕西师范大学 Learner gesture recognition method
CN107437100A (en) * 2017-08-08 2017-12-05 重庆邮电大学 A kind of picture position Forecasting Methodology based on the association study of cross-module state
CN107977650B (en) * 2017-12-21 2019-08-23 北京华捷艾米科技有限公司 Method for detecting human face and device

Patent Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105956570A (en) * 2016-05-11 2016-09-21 电子科技大学 Lip characteristic and deep learning based smiling face recognition method
CN107742117A (en) * 2017-11-15 2018-02-27 北京工业大学 A kind of facial expression recognizing method based on end to end model

Also Published As

Publication number Publication date
CN109034079A (en) 2018-12-18

Similar Documents

Publication Publication Date Title
CN108596039B (en) Bimodal emotion recognition method and system based on 3D convolutional neural network
CN108615010B (en) Facial expression recognition method based on parallel convolution neural network feature map fusion
CN106682616B (en) Method for recognizing neonatal pain expression based on two-channel feature deep learning
CN108491077B (en) Surface electromyographic signal gesture recognition method based on multi-stream divide-and-conquer convolutional neural network
CN112784763B (en) Expression recognition method and system based on local and overall feature adaptive fusion
CN112784798A (en) Multi-modal emotion recognition method based on feature-time attention mechanism
CN109472247B (en) Face recognition method based on deep learning non-fit type
CN112580590A (en) Finger vein identification method based on multi-semantic feature fusion network
CN113749657B (en) Brain electricity emotion recognition method based on multi-task capsule
CN111967354B (en) Depression tendency identification method based on multi-mode characteristics of limbs and micro-expressions
CN112766355A (en) Electroencephalogram signal emotion recognition method under label noise
CN112380924B (en) Depression tendency detection method based on facial micro expression dynamic recognition
CN109344720B (en) Emotional state detection method based on self-adaptive feature selection
CN112101096A (en) Suicide emotion perception method based on multi-mode fusion of voice and micro-expression
He et al. Identification of finger vein using neural network recognition research based on PCA
CN114511912A (en) Cross-library micro-expression recognition method and device based on double-current convolutional neural network
CN116110089A (en) Facial expression recognition method based on depth self-adaptive metric learning
CN110210380B (en) Analysis method for generating character based on expression recognition and psychological test
CN111259759A (en) Cross-database micro-expression recognition method and device based on domain selection migration regression
CN109034079B (en) Facial expression recognition method for non-standard posture of human face
CN113627391A (en) Cross-mode electroencephalogram signal identification method considering individual difference
Liu et al. Facial expression recognition for in-the-wild videos
CN113255543A (en) Facial expression recognition method based on graph convolution network
CN113128353A (en) Emotion sensing method and system for natural human-computer interaction
CN117198468A (en) Intervention scheme intelligent management system based on behavior recognition and data analysis

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant