CN112115847A - Method for judging face emotion pleasure degree - Google Patents

Method for judging face emotion pleasure degree

Info

Publication number: CN112115847A
Authority: CN (China)
Prior art keywords: pleasure, value, judging, degree, face image
Legal status: Granted; currently active
Application number: CN202010971949.1A
Other languages: Chinese (zh)
Other versions: CN112115847B
Inventor: 雷李义
Assignee (current and original): Shenzhen Image Data Technology Co ltd
Priority and filing date: 2020-09-16
Publication dates: 2020-12-22 (CN112115847A), 2024-05-17 (grant, CN112115847B)


Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
        • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
            • G06V40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
            • G06V40/10 Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
            • G06V40/16 Human faces, e.g. facial parts, sketches or expressions
            • G06V40/168 Feature extraction; Face representation
            • G06V40/172 Classification, e.g. identification
            • G06V40/174 Facial expression recognition
        • G06F ELECTRIC DIGITAL DATA PROCESSING
            • G06F18/00 Pattern recognition
            • G06F18/20 Analysing
            • G06F18/21 Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
            • G06F18/213 Feature extraction, e.g. by transforming the feature space; Summarisation; Mappings, e.g. subspace methods
        • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
            • G06N3/00 Computing arrangements based on biological models
            • G06N3/02 Neural networks
            • G06N3/04 Architecture, e.g. interconnection topology
            • G06N3/045 Combinations of networks


Abstract

The method for judging the pleasure degree of facial emotion comprises the following steps. Step S1: preprocess the face image by resizing the cropped face image and normalizing its pixels. Step S2: extract features from the preprocessed face image. Step S3: apply global average pooling to the extracted feature vector to reduce its dimensionality. Step S4: perform feature regression on the reduced feature vector and output a one-dimensional feature value. Step S5: dynamically determine a pleasure reference value, judge the pleasure state from the determined reference value and the one-dimensional feature value, and output a pleasure representation value. The invention solves the technical problem that the prior art can only classify a user's emotion by expression and cannot judge the degree of the emotion or how it changes.

Description

Method for judging face emotion pleasure degree
Technical Field
The invention relates to the technical field of face recognition, and in particular to a method for judging the pleasure degree of facial emotion.
Background
Facial expressions reflect a person's emotional state to a certain extent. With the development of artificial-intelligence technology, techniques that judge a user's emotion from facial expressions are widely applied in social life and enrich the dimensions of collected user information. However, most existing applications can only classify the user's emotion by expression, for example into categories such as calm, happy, sad, and surprised. They cannot judge the degree of the emotion (how happy or how sad the user is), nor track emotional change from subtle changes of expression, such as a user gradually becoming happier from sadness or sadder while in a happy mood.
Disclosure of Invention
The invention aims to provide a method for judging the pleasure degree of facial emotion, solving the technical problem that the prior art can only classify a user's emotion by expression and cannot judge the degree or change of that emotion.
The method for judging the pleasure degree of facial emotion comprises the following steps:
step S1: preprocessing the face image: resizing the cropped face image and normalizing the image pixels;
step S2: extracting features from the preprocessed face image;
step S3: applying global average pooling to the extracted feature vector and reducing its dimensionality;
step S4: performing feature regression on the reduced feature vector and outputting a one-dimensional feature value;
step S5: dynamically determining a pleasure reference value, judging the pleasure state from the determined reference value and the one-dimensional feature value, and outputting a pleasure representation value.
After normalizing the pixels of the captured image, the method performs feature regression on the extracted feature vector and outputs a one-dimensional feature value; this value varies continuously and directly represents the current pleasure degree of the emotion. The facial emotion is then classified against a dynamically changing pleasure reference value while a pleasure representation value is output to indicate the pleasure state. By regressing the facial features to a continuous value, the method reflects the pleasure degree of the facial emotion precisely through the magnitude of that value, and by dynamically setting pleasure reference values for different faces it accurately judges how the current user's pleasure degree is changing.
Drawings
FIG. 1 is a schematic overall flow diagram of the present invention;
FIG. 2 is a schematic diagram of the feature vector dimension reduction process of the present invention;
FIG. 3 is a schematic diagram of the process of determining the pleasure state using a dynamic pleasure reference value according to the present invention.
Detailed Description
The invention will be further illustrated and described below with reference to specific embodiments and the accompanying drawings:
Referring to FIG. 1, the method for judging the pleasure degree of facial emotion according to the invention comprises:
Step S1: preprocessing the face image: resizing the cropped face image and normalizing the image pixels.
In this step, preprocessing consists of two sub-steps: resizing and normalizing the face image. Because the regression network used in subsequent steps requires a fixed input size, and considering the computational performance of mobile terminals, the length and width of the input image are adjusted to a fixed 224 × 224. Image pixel values range from 0 to 255 while the pleasure degree output by the model ranges from 0 to 1, so the pixel values of the face image are first normalized: the values in each of the three RGB channels are mapped into the interval 0 to 1. The preprocessed color face image therefore has size 224 × 224 × 3 with values between 0 and 1.
Step S2: extracting features from the preprocessed face image.
After preprocessing of the face image is finished, this patent extracts features from the image with a convolutional network and computes the feature vector of the face image; the convolutional network is built mainly from the bottleneck modules of the MobileNetV3 network. The network computes a feature map of size 4 × 4 × 576 that contains the visual features of the face image. The parameters of the convolutional network used for feature extraction are obtained by training on a large number of labeled face images, with labels ranging from 0 to 1.
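The patent does not publish its exact network, but torchvision's MobileNetV3-Small is likewise built from bottleneck modules and ends in a 576-channel feature map, so it can serve as a rough stand-in. Note the assumption: for a 224 × 224 input its final map is 7 × 7 × 576 rather than the 4 × 4 × 576 stated above, so the patent's own variant evidently differs in spatial resolution.

```python
import torch
from torchvision.models import mobilenet_v3_small

# Bottleneck feature extractor only (the classification head is dropped).
backbone = mobilenet_v3_small(weights=None).features

x = torch.rand(1, 3, 224, 224)   # a preprocessed face, values in [0, 1]
fmap = backbone(x)               # -> torch.Size([1, 576, 7, 7])
```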
Step S3: applying global average pooling and dimensionality reduction to the extracted feature vector.
After feature extraction, the obtained facial features are high-dimensional and difficult to regress directly. Global average pooling is therefore applied to the feature map: an average value is computed for each channel and used to represent that channel's feature. This reduces the feature dimensionality by a factor of 16, yielding a 576-dimensional feature vector.
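A short PyTorch sketch of this pooling, assuming the usual (N, C, H, W) layout and the 4 × 4 × 576 map described above:

```python
import torch
import torch.nn.functional as F

fmap = torch.rand(1, 576, 4, 4)            # feature map from step S2
pooled = F.adaptive_avg_pool2d(fmap, 1)    # per-channel mean -> (1, 576, 1, 1)
vec = pooled.flatten(1)                    # (1, 576): a 16x (4 x 4) reduction
```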
Step S4: performing feature regression on the reduced feature vector and outputting a one-dimensional feature value.
The feature regression proceeds as follows:
Referring to FIG. 2, the new feature vector is regressed by a regression network consisting of two fully connected layers, a first layer and a second layer connected to it. The first layer uses a linear rectification function (ReLU) and independently computes an output for each dimension of the feature vector; the second layer uses a linear rectification function to compute a one-dimensional feature value from the incoming feature vector.
The feature vector input to the first layer is 576-dimensional and its output is 128-dimensional; the second layer takes the 128-dimensional feature vector and outputs a one-dimensional feature value that represents the model's judgment of the facial emotion's pleasure degree: the higher the value, the higher the pleasure. The parameters of the regression network are obtained by training on a large number of labeled face images, each label being a floating-point number in the range 0 to 1 that represents the image's pleasure degree.
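A sketch of this regression head in PyTorch, following the description literally (576 → 128 → 1, with ReLU after both layers). Keeping ReLU on the final scalar is the text's choice; a sigmoid would be the more common way to bound the output to (0, 1):

```python
import torch.nn as nn

regression_head = nn.Sequential(
    nn.Linear(576, 128),   # first fully connected layer
    nn.ReLU(),
    nn.Linear(128, 1),     # second layer: one-dimensional pleasure value
    nn.ReLU(),             # per the description; nn.Sigmoid() would bound to (0, 1)
)
```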
Step S5: dynamically determining a pleasure reference value, judging the pleasure state from the determined reference value and the one-dimensional feature value, and outputting a pleasure representation value.
Referring to FIG. 3, once regression of the facial features has produced the one-dimensional continuous value representing pleasure, the method accounts for the fact that every user's face, and hence the range of pleasure values it produces, differs; it therefore judges different users' pleasure states more accurately by dynamically determining the pleasure reference value. After the one-dimensional feature value for the current face is obtained, the method checks whether the emotion module has been initialized, that is, whether pleasure information for the previous second has been recorded. If initialization is not complete, no further judgment is made; the pleasure data is recorded and the method exits. If initialization is complete, the average pleasure over the previous second is computed and the method checks whether a pleasure reference value exists. If one exists, the current pleasure state is judged against it; if not, the method checks whether the previous second's average can be set as the reference value. The final output contains the one-dimensional continuous value representing pleasure and a discrete value representing the pleasure state, such as happy, sad, or excited.
Specifically, after the one-dimensional feature value is obtained, the method judges whether the system, or the facial-emotion module, has been initialized. If so, it computes the average pleasure over the previous second and then checks whether a pleasure reference value exists for this person. If one exists, it computes the difference between the current pleasure average and the pleasure reference value, compares that difference against preset pleasure-level thresholds to judge the current pleasure level (whether it belongs to the excited state, for example), and then outputs the pleasure value and the pleasure state.
If no reference value exists yet and the pleasure average lies within a reasonable range, the average is set as the pleasure reference value; if either condition is not satisfied, no reference value is set.
If the system or the facial-emotion module has not been initialized, the method simply exits to the pleasure calculation of the next time interval and repeats the above operations.
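The following plain-Python sketch mirrors this decision flow. The sampling rate used to approximate "the previous second", the "reasonable range" accepted for a baseline, and the state thresholds are not given in the patent and are assumed here for illustration:

```python
from collections import deque

FPS = 25                     # assumed sampling rate (frames per second)
VALID_RANGE = (0.05, 0.95)   # assumed "reasonable range" for adopting a baseline
DELTA = 0.3                  # assumed threshold between pleasure states

class PleasureJudge:
    def __init__(self):
        self.window = deque(maxlen=FPS)   # pleasure values of the previous second
        self.baseline = None              # per-face pleasure reference value

    def update(self, value: float):
        """Record one pleasure value; return (value, state) once judging is possible."""
        if len(self.window) < self.window.maxlen:   # module not initialized yet:
            self.window.append(value)               # record the value and exit
            return None
        avg = sum(self.window) / len(self.window)   # mean pleasure, previous second
        self.window.append(value)                   # oldest sample drops out
        if self.baseline is None:
            if VALID_RANGE[0] <= avg <= VALID_RANGE[1]:
                self.baseline = avg                 # adopt the average as baseline
            return None
        diff = avg - self.baseline                  # compare against the baseline
        if diff > DELTA:
            state = "excited"
        elif diff < -DELTA:
            state = "sad"
        else:
            state = "calm"
        return value, state   # continuous pleasure value + discrete state
```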
Finally, it should be noted that the above embodiments are intended only to illustrate the technical solutions of the present invention, not to limit its scope of protection. Although the invention has been described in detail with reference to preferred embodiments, those skilled in the art should understand that modifications or equivalent substitutions may be made to these technical solutions without departing from their spirit and scope.

Claims (7)

1. A method for judging the pleasure degree of facial emotion, characterized by comprising the following steps:
step S1: preprocessing the face image: resizing the cropped face image and normalizing the image pixels;
step S2: extracting features from the preprocessed face image;
step S3: applying global average pooling to the extracted feature vector and reducing its dimensionality;
step S4: performing feature regression on the reduced feature vector and outputting a one-dimensional feature value;
step S5: dynamically determining a pleasure reference value, judging the pleasure state from the determined reference value and the one-dimensional feature value, and outputting a pleasure representation value.
2. The method for judging the pleasure degree of facial emotion according to claim 1, wherein normalizing the image pixels comprises mapping the pixel values in each RGB channel of the image to values between 0 and 1.
3. The method for judging the pleasure degree of facial emotion according to claim 1, wherein extracting features from the face image comprises: extracting features from the image with a convolutional network and computing a feature vector of the face image, the feature vector containing the visual feature information of the face image.
4. The method for judging the pleasure degree of facial emotion according to claim 3, wherein the global average pooling comprises computing an average value in each channel of the feature vector extracted in step S2 and taking the per-channel averages as a new feature vector.
5. The method for judging the pleasure degree of facial emotion according to claim 1 or 3, wherein the feature regression comprises:
regressing the new feature vector with a regression network, the regression network comprising two fully connected layers, a first layer and a second layer connected to the first layer, wherein the first layer uses a linear rectification function to independently compute an output for each dimension of the feature vector, and the second layer uses a linear rectification function to compute a one-dimensional feature value from the incoming feature vector.
6. The method for judging the pleasure degree of facial emotion according to claim 5, wherein the parameters of the regression network are obtained by training on images labeled with facial-emotion pleasure values, the labeled values ranging from 0 to 1.
7. The method for judging the pleasure degree of facial emotion according to claim 6, wherein outputting the pleasure representation value comprises: after the one-dimensional feature value is obtained in step S4, judging whether a pleasure value of the same face in the previous time period has been recorded; if no pleasure value from the previous time period is found, making no output; if one is found, judging whether a pleasure reference value has been recorded, and if so, judging the current pleasure state against the average of the pleasure values in the previous time period and outputting that average as the pleasure value.

Priority Applications (1)

Application Number  Priority Date  Filing Date  Title
CN202010971949.1A   2020-09-16     2020-09-16   Face emotion pleasure degree judging method

Publications (2)

Publication Number  Publication Date
CN112115847A        2020-12-22
CN112115847B        2024-05-17

Family ID: 73802753

Family Applications (1)

Application Number         Title         Priority Date  Filing Date
CN202010971949.1A (Active) CN112115847B  2020-09-16     2020-09-16

Country Status (1)

Country  Link
CN       CN112115847B (en)

Citations (17)

* Cited by examiner, † Cited by third party

Publication number  Priority date  Publication date  Assignee  Title
CN105005763A * 2015-06-26 2015-10-28 李战斌 Face recognition method and face recognition system based on local feature information mining
CN106919903A * 2017-01-19 2017-07-04 中国科学院软件研究所 Robust continuous emotion tracking method based on deep learning
US20170255864A1 * 2016-03-05 2017-09-07 Panoramic Power Ltd. Systems and methods for determination of a device state based on current consumption monitoring and machine learning thereof
CN108460324A * 2018-01-04 2018-08-28 上海孩子通信息科技有限公司 Method for recognizing a child's emotion
CN109064217A * 2018-07-16 2018-12-21 阿里巴巴集团控股有限公司 Method, apparatus and electronic device for determining an identity-verification strategy based on user level
CN109447001A * 2018-10-31 2019-03-08 深圳市安视宝科技有限公司 Dynamic emotion recognition method
CN109840485A * 2019-01-23 2019-06-04 科大讯飞股份有限公司 Micro-expression facial feature extraction method, apparatus, device and readable storage medium
US20190205625A1 * 2017-12-28 2019-07-04 Adobe Inc. Facial expression recognition utilizing unsupervised learning
CN110197107A * 2018-08-17 2019-09-03 平安科技(深圳)有限公司 Micro-expression recognition method, apparatus, computer device and storage medium
CN110555379A * 2019-07-30 2019-12-10 华南理工大学 Human face pleasure degree estimation method that dynamically adjusts features according to gender
WO2020020472A1 * 2018-07-24 2020-01-30 Fundación Centro Tecnoloxico De Telecomunicacións De Galicia A computer-implemented method and system for detecting small objects in an image using convolutional neural networks
KR20200012355A * 2018-07-27 2020-02-05 백석대학교산학협력단 Online lecture monitoring method using constrained local model and Gabor wavelets-based face verification process
CN110850049A * 2019-08-15 2020-02-28 清华大学 Water quality monitoring and water sensory pleasure degree evaluation method
CN111127830A * 2018-11-01 2020-05-08 奇酷互联网络科技(深圳)有限公司 Alarm method, alarm system and readable storage medium based on monitoring equipment
CN111191765A * 2019-12-31 2020-05-22 华为技术有限公司 Emotion information processing method and device
US20200178888A1 * 2017-07-28 2020-06-11 Osaka University Discernment of comfort/discomfort
CN111597884A * 2020-04-03 2020-08-28 平安科技(深圳)有限公司 Facial action unit recognition method and device, electronic device and storage medium


Also Published As

Publication number Publication date
CN112115847B (en) 2024-05-17


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant