CN107463888B - Face emotion analysis method and system based on multi-task learning and deep learning - Google Patents

Face emotion analysis method and system based on multi-task learning and deep learning

Info

Publication number
CN107463888B
CN107463888B
Authority
CN
China
Prior art keywords
face
learning
analysis
emotion
layer
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201710602227.7A
Other languages
Chinese (zh)
Other versions
CN107463888A (en)
Inventor
简仁贤
杨闵淳
张为义
许世焕
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Emotibot Technologies Ltd
Original Assignee
Emotibot Technologies Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Emotibot Technologies Ltd filed Critical Emotibot Technologies Ltd
Priority to CN201710602227.7A priority Critical patent/CN107463888B/en
Publication of CN107463888A publication Critical patent/CN107463888A/en
Application granted granted Critical
Publication of CN107463888B publication Critical patent/CN107463888B/en

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00: Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10: Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16: Human faces, e.g. facial parts, sketches or expressions
    • G06V40/172: Classification, e.g. identification
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06N: COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00: Computing arrangements based on biological models
    • G06N3/02: Neural networks
    • G06N3/04: Architecture, e.g. interconnection topology
    • G06N3/045: Combinations of networks
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00: Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10: Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16: Human faces, e.g. facial parts, sketches or expressions
    • G06V40/168: Feature extraction; Face representation

Landscapes

  • Engineering & Computer Science (AREA)
  • Health & Medical Sciences (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • General Health & Medical Sciences (AREA)
  • Oral & Maxillofacial Surgery (AREA)
  • General Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Human Computer Interaction (AREA)
  • Biophysics (AREA)
  • Software Systems (AREA)
  • Molecular Biology (AREA)
  • Computing Systems (AREA)
  • General Engineering & Computer Science (AREA)
  • Data Mining & Analysis (AREA)
  • Mathematical Physics (AREA)
  • Evolutionary Computation (AREA)
  • Computational Linguistics (AREA)
  • Biomedical Technology (AREA)
  • Artificial Intelligence (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Image Processing (AREA)
  • Image Analysis (AREA)

Abstract

The invention provides a face emotion analysis method and system based on multi-task learning and deep learning, which comprise: learning convolutional layers for preset analysis tasks on a face library by using a convolutional neural network to obtain a face analysis model; acquiring a face image to be analyzed, analyzing it by using a face detection algorithm, and extracting the face regions in the image; and predicting on the face image to be analyzed by using the face analysis model to obtain the emotion information corresponding to each face region. The invention applies the concept of multi-task learning to the convolutional neural network, so that multiple face-related analysis tasks can be recognized with the same analysis model, which reduces the size of the analysis model and shortens the recognition time. In addition, the invention assigns different convolutional layers to different parts of the face, so that the task of each convolutional layer is single and precise and a better recognition effect can be achieved.

Description

Face emotion analysis method and system based on multi-task learning and deep learning
Technical Field
The invention belongs to the technical field of computer vision and image processing for human-computer interaction, and particularly relates to a face emotion analysis method based on multi-task learning and deep learning.
Background
With the development of computer vision technology, more and more related techniques have been applied to human-computer interaction, and in recent years to affective computing in particular. An automatic face emotion recognition system makes human emotion easier to understand, so that a computer can quickly and directly obtain feedback on a user's emotion, improving the quality of human-computer interaction.
In a conventional face emotion recognition system, a common approach is to capture the face image, extract low-order feature values from it, and then train a classifier by machine learning to recognize emotion categories (such as happy, sad, surprised, etc.). Alternatively, the behavior on the face is described through recognition of face action units (Action Units), and the category of the face emotion is judged from the activation strength of combinations of action units. On the other hand, face emotion is difficult to describe with a few discrete emotion categories, so previous research projects the different emotion categories onto a continuous emotion space (such as the V-A space), in which the values on the two coordinate axes, Valence (evaluation value) and Arousal (arousal degree), can describe a wider range of emotions. Although face action units and the emotion space are widely used for face emotion recognition tasks and can achieve good results, they still have limitations.
Disclosure of Invention
Aiming at the defects in the prior art, the invention provides a face emotion analysis method and system based on multi-task learning and deep learning, which reduce the size of the analysis model, shorten the face emotion recognition time, and achieve a better recognition effect.
A face emotion analysis method based on multitask learning and deep learning comprises the following steps:
training a face analysis model: learning a convolutional layer of a preset analysis task in a face library by using a convolutional neural network to obtain a face analysis model;
a human face region extraction step: acquiring a face image to be analyzed, analyzing the face image to be analyzed by using a face detection algorithm, and extracting a face region in the face image to be analyzed;
a prediction step: and predicting the face image to be analyzed by using the face analysis model to obtain emotion information corresponding to each face area in the face image to be analyzed.
Preferably, in the step of training the face analysis model, the analysis task includes a face attribute; when the convolution layer of the face attributes is learned, the face attributes to be learned are marked, the characteristic values of different face parts are captured according to the face parts corresponding to the face attributes to be learned on the face area, and the first convolution layer is obtained through learning.
Preferably, in the step of training the face analysis model, the analysis task includes a face action; when learning the convolution layer of the face action, presetting different sub-action convolution layers according to different face actions, adding corresponding sub-action convolution layers in the output layer of the first convolution layer according to the face part, and learning to obtain a second convolution layer.
Preferably, in the step of training the face analysis model, the analysis task includes an emotion space; and when learning the convolutional layer of the emotion space, selecting a second convolutional layer according to the evaluation value and the arousal degree, adding a preset sub-emotion convolutional layer into an output layer of the selected second convolutional layer, and learning to obtain a third convolutional layer.
Preferably, in the step of training the face analysis model, an emotion space numerical target is preset, and the emotion space numerical target is reached by learning the third convolution layer and a preset full connection layer, so as to obtain the face analysis model.
A face emotion analysis system based on multitask learning and deep learning, which is applicable to the above face emotion analysis method based on multitask learning and deep learning, comprises:
a training face analysis module: used for learning a convolutional layer of a preset analysis task in a face library by using a convolutional neural network to obtain a face analysis model;
a face detection module: used for acquiring a face image to be analyzed;
a face analysis module: used for analyzing the face image to be analyzed by using a face detection algorithm and extracting the face regions in the face image to be analyzed;
a prediction module: used for predicting the face image to be analyzed by using the face analysis model to obtain the emotion information corresponding to each face region in the face image to be analyzed.
Preferably, the training face analysis module comprises a face attribute unit, and the analysis task comprises a face attribute; the face attribute unit is used for learning the convolution layer of the face attribute, including marking the face attributes to be learned; the face attribute unit captures the characteristic values of different face parts according to the face parts corresponding to the face attributes to be learned on the face area, and learns to obtain a first convolution layer.
Preferably, the training face analysis module comprises a face action unit, and the analysis task comprises a face action; the face action unit is used for learning the convolution layer of the face action, including presetting different sub-action convolution layers for different face actions; the corresponding sub-action convolution layers are added to the output layer of the first convolution layer according to the face part, and a second convolution layer is obtained through learning.
Preferably, the training face analysis module comprises an emotion space unit, and the analysis task comprises an emotion space; the emotion space unit is used for learning the convolutional layer of the emotion space, including selecting a second convolutional layer according to the evaluation value and the arousal degree, adding a preset sub-emotion convolutional layer into the output layer of the selected second convolutional layer, and learning to obtain a third convolutional layer.
Preferably, the training face analysis module is preset with an emotion space numerical target, and the emotion space numerical target is reached by learning the third convolution layer and a preset full connection layer, so as to obtain the face analysis model.
According to the above technical scheme, the face emotion analysis method and system based on multi-task learning and deep learning provided by the invention apply the concept of multi-task learning to the convolutional neural network, so that multiple face-related analysis tasks (face attribute recognition, face action unit recognition, and face emotion space value estimation) can be recognized with the same analysis model, which reduces the size of the analysis model and shortens the recognition time. In addition, the invention assigns different convolutional layers to different parts of the face, so that the task of each convolutional layer is single and precise and a better recognition effect can be achieved. Moreover, because the framework is designed on the basis of multi-task learning and shares feature values across tasks, it also has good extensibility: other face-related tasks can easily be added to the framework according to different requirements.
Drawings
In order to more clearly illustrate the detailed description of the invention or the technical solutions in the prior art, the drawings that are needed in the detailed description of the invention or the prior art will be briefly described below. Throughout the drawings, like elements or portions are generally identified by like reference numerals. In the drawings, elements or portions are not necessarily drawn to scale.
Fig. 1 is a processing block diagram of a face detection module and a face analysis module in the embodiment.
Fig. 2 is a processing block diagram of face emotion analysis in the prediction stage in the embodiment.
Detailed Description
Embodiments of the present invention will be described in detail below with reference to the accompanying drawings. The following examples are only for illustrating the technical solutions of the present invention more clearly, and therefore are only examples, and the protection scope of the present invention is not limited thereby. It is to be noted that, unless otherwise specified, technical or scientific terms used herein shall have the ordinary meaning as understood by those skilled in the art to which the invention pertains.
Embodiment:
a face emotion analysis method based on multitask learning and deep learning comprises the following steps:
training a face analysis model: learning a convolutional layer of a preset analysis task in a face library by using a convolutional neural network to obtain a face analysis model;
a human face region extraction step: acquiring a face image to be analyzed, analyzing the face image to be analyzed by using a face detection algorithm, and extracting the face regions in the face image to be analyzed. The face image or video is obtained by a camera, and each extracted face region serves as the input of the face analysis model for analyzing the different categories (such as the attributes, the action units, and the emotion space values).
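For illustration, a minimal sketch of this extraction step is given below. It assumes OpenCV's bundled Haar-cascade detector as a stand-in for the face detection algorithm; the invention does not mandate a specific detector, and any deep learning or machine learning face detection model may be substituted.

```python
# A minimal sketch of the face-region extraction step, assuming OpenCV's
# Haar-cascade detector as a stand-in for "any face detection algorithm".
import cv2

def extract_face_regions(image_path):
    """Detect faces in an image and return the cropped face regions."""
    image = cv2.imread(image_path)
    gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
    cascade = cv2.CascadeClassifier(
        cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
    boxes = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    # Each detected box becomes one face region to feed the analysis model.
    return [image[y:y + h, x:x + w] for (x, y, w, h) in boxes]
```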
A prediction step: and predicting the face image to be analyzed by using the face analysis model to obtain emotion information corresponding to each face area in the face image to be analyzed.
In the process of training an emotion space value estimation model, the traditional emotion analysis method first extracts low-order feature values (such as SIFT) from the face image and then trains the model. However, a model trained in this way offers no intuitive explanation of how the emotion space values are obtained. The invention therefore provides a deep-learning-based framework that uses a hierarchical model training process: the values of the emotion space are predicted through the face action units. Since the face action units reveal the motion changes of the eyes, eyebrows, and mouth on the face, predicting the emotion space values from these changes yields results that are easier to interpret, and the emotion space values can in turn be used to predict diversified emotions.
The invention provides a deep-learning-based framework and uses the concept of multi-task learning so that one analysis model (with a convolutional neural network as its core) sequentially learns feature values for the face attributes, the face actions, and the emotion space; in the prediction stage, a single analysis model can therefore handle the recognition and prediction of all three analysis tasks at the same time. After a face region is extracted from an image or video by a face detection algorithm (the face detector may be a model trained with any deep learning or machine learning method), the high-level attributes of the face region are analyzed further; the flow is outlined in Fig. 1. After the face region is obtained, hierarchical training is performed on its three related analysis tasks: after the deep model of the first analysis task is trained, the model parameters of the first task are frozen, the output of the first model is then used to train the second model network, and so on until the models of all three analysis tasks are trained (the hierarchical training architecture is outlined in Fig. 2). In this way, the invention uses multi-task learning to integrate several face-related tasks into the same analysis model and provides a three-stage training process in which the face attributes, the face action units, and the emotion space values are learned on one convolutional network and the feature values of the three different tasks are estimated, so that in the prediction stage the results of all three tasks can be obtained from a single model at once. The training stage comprises the following steps, sketched in outline immediately below and then detailed in turn:
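The staged procedure can be outlined in code. The sketch below is written in PyTorch as an illustrative assumption: the patent names no framework, and the module names, channel sizes, and layer counts are placeholders rather than the actual architecture.

```python
# A minimal sketch of the staged (hierarchical) training described above.
import torch.nn as nn

class MultiTaskFaceModel(nn.Module):
    def __init__(self):
        super().__init__()
        self.attr_convs = nn.Sequential(     # stage 1: face attributes
            nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU())
        self.action_convs = nn.Sequential(   # stage 2: face action units
            nn.Conv2d(64, 64, 3, padding=1), nn.ReLU())
        self.emotion_convs = nn.Sequential(  # stage 3: emotion space
            nn.Conv2d(64, 64, 3, padding=1), nn.ReLU())
        self.pool = nn.AdaptiveAvgPool2d(1)
        self.va_head = nn.Linear(64, 2)      # valence / arousal values

def freeze(module):
    # Freeze an already-trained stage so later stages only read its output.
    for p in module.parameters():
        p.requires_grad = False

model = MultiTaskFaceModel()
# Stage 1: train attr_convs on face-attribute labels (loop omitted), then:
freeze(model.attr_convs)
# Stage 2: train action_convs on action-unit labels on top of the frozen
# attribute features, then:
freeze(model.action_convs)
# Stage 3: train emotion_convs and va_head on valence/arousal targets.
```

In each stage the optimizer would be built over only the still-trainable parameters, so the earlier, frozen stages merely supply feature values to the later ones.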
the human face attribute analysis model is also the basis of the whole architecture, and as the whole model is used for processing the task taking the human face as the center, the characteristic values learned by the underlying convolutional neural network are similar according to the previous research, the characteristic values can be shared by the tasks related to different human faces, so that the size of the model can be reduced, and a plurality of related tasks can be processed by using a single model. In the step of training the face analysis model, the analysis task comprises face attributes; when the convolution layer of the face attribute is learned, the face attribute to be learned (such as small eyes, thick eyebrows, thick lips and the like) is marked, the face parts (such as eyes, eyebrows, mouths and the like) corresponding to the face attribute to be learned on the face region are respectively captured by different convolution layers, the first convolution layer is learned, the convolution layers learned according to the face parts have more representativeness and discrimination (specific blocks can be strengthened aiming at different attributes), and the first convolution layer can be used for guiding the learning process of an action unit.
The recognition of face action units and the prediction of their intensity have been widely studied, and in recent years deep-learning-based methods have performed well. In the invention, in the step of training the face analysis model, the analysis task comprises the face actions: when learning the convolution layer of the face actions, different sub-action convolution layers are preset for the different face actions; the corresponding sub-action convolution layers are added to the output layer of the first convolution layer according to the face part, and the second convolution layer is obtained through learning. Using the labeled face action unit data and building on the first convolution layer of the face attributes, a new sub-action convolution layer is added at its output layer to optimize the learning of face action unit recognition, so that the characteristic values of the face action units are learned on the basis of the face parts (that is, adding a corresponding sub-action convolution layer for each face action unit strengthens its feature expression capability). In this way the characteristics of each part are captured and the discriminative power for the action units is strengthened. The whole learning process is end-to-end, so no additional information (such as face key points) needs to be added.
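Below is a minimal sketch of the sub-action convolution layers attached to the output of the first (attribute) convolution layer. The AU-to-part mapping follows common FACS usage (AU1 inner brow raiser, AU6 cheek raiser, AU12 lip corner puller), but the selection of units and all layer shapes are illustrative assumptions.

```python
# A minimal sketch of the second (action-unit) convolutional stage: one
# sub-action convolution per face action unit, applied to the frozen
# stage-1 features of the matching face part.
import torch
import torch.nn as nn

AU_TO_PART = {"AU1": "eyebrows", "AU6": "eyes", "AU12": "mouth"}

class ActionUnitConvs(nn.Module):
    def __init__(self, in_channels=32):
        super().__init__()
        self.sub_action = nn.ModuleDict({
            au: nn.Conv2d(in_channels, 32, 3, padding=1)
            for au in AU_TO_PART})

    def forward(self, part_features):
        # part_features: dict of part name -> frozen stage-1 feature map.
        return {au: torch.relu(conv(part_features[AU_TO_PART[au]]))
                for au, conv in self.sub_action.items()}
```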
In the last stage, which predicts the values of the emotion space, the face action units and the emotion space can be used to identify several of the same emotion categories, so there is a relationship between the two models. Based on this observation, the face action units are extended to describe the values of the emotion space, so that more kinds of emotions can be recognized (such as eyes closed and smiling), and because the face action units describe the behavior of different parts of the face, the values of the emotion space can be interpreted more reasonably. In the invention, in the step of training the face analysis model, the analysis task comprises an emotion space: when learning the convolutional layer of the emotion space, a second convolutional layer is selected according to the evaluation value and the arousal degree, a preset sub-emotion convolutional layer is added to the output layer of the selected second convolutional layer, and a third convolutional layer is obtained through learning. In the step of training the face analysis model, an emotion space numerical value target is also preset, and this target is reached by learning the third convolutional layer and a preset fully connected layer, thereby obtaining the face analysis model. Concretely, using the emotion space value labels, the second convolutional layer is used as a basis to select the action units that are discriminative for Valence (the evaluation value) and Arousal (the arousal degree) respectively; a convolutional layer for the emotion space is then newly added to the neural network, and by learning this emotion space convolutional layer together with an additional fully connected layer, the goal of predicting the emotion space values is reached, and multiple categories of emotions can then be recognized.
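A minimal sketch of this third stage follows: action-unit feature maps assumed to be discriminative for Valence and Arousal are selected, passed through a sub-emotion convolution layer, pooled, and regressed by a fully connected layer to the two emotion space values. The particular AU selections are illustrative assumptions.

```python
# A minimal sketch of the third (emotion-space) convolutional stage and
# the fully connected regression head.
import torch
import torch.nn as nn

VALENCE_AUS = ("AU6", "AU12")   # assumed valence-discriminative units
AROUSAL_AUS = ("AU1", "AU6")    # assumed arousal-discriminative units

class EmotionSpaceHead(nn.Module):
    def __init__(self, in_channels=32):
        super().__init__()
        self.sub_emotion = nn.Conv2d(in_channels, 32, 3, padding=1)
        self.pool = nn.AdaptiveAvgPool2d(1)
        self.fc = nn.Linear(32 * 2, 2)  # outputs (valence, arousal)

    def forward(self, au_features):
        # au_features: dict of AU name -> stage-2 feature map (N, 32, H, W).
        def branch(aus):
            x = torch.stack([au_features[a] for a in aus]).mean(0)
            return self.pool(torch.relu(self.sub_emotion(x))).flatten(1)
        v = branch(VALENCE_AUS)  # (N, 32)
        a = branch(AROUSAL_AUS)  # (N, 32)
        return self.fc(torch.cat([v, a], dim=1))
```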
In the present invention, the face attribute unit, the face action unit and the emotion space unit are based on convolution layers and can be replaced by different convolution blocks.
The invention applies the concept of multi-task learning to the convolutional neural network, so that multiple face-related analysis tasks (face attribute recognition, face action unit recognition, and face emotion space value estimation) can be handled with the same model, which reduces the size of the model and shortens the recognition time. In addition, the invention assigns different convolutional layers to different parts of the face, so that the task of each convolutional layer is single and precise and a better recognition effect can be achieved. Moreover, because the framework is designed on the basis of multi-task learning and shares feature values across tasks, it also has good extensibility: other face-related tasks can easily be added to the framework according to different requirements.
An application scenario of the invention is given below. For example, several cameras can be installed in a store so that clear images of the service staff and the customers, the main targets, can be captured. After the cameras capture the face images, the emotion changes of the staff and the customers can be analyzed in the background through the architecture of the invention, so as to understand the interaction between them. By combining the emotion changes of both parties and referring to a predetermined rule between emotion and service satisfaction (for example, if a customer appears angry throughout, the service quality score of the staff member is reduced), the customers' satisfaction with the staff's service can be predicted automatically; a toy sketch of such a rule follows.
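The sketch below assumes the prediction module outputs one emotion label per captured frame; the base score and per-frame penalty are illustrative assumptions, not values taken from the invention.

```python
# A toy sketch of the emotion-to-satisfaction rule described above.
def service_satisfaction(customer_emotions, base_score=100, anger_penalty=5):
    """Deduct from a base score for every frame in which the customer
    appears angry, following the example rule in the scenario above."""
    angry_frames = sum(1 for e in customer_emotions if e == "angry")
    return max(0, base_score - anger_penalty * angry_frames)

# For example: service_satisfaction(["neutral", "angry", "angry", "happy"])
# returns 90.
```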
A face emotion analysis system based on multitask learning and deep learning, as shown in Fig. 1 and Fig. 2, is applicable to the above face emotion analysis method based on multitask learning and deep learning and comprises:
a training face analysis module: used for learning a convolutional layer of a preset analysis task in a face library by using a convolutional neural network to obtain a face analysis model;
a face detection module: used for acquiring a face image to be analyzed;
a face analysis module: used for analyzing the face image to be analyzed by using a face detection algorithm and extracting the face regions in the face image to be analyzed;
a prediction module: used for predicting the face image to be analyzed by using the face analysis model to obtain the emotion information corresponding to each face region in the face image to be analyzed.
The training face analysis module comprises a face attribute unit, and the analysis task comprises a face attribute; the face attribute unit is used for learning the convolution layer of the face attribute, including marking the face attributes to be learned; the face attribute unit captures the characteristic values of different face parts according to the face parts corresponding to the face attributes to be learned on the face area, and learns to obtain a first convolution layer.
The training face analysis module comprises a face action unit, and the analysis task comprises a face action; the face action unit is used for learning the convolution layer of the face action, including presetting different sub-action convolution layers for different face actions; the corresponding sub-action convolution layers are added to the output layer of the first convolution layer according to the face part, and a second convolution layer is obtained through learning.
The training face analysis module comprises an emotion space unit, and the analysis task comprises an emotion space; the emotion space unit is used for learning the convolutional layer of the emotion space, including selecting a second convolutional layer according to the evaluation value and the arousal degree, adding a preset sub-emotion convolutional layer into the output layer of the selected second convolutional layer, and learning to obtain a third convolutional layer.
The training face analysis module is preset with an emotion space numerical value target, and the emotion space numerical value target is achieved by learning the third convolution layer and the preset full connection layer, so that the face analysis model is obtained.
Finally, it should be noted that the above embodiments are only used to illustrate the technical solution of the present invention and not to limit it. Although the invention has been described in detail with reference to the foregoing embodiments, those skilled in the art will understand that the technical solutions described in the foregoing embodiments may still be modified, or some or all of their technical features may be equivalently replaced; such modifications and substitutions do not depart from the spirit and scope of the present invention, and they should all be covered by the scope of the claims of the present invention.

Claims (4)

1. A face emotion analysis method based on multitask learning and deep learning is characterized by comprising the following steps:
training a face analysis model: learning a convolutional layer of a preset analysis task in a face library by using a convolutional neural network to obtain a face analysis model;
a human face region extraction step: acquiring a face image to be analyzed, analyzing the face image to be analyzed by using a face detection algorithm, and extracting a face region in the face image to be analyzed;
a prediction step: predicting the face image to be analyzed by using the face analysis model to obtain emotion information corresponding to each face area in the face image to be analyzed;
in the step of training the face analysis model, the analysis task comprises face attributes; when the convolution layer of the face attributes is learned, marking the face attributes to be learned, capturing the characteristic values of different face parts according to the face parts corresponding to the face attributes to be learned on the face area, and learning to obtain a first convolution layer;
in the step of training the face analysis model, the analysis task comprises a face action; when learning the convolution layer of the face action, presetting different sub-action convolution layers according to different face actions, adding corresponding sub-action convolution layers in the output layer of the first convolution layer according to the face part, and learning to obtain a second convolution layer;
in the step of training the face analysis model, the analysis task comprises an emotion space; and when learning the convolutional layer of the emotion space, selecting a second convolutional layer according to the evaluation value and the arousal degree, adding a preset sub-emotion convolutional layer into an output layer of the selected second convolutional layer, and learning to obtain a third convolutional layer.
2. The face emotion analysis method based on multitask learning and deep learning according to claim 1,
in the step of training the face analysis model, an emotion space numerical value target is preset, and the emotion space numerical value target is achieved by learning the third convolution layer and a preset full connection layer, so that the face analysis model is obtained.
3. A face emotion analysis system based on multitask learning and deep learning, which is applicable to the face emotion analysis method based on multitask learning and deep learning of claim 1, and comprises:
a training face analysis module: used for learning a convolutional layer of a preset analysis task in a face library by using a convolutional neural network to obtain a face analysis model;
a face detection module: used for acquiring a face image to be analyzed;
a face analysis module: used for analyzing the face image to be analyzed by using a face detection algorithm and extracting the face regions in the face image to be analyzed;
a prediction module: used for predicting the face image to be analyzed by using the face analysis model to obtain emotion information corresponding to each face region in the face image to be analyzed;
the training face analysis module comprises a face attribute unit, and the analysis task comprises a face attribute; the face attribute unit is used for learning the convolution layer of the face attribute, including marking the face attributes to be learned; the face attribute unit captures the characteristic values of different face parts according to the face parts corresponding to the face attributes to be learned on the face region, and learns to obtain a first convolution layer;
the training face analysis module comprises a face action unit, and the analysis task comprises a face action; the face action unit is used for learning the convolution layer of the face action, including presetting different sub-action convolution layers for different face actions; the corresponding sub-action convolution layers are added into the output layer of the first convolution layer according to the face part, and a second convolution layer is obtained through learning;
the training face analysis module comprises an emotion space unit, and the analysis task comprises an emotion space; the emotion space unit is used for learning the convolutional layer of the emotion space, including selecting a second convolutional layer according to the evaluation value and the arousal degree, adding a preset sub-emotion convolutional layer into the output layer of the selected second convolutional layer, and learning to obtain a third convolutional layer.
4. The face emotion analysis system based on multitask learning and deep learning according to claim 3,
the training face analysis module is preset with an emotion space numerical value target, and the emotion space numerical value target is achieved by learning the third convolution layer and the preset full connection layer, so that the face analysis model is obtained.
CN201710602227.7A 2017-07-21 2017-07-21 Face emotion analysis method and system based on multi-task learning and deep learning Active CN107463888B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201710602227.7A CN107463888B (en) 2017-07-21 2017-07-21 Face emotion analysis method and system based on multi-task learning and deep learning

Publications (2)

Publication Number Publication Date
CN107463888A (en) 2017-12-12
CN107463888B (en) 2020-05-19

Family

ID=60543945

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201710602227.7A Active CN107463888B (en) 2017-07-21 2017-07-21 Face emotion analysis method and system based on multi-task learning and deep learning

Country Status (1)

Country Link
CN (1) CN107463888B (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
TWI811605B (en) * 2020-12-31 2023-08-11 宏碁股份有限公司 Method and system for mental index prediction

Families Citing this family (12)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108182389B (en) * 2017-12-14 2021-07-30 华南师范大学 User data processing method based on big data and deep learning and robot system
CN108491764B (en) * 2018-03-05 2020-03-17 竹间智能科技(上海)有限公司 Video face emotion recognition method, medium and device
CN108846343B (en) * 2018-06-05 2022-05-13 北京邮电大学 Multi-task collaborative analysis method based on three-dimensional video
TR201818738A2 (en) * 2018-12-06 2019-02-21 Turkcell Teknoloji Arastirma Ve Gelistirme Anonim Sirketi A SYSTEM PROVIDING REAL-TIME EMOTION ANALYSIS IN IMMEDIATE MESSAGING APPLICATIONS
CN109685023A (en) * 2018-12-27 2019-04-26 深圳开立生物医疗科技股份有限公司 A kind of facial critical point detection method and relevant apparatus of ultrasound image
HK1256665A2 (en) * 2018-12-28 2019-09-27 K11 Group Ltd Service reminder system and method based on user's instant location
CN109934173B (en) * 2019-03-14 2023-11-21 腾讯科技(深圳)有限公司 Expression recognition method and device and electronic equipment
CN110035271B (en) * 2019-03-21 2020-06-02 北京字节跳动网络技术有限公司 Fidelity image generation method and device and electronic equipment
CN110287792B (en) * 2019-05-23 2021-05-04 华中师范大学 Real-time analysis method for learning state of students in classroom in natural teaching environment
CN110751016B (en) * 2019-09-02 2023-04-11 合肥工业大学 Facial movement unit double-flow feature extraction method for emotional state monitoring
CN113139439B (en) * 2021-04-06 2022-06-10 广州大学 Online learning concentration evaluation method and device based on face recognition
CN113642541B (en) * 2021-10-14 2022-02-08 环球数科集团有限公司 Face attribute recognition system based on deep learning

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US8582807B2 (en) * 2010-03-15 2013-11-12 Nec Laboratories America, Inc. Systems and methods for determining personal characteristics
CN105654049A (en) * 2015-12-29 2016-06-08 中国科学院深圳先进技术研究院 Facial expression recognition method and device
CN106529402A (en) * 2016-09-27 2017-03-22 中国科学院自动化研究所 Multi-task learning convolutional neural network-based face attribute analysis method
CN106650621A (en) * 2016-11-18 2017-05-10 广东技术师范学院 Deep learning-based emotion recognition method and system


Also Published As

Publication number Publication date
CN107463888A (en) 2017-12-12

Similar Documents

Publication Publication Date Title
CN107463888B (en) Face emotion analysis method and system based on multi-task learning and deep learning
CN111370020B (en) Method, system, device and storage medium for converting voice into lip shape
Wang et al. Deep learning-based human motion recognition for predictive context-aware human-robot collaboration
CN112232293B (en) Image processing model training method, image processing method and related equipment
CN109086873B (en) Training method, recognition method and device of recurrent neural network and processing equipment
Lao et al. Automatic video-based human motion analyzer for consumer surveillance system
KR20200145827A (en) Facial feature extraction model learning method, facial feature extraction method, apparatus, device, and storage medium
CN108280426B (en) Dark light source expression identification method and device based on transfer learning
CN109063626B (en) Dynamic face recognition method and device
GB2585261A (en) Methods for generating modified images
CN112383824A (en) Video advertisement filtering method, device and storage medium
Lu Segmentation improved label propagation for semi-supervised anomaly detection in fused magnesia furnace process
CN113139452A (en) Method for detecting behavior of using mobile phone based on target detection
Sai Image classification for user feedback using Deep Learning Techniques
CN116188846A (en) Equipment fault detection method and device based on vibration image
CN114245232A (en) Video abstract generation method and device, storage medium and electronic equipment
CN114038034A (en) Virtual face selection model training method, online video psychological consultation privacy protection method, storage medium and psychological consultation system
JP4449483B2 (en) Image analysis apparatus, image analysis method, and computer program
Zhang et al. Prediction of human actions in assembly process by a spatial-temporal end-to-end learning model
CN113557522A (en) Image frame pre-processing based on camera statistics
CN111428813A (en) Panel number identification and pressing method based on deep learning
CN111476131A (en) Video processing method and device
Gavade et al. Facial Expression Recognition in Videos by learning Spatio-Temporal Features with Deep Neural Networks
Kumar et al. Machine Learning Approach for Gesticulation System Using Hand
CN112101331B (en) Security video fusion scheduling method and device

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant