CN113361304A - Service evaluation method and device based on expression recognition and storage equipment - Google Patents

Service evaluation method and device based on expression recognition and storage equipment

Info

Publication number
CN113361304A
CN113361304A (application CN202010152681.9A)
Authority
CN
China
Prior art keywords
image sequence
expression
expression image
service
neural network
Prior art date
Legal status
Pending
Application number
CN202010152681.9A
Other languages
Chinese (zh)
Inventor
左骏
张冲
黄建强
熊贤剑
Current Assignee
Shanghai Zhuofan Information Technology Co ltd
Original Assignee
Shanghai Zhuofan Information Technology Co ltd
Priority date
Filing date
Publication date
Application filed by Shanghai Zhuofan Information Technology Co ltd filed Critical Shanghai Zhuofan Information Technology Co ltd
Priority to CN202010152681.9A
Publication of CN113361304A
Legal status: Pending

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 Pattern recognition
    • G06F18/20 Analysing
    • G06F18/24 Classification techniques
    • G06F18/241 Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
    • G06F18/2415 Classification techniques based on parametric or probabilistic models, e.g. based on likelihood ratio or false acceptance rate versus a false rejection rate
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/04 Architecture, e.g. interconnection topology
    • G06N3/045 Combinations of networks
    • G06N3/08 Learning methods
    • G06Q INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES
    • G06Q30/00 Commerce
    • G06Q30/02 Marketing; Price estimation or determination; Fundraising
    • G06Q30/0282 Rating or review of business operators or products

Abstract

The invention provides a service evaluation method, a device, and a storage device based on expression recognition. A first expression image sequence and a second expression image sequence of a service object are acquired; the two sequences are input into a trained convolutional neural network to obtain their respective classification results; corresponding emotion score values are obtained based on the classification results; and the difference between the score value of the first expression image sequence and that of the second expression image sequence is calculated, with the service evaluated according to this difference.

Description

Service evaluation method and device based on expression recognition and storage equipment
Technical Field
The invention relates to the technical field of electronic government affairs, and in particular to a service evaluation method, device, and storage device based on expression recognition.
Background
With the continuous advance of the "streamlining administration, delegating power, and improving services" reform, and in order to promote the improvement of government affairs window service quality, artificial intelligence technology is needed to provide an evaluation reference for the service of window staff.
However, existing expression recognition suffers from problems such as insufficiently standardized data sets and models in need of improvement, leading to poor recognition accuracy. Meanwhile, the traditional seven-category expression classification is noticeably skewed toward negative categories, which is not conducive to its direct use in service evaluation. A method with high classification accuracy that is suitable for service evaluation is therefore needed.
Disclosure of Invention
The invention aims to provide a service evaluation method, a service evaluation device and storage equipment based on expression recognition, so as to solve the problems in the prior art.
In order to achieve the above object, an aspect of the present invention provides a service evaluation method based on expression recognition, including the following steps:
acquiring a first expression image sequence and a second expression image sequence of a service object;
inputting the first expression image sequence and the second expression image sequence into the trained convolutional neural network to respectively obtain classification results of the first expression image sequence and the second expression image sequence;
acquiring corresponding emotion score values based on the classification results of the first expression image sequence and the second expression image sequence;
and calculating a difference value between the score value of the first expression image sequence and the score value of the second expression image sequence, and evaluating the service according to the difference value.
Further, the expression classification result of the convolutional neural network comprises a plurality of emotion levels, and the emotion levels are used for evaluating negative and/or positive emotions of the service object.
Further, the method also comprises the following steps:
setting a score value of the emotion grade according to the emotion grade;
and calculating the difference of the score values between the first expression image sequence and the second expression image sequence according to the score values of the emotion grades.
Further, the method also comprises the following steps:
when the difference is greater than a first threshold, the service is rated as very satisfactory;
when the difference is less than or equal to the first threshold and greater than a second threshold, the service is rated as satisfactory;
when the difference is less than or equal to the second threshold and greater than a third threshold, the service is rated as general;
when the difference is less than or equal to the third threshold and greater than a fourth threshold, the service is rated as relatively unsatisfactory;
when the difference is less than or equal to the fourth threshold, the service is rated as very unsatisfactory.
Further, the training process of the convolutional neural network comprises:
processing the data set pictures by using a generative adversarial network to improve their resolution and labeling quality, and expanding the expression category labels from seven categories to eight categories;
inputting a facial expression data set into a convolutional neural network for training, wherein the convolutional neural network selects the characteristics of the expression according to the classification label;
and training a deep learning classifier by adopting an extreme learning machine to complete the classification of the facial expression data.
Further, a stochastic pooling algorithm is adopted by a pooling layer of the convolutional neural network to perform stochastic sampling calculation on the feature points.
On the other hand, the invention also provides a service evaluation device based on expression recognition, which comprises:
the image acquisition module is used for acquiring a first expression image sequence and a second expression image sequence of the service object;
the image scoring module is used for classifying and identifying the first expression image sequence and the second expression image sequence by adopting a trained convolutional neural network so as to obtain corresponding emotion grade scores;
and the image comparison module is used for calculating the difference value between the score value of the first expression image sequence and the score value of the second expression image sequence and evaluating the service according to the difference value.
In another aspect, the present invention further provides a storage device that stores a plurality of instructions suitable for being loaded by a processor to perform the steps of the above service evaluation method based on expression recognition.
The service evaluation method, device, and storage device based on expression recognition provided by the invention acquire a first expression image sequence and a second expression image sequence of a service object; input the two sequences into a trained convolutional neural network to obtain their respective classification results; obtain corresponding emotion score values based on the classification results; and calculate the difference between the score value of the first expression image sequence and that of the second expression image sequence, evaluating the service according to this difference.
Drawings
In order to more clearly illustrate the embodiments of the present invention or the technical solutions in the prior art, the drawings used in the description of the embodiments or the prior art will be briefly described below, and it is obvious that the drawings in the following description are only some embodiments of the present invention, and for those skilled in the art, other drawings can be obtained according to these drawings without creative efforts.
Fig. 1 is a flowchart of a service evaluation method based on expression recognition according to an embodiment of the present invention.
Fig. 2 is a network structure diagram of a convolutional neural network according to an embodiment of the present invention.
Fig. 3 is a service evaluation apparatus based on expression recognition according to an embodiment of the present invention.
Detailed Description
The technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are only a part of the embodiments of the present invention, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
In the following description, reference is made to "some embodiments" which describe a subset of all possible embodiments, but it is understood that "some embodiments" may be the same subset or different subsets of all possible embodiments, and may be combined with each other without conflict.
Where terms such as "first" and "second" appear in the specification, the following note applies: "first", "second", and "third" are used merely to distinguish similar items and do not denote a particular ordering; it is to be understood that they may be interchanged in specific order or sequence where appropriate, so that the embodiments of the application described herein can be practiced in orders other than those illustrated or described herein.
Unless defined otherwise, all technical and scientific terms used herein have the same meaning as commonly understood by one of ordinary skill in the art to which this application belongs. The terminology used herein is for the purpose of describing embodiments of the present application only and is not intended to be limiting of the application.
A service evaluation method, apparatus, and storage device based on expression recognition according to embodiments of the present invention will be described below with reference to the accompanying drawings, beginning with the service evaluation method itself.
Fig. 1 is a flowchart of a service evaluation method based on expression recognition according to an embodiment of the present invention. As shown in fig. 1, the evaluation method includes the steps of:
step S1 acquires a first expression image sequence and a second expression image sequence of the service object.
The service object refers to an actual person in real life. In some embodiments, the scene in which the person is located is a service window, such as a government affairs or bank window, and the person receiving window service may be a man, a woman, an elderly person, a child, and so on.
Those skilled in the art will understand that the first expression image sequence and the second expression image sequence may be acquired by any device in the prior art; regardless of whether a video camera, mobile phone, infrared thermal imaging device, sensor, or other device is used for the specific acquisition, it falls within the scope of the present invention as long as the expression images of the target can be acquired.
In some embodiments, the first expression image sequence and the second expression image sequence can be acquired through a camera, with a plurality of expression images generated by shooting the facial expressions of the service object. Three expression images may be selected in order to eliminate the scoring influence caused by the differing facial features of different objects. The expression images may be captured as screenshots from a video or generated directly as photographs.
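As a minimal sketch of this acquisition step, assuming OpenCV and a camera at device index 0 (both illustrative choices not fixed by this disclosure), a three-frame expression image sequence could be captured as follows:

```python
# Minimal sketch: capture a three-frame expression image sequence from a
# service-window camera with OpenCV. The device index and frame count are
# illustrative assumptions.
import cv2

def capture_expression_sequence(device_index: int = 0, num_frames: int = 3):
    """Grab num_frames face images from the camera as one expression sequence."""
    cap = cv2.VideoCapture(device_index)
    frames = []
    try:
        while len(frames) < num_frames:
            ok, frame = cap.read()
            if not ok:
                break  # camera unavailable or stream ended
            frames.append(frame)
    finally:
        cap.release()
    return frames

# One sequence would be captured at the start of service and a second at the
# end, giving the first and second expression image sequences.
```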
Step S2 inputs the first expression image sequence and the second expression image sequence into the trained convolutional neural network, and obtains classification results of the first expression image sequence and the second expression image sequence, respectively.
As will be appreciated by those skilled in the art, a convolutional neural network is a kind of artificial neural network: an algorithmic mathematical model that mimics the behavior of biological neural networks to perform distributed parallel information processing. Such a network processes information by adjusting the interconnections among a large number of internal nodes, depending on the complexity of the system. Training the model to establish a relationship between the sampling parameter information and the attribute parameter information of a preset model object is within the scope of the present invention.
Those skilled in the art will appreciate that the training data used for the model is a facial expression data set, which may include test images, public verification images, and private verification images. The facial expression data set has a plurality of expression categories, each corresponding to an expression label.
In some embodiments, the FER2013 facial expression data set is used as the training data set. To perfect the expression data set, a generative adversarial network is used to classify and label the FER2013 facial expression data set, and the expression labels are divided into eight labels: anger, disgust, fear, sadness, surprise, neutrality, laugh, and smile.
Specifically, the objective function of the generative adversarial network is:

min_G max_D V(D, G) = E_{x~p_data(x)}[log D(x)] + E_{z~p_z(z)}[log(1 - D(G(z)))]

where x is an expression picture; z is the noise input to the generator; G(z) is the picture generated by the generator; D(x) is the probability, judged by the discriminator, that the expression picture x matches its classification label; and D(G(z)) is the probability, judged by the discriminator, that the picture generated by the generator matches a classification label.
In the above objective function, the aim of the discriminator is to maximize the objective: the stronger the ability of D, the larger D(x), the smaller D(G(z)), and the larger V(D, G), so D is obtained at the maximum of the objective function. The aim of the generator G is to make the generated pictures ever closer to real pictures, i.e. G is obtained when V(D, G) is smallest.
From the above objective function, its solution is divided into two steps:
(1) fix the generator G, and obtain the discriminator D by maximizing the objective function;
(2) fix the discriminator D, and obtain the generator G by minimizing the objective function.
This iterative cycle is repeated, finally yielding a generator G whose generated results the discriminator D can no longer detect, which is then used to classify and annotate the expression labels.
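The two-step cycle can be sketched as an alternating gradient loop; the following PyTorch fragment is an illustration only, with toy fully connected networks and hyper-parameters that are assumptions rather than values from this disclosure:

```python
# Alternating GAN optimization: step (1) updates the discriminator D with the
# generator G fixed; step (2) updates G with D fixed.
import torch
import torch.nn as nn

noise_dim, img_dim = 64, 48 * 48  # assumed sizes for illustration
G = nn.Sequential(nn.Linear(noise_dim, 256), nn.ReLU(),
                  nn.Linear(256, img_dim), nn.Tanh())
D = nn.Sequential(nn.Linear(img_dim, 256), nn.LeakyReLU(0.2),
                  nn.Linear(256, 1), nn.Sigmoid())
opt_d = torch.optim.Adam(D.parameters(), lr=2e-4)
opt_g = torch.optim.Adam(G.parameters(), lr=2e-4)
bce = nn.BCELoss()

def train_step(real_batch: torch.Tensor):
    b = real_batch.size(0)
    # (1) Fix G, maximize V(D, G) over D (minimize the equivalent BCE loss).
    fake = G(torch.randn(b, noise_dim)).detach()
    loss_d = bce(D(real_batch), torch.ones(b, 1)) + bce(D(fake), torch.zeros(b, 1))
    opt_d.zero_grad(); loss_d.backward(); opt_d.step()
    # (2) Fix D, minimize V(D, G) over G (push D to judge fakes as real).
    loss_g = bce(D(G(torch.randn(b, noise_dim))), torch.ones(b, 1))
    opt_g.zero_grad(); loss_g.backward(); opt_g.step()
    return loss_d.item(), loss_g.item()
```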
In some embodiments, when performing model training with the convolutional neural network, expression features are first extracted according to the classification labels of the training data set, and an extreme learning machine is then used as the linear classifier of the convolutional neural network to complete the classification of the facial expression data. The pooling layer of the convolutional neural network uses a stochastic pooling algorithm to perform stochastic sampling over the feature points.
Fig. 2 is a network structure diagram of a convolutional neural network according to an embodiment of the present invention.
Specifically, as shown in fig. 2, the convolutional neural network used in the present invention is formed by stacking convolutional layers, pooling layers, normalization layers, and fully connected layers, with a classifier at the end to implement classification. In the figure, C1 and C2 denote that layers 1 and 3 are convolutional layers, S1 and S2 denote that layers 2 and 4 are pooling layers, and FC denotes that layer 5 is a fully connected layer.
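A compact sketch of this five-layer structure in PyTorch follows; the channel counts, kernel sizes, and 48 × 48 grayscale input are assumptions, max pooling stands in for the stochastic pooling described below, and the normalization layers are omitted for brevity:

```python
# Sketch of the C1-S1-C2-S2-FC stack described above (illustrative sizes).
import torch.nn as nn

class ExpressionCNN(nn.Module):
    def __init__(self, num_classes: int = 8):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 32, kernel_size=5), nn.ReLU(),   # C1 (layer 1)
            nn.MaxPool2d(2),                              # S1 (layer 2)
            nn.Conv2d(32, 64, kernel_size=5), nn.ReLU(),  # C2 (layer 3)
            nn.MaxPool2d(2),                              # S2 (layer 4)
        )
        self.fc = nn.Linear(64 * 9 * 9, num_classes)      # FC (layer 5), 48x48 input

    def forward(self, x):
        return self.fc(self.features(x).flatten(1))
```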
Further, as shown in fig. 2, C1 and C2 are convolutional layers, and the first convolutional layer is typically connected directly to the input image. If the input is an n × n expression image matrix X, the convolution kernel is an m × m matrix K, and the kernel moves with step F = 1, then the feature map after the convolution calculation has size (n - m + 1) × (n - m + 1), and the convolution calculation can be expressed as:

x_j^l = f( Σ_{i∈M_j} x_i^{l-1} * k_{ij}^l + b_j^l )

where x_i^{l-1} denotes a feature map output by the previous layer, M_j is the set of input feature maps, k_{ij}^l is the convolution kernel, and b_j^l is the bias of the j-th feature map of layer l; f(x) is an activation function, and the convolved values are usually processed further by the activation function. Here the activation function is the ReLU function, whose expression is:

f(x) = max(0, x)
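The size formula can be checked directly; the following NumPy fragment uses n = 48 and m = 5 as arbitrary example values:

```python
# 'Valid' convolution of an n x n input with an m x m kernel at stride F = 1
# yields an (n-m+1) x (n-m+1) feature map, followed by ReLU.
import numpy as np

def conv2d_valid_relu(X: np.ndarray, K: np.ndarray) -> np.ndarray:
    n, m = X.shape[0], K.shape[0]
    out = np.empty((n - m + 1, n - m + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(X[i:i + m, j:j + m] * K)
    return np.maximum(out, 0.0)  # f(x) = max(0, x)

X = np.random.randn(48, 48)  # n = 48
K = np.random.randn(5, 5)    # m = 5
assert conv2d_valid_relu(X, K).shape == (44, 44)  # 48 - 5 + 1 = 44
```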
in some embodiments of the present invention, the pooling layer employs a random pooling algorithm, and the random pooling algorithm performs probability-based sampling with the weight of each element in the pooling domain as a probability, and the calculating method is as follows:
Figure BDA0002403002080000081
Figure BDA0002403002080000084
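For a single pooling region this sampling can be sketched in NumPy as follows (the 2 × 2 region is an illustrative example):

```python
# Stochastic pooling over one region: normalize activations into probabilities
# and sample one activation accordingly.
import numpy as np

def stochastic_pool(region: np.ndarray, rng: np.random.Generator) -> float:
    a = np.maximum(region.ravel(), 0.0)  # ReLU outputs are non-negative
    total = a.sum()
    if total == 0.0:
        return 0.0                       # degenerate region: all zeros
    p = a / total                        # p_i = a_i / sum_k a_k
    return float(rng.choice(a, p=p))     # s_j = a_l with l ~ P(p_1, ..., p_|R_j|)

rng = np.random.default_rng(0)
print(stochastic_pool(np.array([[0.1, 0.4], [0.2, 0.3]]), rng))
```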
furthermore, the invention adopts an extreme learning machine as a classifier of the convolutional neural network, determines the correlation between the hidden layer unit output and the classification result by a method of calculating information gain, and determines the number of the hidden layer units by setting a threshold value through the Otsu method.
Suppose there are m classes T = (t_1, t_2, …, t_m). The information gain that hidden layer unit h_k brings to the classification is:

IG(h_k) = -Σ_{i=1}^m p(t_i) log p(t_i) + Σ_{l=1}^D p(h_{k,l}) Σ_{i=1}^m p(t_i | h_{k,l}) log p(t_i | h_{k,l})

where p(t_i) is the classification probability of each class; D denotes the number of divisions of the hidden layer unit's output region (since the output is a continuous value between 0 and 1, it needs to be divided into segmented regions); p(h_{k,l}) is the probability that the output of the k-th hidden layer unit belongs to the l-th division; and p(t_i | h_{k,l}) is the probability that the classification result is t_i when the hidden layer unit state is h_{k,l}.
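Under the reconstruction above, the information gain of one hidden layer unit could be computed as follows (a NumPy sketch over toy data; the binning scheme and D = 4 are assumptions):

```python
# Information gain of one hidden unit: class entropy H(T) minus the entropy
# of T conditioned on which of the D output divisions the unit falls into.
import numpy as np

def hidden_unit_info_gain(outputs: np.ndarray, labels: np.ndarray, D: int = 4) -> float:
    bins = np.minimum((outputs * D).astype(int), D - 1)  # division index per sample
    classes = np.unique(labels)
    p_t = np.array([(labels == t).mean() for t in classes])
    h_T = -np.sum(p_t * np.log2(p_t))                    # H(T)
    h_T_given_h = 0.0
    for l in range(D):
        mask = bins == l
        p_hl = mask.mean()                               # p(h_{k,l})
        if p_hl == 0.0:
            continue
        p_t_hl = np.array([(labels[mask] == t).mean() for t in classes])
        p_t_hl = p_t_hl[p_t_hl > 0]
        h_T_given_h += p_hl * -np.sum(p_t_hl * np.log2(p_t_hl))
    return h_T - h_T_given_h                             # IG(h_k)

rng = np.random.default_rng(1)
print(hidden_unit_info_gain(rng.random(100), rng.integers(0, 8, 100)))
```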
Step S3 obtains a corresponding emotion score value based on the classification result of the first expression image sequence and the second expression image sequence.
The expression classification result of the convolutional neural network comprises a plurality of emotion levels, which are used for evaluating the negative and/or positive emotions of the service object. Specifically, after the classification results of the first expression image sequence and the second expression image sequence are obtained, their classification labels are obtained. The classification labels are divided into 8 emotion levels, ordered from negative to positive emotion, and each emotion level has a corresponding score: anger -25 points, disgust -20 points, fear -15 points, sadness -10 points, surprise -5 points, neutral 0 points, laugh 10 points, and smile 20 points.
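This fixed mapping amounts to a lookup table; a sketch follows, with English label strings standing in for the eight categories, and a sequence scored by averaging its per-image scores (the averaging rule is an assumption, since the disclosure does not fix how a sequence's images are aggregated):

```python
# Emotion-level scores from the embodiment above.
EMOTION_SCORES = {
    "anger": -25, "disgust": -20, "fear": -15, "sadness": -10,
    "surprise": -5, "neutral": 0, "laugh": 10, "smile": 20,
}

def sequence_score(labels) -> float:
    """Score one expression image sequence from its per-image class labels."""
    return sum(EMOTION_SCORES[label] for label in labels) / len(labels)

print(sequence_score(["neutral", "smile", "laugh"]))  # -> 10.0
```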
Step S4 calculates a difference between the score value of the first expression image sequence and the score value of the second expression image sequence, and evaluates the service according to the difference.
Wherein the service is rated as very satisfactory when the difference is greater than a first threshold; as satisfactory when the difference is less than or equal to the first threshold and greater than a second threshold; as general when the difference is less than or equal to the second threshold and greater than a third threshold; as relatively unsatisfactory when the difference is less than or equal to the third threshold and greater than a fourth threshold; and as very unsatisfactory when the difference is less than or equal to the fourth threshold.
In some embodiments of the invention, if the difference is positive and its absolute value exceeds 20 points, the service is rated as very satisfactory; if the difference is positive and its absolute value exceeds 10 points, the service is rated as satisfactory; if the difference is negative and its absolute value exceeds 10 points, the service is rated as relatively unsatisfactory; if the difference is negative and its absolute value exceeds 20 points, the service is rated as very unsatisfactory; and if the absolute value of the difference is within 10 points, the service is rated as general.
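Read together with the five-level rule above, these example values correspond to thresholds of 20, 10, -10, and -20 points, which can be sketched as:

```python
# Five-level rating rule with the thresholds implied by the example values.
def rate_service(diff: float) -> str:
    if diff > 20:
        return "very satisfactory"
    if diff > 10:
        return "satisfactory"
    if diff > -10:
        return "general"         # |diff| within 10 points
    if diff > -20:
        return "relatively unsatisfactory"
    return "very unsatisfactory"

print(rate_service(25))   # -> "very satisfactory"
print(rate_service(-15))  # -> "relatively unsatisfactory"
```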
As shown in fig. 3, the present invention further provides a service evaluation device based on expression recognition, including:
the image acquisition module 101 is used for acquiring a first expression image sequence and a second expression image sequence of a service object;
the image scoring module 102 is used for classifying and identifying the first expression image sequence and the second expression image sequence by adopting a trained convolutional neural network so as to obtain corresponding emotion grade scores;
and the image comparison module 103 is used for calculating a difference value between the score value of the first expression image sequence and the score value of the second expression image sequence, and evaluating the service according to the difference value.
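Taken together, the three modules could compose into a single evaluation pass, sketched below by reusing the helpers sketched earlier; classify_sequence stands in for inference with the trained convolutional neural network and is an assumption, as is the sign convention that an emotional improvement during service yields a positive difference:

```python
# End-to-end sketch: acquire two sequences, classify them, and rate the service.
def evaluate_service(classify_sequence):
    first_seq = capture_expression_sequence()         # image acquisition module
    # ... window service takes place ...
    second_seq = capture_expression_sequence()
    first_score = sequence_score(classify_sequence(first_seq))    # image scoring
    second_score = sequence_score(classify_sequence(second_seq))  # module
    diff = second_score - first_score                 # image comparison module
    return rate_service(diff)
```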
Specifically, an embodiment of the present invention further provides an electronic device, including: a processor; and a memory for storing processor-executable instructions; wherein the processor is configured to perform the above method.
The electronic device may be provided as a terminal, server, or other form of device.
Computer storage media for embodiments of the invention may employ any combination of one or more computer-readable media. The computer readable medium may be a computer readable signal medium or a computer readable storage medium. A computer readable storage medium may be, for example, but not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any combination of the foregoing. More specific examples (a non-exhaustive list) of the computer readable storage medium would include the following: an electrical connection having one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing. In the context of this document, a computer readable storage medium may be any tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device.
A computer readable signal medium may include a propagated data signal with computer readable program code embodied therein, for example, in baseband or as part of a carrier wave. Such a propagated data signal may take many forms, including, but not limited to, electro-magnetic, optical, or any suitable combination thereof. A computer readable signal medium may also be any computer readable medium that is not a computer readable storage medium and that can communicate, propagate, or transport a program for use by or in connection with an instruction execution system, apparatus, or device.
Program code embodied on a computer readable medium may be transmitted using any appropriate medium, including but not limited to wireless, wireline, optical fiber cable, RF, etc., or any suitable combination of the foregoing.
Computer program code for carrying out operations for aspects of the present invention may be written in any combination of one or more programming languages, including an object oriented programming language such as Java, Smalltalk, C++, and conventional procedural programming languages, such as the "C" programming language or similar programming languages. The program code may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer, or entirely on the remote computer or terminal. In the case of a remote computer, the remote computer may be connected to the user's computer through any type of network, including a Local Area Network (LAN) or a Wide Area Network (WAN), or the connection may be made to an external computer (for example, through the Internet using an Internet service provider).
Although embodiments of the present invention have been shown and described above, it is understood that the above embodiments are exemplary and should not be construed as limiting the present invention, and that variations, modifications, substitutions and alterations can be made to the above embodiments by those of ordinary skill in the art within the scope of the present invention. Any modification, equivalent replacement, or improvement made within the spirit and principle of the present invention should be included in the protection scope of the present invention.

Claims (8)

1. A service evaluation method based on expression recognition is characterized by comprising the following steps:
acquiring a first expression image sequence and a second expression image sequence of a service object;
inputting the first expression image sequence and the second expression image sequence into the trained convolutional neural network to respectively obtain classification results of the first expression image sequence and the second expression image sequence;
acquiring corresponding emotion score values based on the classification results of the first expression image sequence and the second expression image sequence;
and calculating a difference value between the score value of the first expression image sequence and the score value of the second expression image sequence, and evaluating the service according to the difference value.
2. The service evaluation method based on expression recognition of claim 1, wherein the expression classification result of the convolutional neural network comprises a plurality of emotion levels, and the emotion levels are used for evaluating negative and/or positive emotions of the service object.
3. The service evaluation method based on expression recognition according to claim 2, further comprising:
setting a score value of the emotion grade according to the emotion grade;
and calculating the difference of the score values between the first expression image sequence and the second expression image sequence according to the score values of the emotion grades.
4. The service evaluation method based on expression recognition as claimed in claim 3, further comprising:
when the difference is greater than a first threshold, the service is rated as very satisfactory;
when the difference is less than or equal to the first threshold and greater than a second threshold, the service is rated as satisfactory;
when the difference is less than or equal to the second threshold and greater than a third threshold, the service is rated as general;
when the difference is less than or equal to the third threshold and greater than a fourth threshold, the service is rated as relatively unsatisfactory;
when the difference is less than or equal to the fourth threshold, the service is rated as very unsatisfactory.
5. The service evaluation method based on expression recognition according to claim 1, wherein the training process of the convolutional neural network comprises:
processing the data set pictures by using a generative adversarial network, improving the resolution and the labeling quality, and expanding the expression category labels from seven categories to eight categories;
inputting a facial expression data set into a convolutional neural network for training, wherein the convolutional neural network selects the characteristics of the expression according to the classification label;
and training a deep learning classifier by adopting an extreme learning machine to complete the classification of the facial expression data.
6. The service evaluation method based on expression recognition according to claim 5, wherein the pooling layer of the convolutional neural network adopts a stochastic pooling algorithm to perform stochastic sampling computation on the feature points.
7. A service evaluation device based on expression recognition is characterized by comprising:
the image acquisition module is used for acquiring a first expression image sequence and a second expression image sequence of the service object;
the image scoring module is used for classifying and identifying the first expression image sequence and the second expression image sequence by adopting a trained convolutional neural network so as to obtain corresponding emotion grade scores;
and the image comparison module is used for calculating the difference value between the score value of the first expression image sequence and the score value of the second expression image sequence and evaluating the service according to the difference value.
8. A storage device, characterized in that the storage device stores a plurality of instructions suitable for being loaded by a processor to execute the steps of the service evaluation method based on expression recognition according to any one of claims 1 to 6.
CN202010152681.9A (priority 2020-03-06, filed 2020-03-06): Service evaluation method and device based on expression recognition and storage equipment. Published as CN113361304A (pending).

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010152681.9A CN113361304A (en) 2020-03-06 2020-03-06 Service evaluation method and device based on expression recognition and storage equipment

Publications (1)

Publication Number Publication Date
CN113361304A 2021-09-07

Family

ID=77524113

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010152681.9A Pending CN113361304A (en) 2020-03-06 2020-03-06 Service evaluation method and device based on expression recognition and storage equipment

Country Status (1)

Country Link
CN (1) CN113361304A (en)

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2007043712A (en) * 2005-08-02 2007-02-15 Agere Systems Inc Phase locked loop with scaled damping capacitor
KR20190119863A (en) * 2018-04-13 2019-10-23 인하대학교 산학협력단 Video-based human emotion recognition using semi-supervised learning and multimodal networks
CN109784977A (en) * 2018-12-18 2019-05-21 深圳壹账通智能科技有限公司 Service methods of marking, device, computer equipment and storage medium
CN110322643A (en) * 2019-07-08 2019-10-11 上海卓繁信息技术股份有限公司 Intelligent government affairs services system and its application
CN110472592A (en) * 2019-08-20 2019-11-19 中国工商银行股份有限公司 Service satisfaction evaluation method and device based on Expression Recognition

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
A. Gudi: "Deep learning based FACS Action Unit occurrence and intensity estimation", 2015 11th IEEE International Conference and Workshops on Automatic Face and Gesture Recognition (FG), pp. 1-5 *
吴晨晖: "Research on Facial Expression Recognition Based on Graph Convolutional Neural Networks" (基于图卷积神经网络的人脸表情识别研究), China Masters' Theses Full-text Database (Information Science and Technology), no. 9, pp. 138-982 *

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination