CN108564007B - Emotion recognition method and device based on expression recognition - Google Patents

Emotion recognition method and device based on expression recognition

Info

Publication number
CN108564007B
Authority
CN
China
Prior art keywords
expression
emotion
recognition
recognition result
time
Prior art date
Legal status
Active
Application number
CN201810255799.7A
Other languages
Chinese (zh)
Other versions
CN108564007A (en)
Inventor
陈虎
谷也
盛卫华
Current Assignee
Shenzhen Academy Of Robotics
Original Assignee
Shenzhen Academy Of Robotics
Priority date
Filing date
Publication date
Application filed by Shenzhen Academy Of Robotics filed Critical Shenzhen Academy Of Robotics
Priority to CN201810255799.7A
Publication of CN108564007A
Application granted
Publication of CN108564007B
Legal status: Active
Anticipated expiration


Classifications

    • G06V40/174 Facial expression recognition (under G Physics; G06 Computing, calculating or counting; G06V Image or video recognition or understanding; G06V40/00 Recognition of biometric, human-related or animal-related patterns in image or video data; G06V40/10 Human or animal bodies; G06V40/16 Human faces, e.g. facial parts, sketches or expressions)
    • G06N3/045 Combinations of networks (under G06N Computing arrangements based on specific computational models; G06N3/00 Computing arrangements based on biological models; G06N3/02 Neural networks; G06N3/04 Architecture, e.g. interconnection topology)
    • G06N3/08 Learning methods (under G06N3/02 Neural networks)
    • G06V40/16 Human faces, e.g. facial parts, sketches or expressions (under G06V40/10 Human or animal bodies)

Abstract

The invention discloses an emotion recognition method and device based on expression recognition. The method comprises: collecting images of a recognized person and recording the collection time; processing the images with a face recognition algorithm and outputting a face recognition result; inputting the face recognition result into a deep neural network for processing to obtain an expression recognition result; recording the expression recognition result and the corresponding collection time as expression data into an expression database in chronological order; and acquiring a plurality of expression data from the expression database and analyzing them to obtain an emotion recognition result for the recognized person. The apparatus includes a memory for storing a program and a processor for loading the program to perform the emotion recognition method based on expression recognition. The invention enables a robot to efficiently perceive and analyze human emotions, allows human-computer interaction to be carried out in a more efficient manner, and improves the sensory interaction experience. The invention is applied to the technical field of image recognition processing.

Description

Emotion recognition method and device based on expression recognition
Technical Field
The invention relates to the technical field of image recognition processing, in particular to an emotion recognition method and device based on expression recognition.
Background
Emotion recognition refers to building an automatic, efficient and accurate system that recognizes the state of facial expressions and further infers a person's emotional state, such as happiness, sadness, surprise or anger, from facial expression information. This research has important application value in human-computer interaction, artificial intelligence and related areas, and is currently an important research topic in computer vision, pattern recognition, affective computing and other fields.
In the technical field of human-computer interaction, and especially in robotics, human emotion generally needs to be analyzed in order to carry out effective human-computer interaction and improve the user's sensory interaction experience. Existing face recognition and expression recognition technology can extract the face region from an image, recognize its expression and output the expression type as a result; however, the application of these algorithms lags far behind the research, and their potential value in human-computer interaction has not yet been realized.
Disclosure of Invention
In order to solve the above technical problems, a first object of the present invention is to provide an emotion recognition method based on expression recognition, and a second object is to provide an emotion recognition apparatus based on expression recognition.
The first technical scheme adopted by the invention is as follows:
an emotion recognition method based on expression recognition comprises the following steps:
s1, collecting an image of a person to be identified and recording the collection time, and processing the image of the person to be identified by using a face recognition algorithm so as to output a face recognition result;
s2, inputting the face recognition result into a pre-trained deep neural network for processing, so as to obtain an expression recognition result, wherein the expression recognition result comprises an expression type;
s3, taking the expression recognition result and the corresponding acquisition time as expression data, and sequentially recording the expression data into an expression database;
and S4, acquiring a plurality of expression data from the expression database, and analyzing according to the plurality of expression data to obtain an emotion recognition result of the recognized person.
Further, the deep neural network is pre-trained by:
pre-training the deep neural network by using an ImageNet data set;
and fine-tuning the deep neural network by using an improved fer-2013 data set, wherein the improved fer-2013 data set is formed by expanding the fer-2013 data set with face images crawled from the Internet.
Further, the face images crawled from the Internet are face images of people wearing glasses.
Further, the face recognition result is a video stream, and the step S2 specifically includes:
s201, inputting the frames of the face recognition result corresponding to time t_i and to the preceding times t_(i-1), t_(i-2) and t_(i-3) into the pre-trained deep neural network for processing, so as to output the undetermined expression recognition results corresponding to t_i, t_(i-1), t_(i-2) and t_(i-3) respectively, wherein i is the serial number of the time;
s202, performing weighted summation on the undetermined expression recognition results by using a weighted-summation judgment method to obtain a weighted summation result, and obtaining the expression recognition result at time t_i according to the weighted summation result.
Further, the weighted-summation judgment method specifically includes:
recording each undetermined expression recognition result as [formula not reproduced], wherein i is the serial number of the corresponding time;
calculating the averaged result using the following formula: [formula not reproduced], wherein X is the expression type mark, i is the serial number of the corresponding time, k is the summation serial number, and [symbol not reproduced] is the weighted summation result;
if [condition not reproduced] holds, the undetermined expression recognition result corresponding to time t_i is taken as the expression recognition result to be obtained for time t_i; otherwise, the expression recognition result previously determined for time t_(i-1) is taken as the expression recognition result for time t_i.
Further, the step S4 specifically includes:
s401a, obtaining a plurality of expression data continuously collected in the same time period from an expression database;
s402a, judging whether the expression data correspond to the same expression type, and if so, taking the expression type as an emotion recognition result.
Further, the expression types include happy, sad, angry, surprised and neutral, and the step S4 is followed by the following steps:
s5a, if the emotion recognition result is sad, sending information for soothing the mood of the recognized person, and inquiring whether the recognized person requests to play the soothing music;
s6a, if the emotion recognition result is angry, sending information for prompting the identified person to have a good mood, and inquiring whether the identified person requests to play light music;
s7a. obtain the request of the identified person and play the corresponding music according to the request.
Further, the step S4 specifically includes:
s401b, obtaining a plurality of expression data continuously collected in the same time period from an expression database;
s402b, looking up the score corresponding to each expression type in a preset expression score table, and performing a weighted summation using the number of times each expression is collected within the time period and the corresponding scores as weights, so as to obtain an emotion score;
s403b, searching the emotion grade corresponding to the emotion score in a preset emotion score table, and taking the emotion grade as an emotion recognition result.
Further, the emotion ratings include good, general and poor, and the step S4 is followed by the steps of:
s5b, if the emotion recognition result is good, sending information for agreeing with the recognized person;
s6b, if the emotion recognition result is general, sending information for encouraging the recognized person;
and S7b, if the emotion recognition result is poor, sending information for caring the recognized person.
The second technical scheme adopted by the invention is as follows:
an emotion recognition apparatus based on expression recognition, comprising:
a memory for storing at least one program;
a processor, configured to load the at least one program to execute the emotion recognition method based on expression recognition according to the first technical aspect.
The invention has the beneficial effects that:
the expression recognition technology is applied to emotion recognition, and can be applied to the automation field of robots and the like, so that the robots can efficiently sense and analyze emotions and emotions of people, man-machine interaction can be performed between the robots in a more efficient mode, and interaction experience on senses is improved.
Drawings
FIG. 1 is a flow chart of the method of the present invention.
Detailed Description
Example 1
In this embodiment, an emotion recognition method based on expression recognition, as shown in fig. 1, includes the following steps:
s1, collecting an image of a person to be identified and recording the collection time, and processing the image of the person to be identified by using a face recognition algorithm so as to output a face recognition result;
s2, inputting the face recognition result into a pre-trained deep neural network for processing, so as to obtain an expression recognition result, wherein the expression recognition result comprises an expression type;
s3, taking the expression recognition result and the corresponding acquisition time as expression data, and sequentially recording the expression data into an expression database;
and S4, acquiring a plurality of expression data from the expression database, and analyzing according to the plurality of expression data to obtain an emotion recognition result of the recognized person.
In step S1, a camera may be used to capture the image of the recognized person, either as a single photo or as a video. The face recognition algorithm may be dlib or a similar algorithm, which can recognize and extract the face region from the image of the recognized person and can process both single photos and video streams.
In step S2, the deep neural network may be Vgg-Net16, which acquires expression recognition capability after being trained in advance: it recognizes the facial expression in the face recognition result and outputs the corresponding expression type as the expression recognition result. The expression types the deep neural network can recognize include happiness, sadness, surprise, anger, neutrality and so on, and are determined by how the deep neural network is trained. A deep neural network, particularly a convolutional neural network, can extract deep features of an image and output accurate expression recognition results.
In step S3, the expression database records expression data along a time axis, that is, each expression recognition result is stored together with its collection time. The expression database makes it possible to integrate a plurality of expression data for analysis in step S4, so that the emotion recognition result for the recognized person is more accurate.
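The pipeline of steps S1-S3 can be illustrated with a short sketch. This is only a minimal illustration, assuming OpenCV for image capture, dlib for face detection, a PyTorch model standing in for the pre-trained Vgg-Net16, and an SQLite table as the expression database; the model file, label order and table schema are assumptions, not taken from the patent.

```python
# Minimal sketch of steps S1-S3 (capture, face recognition, expression recognition,
# recording to the expression database). The model file, label order and table schema
# are illustrative assumptions, not taken from the patent.
import sqlite3
import time

import cv2
import dlib
import torch
import torchvision.transforms as T

EXPRESSIONS = ["happy", "sad", "surprised", "angry", "neutral"]  # assumed label order

detector = dlib.get_frontal_face_detector()                      # face recognition algorithm (dlib)
model = torch.load("expression_vgg16.pt", map_location="cpu")    # hypothetical pre-trained network
model.eval()
preprocess = T.Compose([T.ToTensor(), T.Resize((96, 96))])

db = sqlite3.connect("expressions.db")                           # expression database (assumed schema)
db.execute("CREATE TABLE IF NOT EXISTS expression_data (collect_time TEXT, expression TEXT)")

def recognize_and_record(frame):
    """Run face detection and expression recognition on one frame and store the result."""
    collect_time = time.strftime("%Y%m%d%H%M%S")                 # S1: record the collection time
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    faces = detector(gray)                                       # S1: face recognition result
    if not faces:
        return None
    f = faces[0]
    crop = gray[max(f.top(), 0):f.bottom(), max(f.left(), 0):f.right()]
    x = preprocess(crop).unsqueeze(0)                            # S2: feed the face into the network
    with torch.no_grad():
        expression = EXPRESSIONS[model(x).argmax(dim=1).item()]
    db.execute("INSERT INTO expression_data VALUES (?, ?)", (collect_time, expression))
    db.commit()                                                  # S3: record (time, expression) in order
    return expression
```

In a real deployment such a function would be called for every captured photo or video frame, after which the analysis of step S4 operates purely on the accumulated database records.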
The emotion recognition method applies expression recognition technology to emotion recognition and can be used in robotics and other automation fields, enabling a robot to efficiently perceive and analyze a person's emotions, so that human-computer interaction can be carried out in a more efficient manner and the sensory interaction experience is improved.
Further as a preferred embodiment, the deep neural network is pre-trained by:
pre-training the deep neural network by using an ImageNet data set;
and fine-tuning the deep neural network by utilizing an improved fer-2013 data set, wherein the improved fer-2013 data set is a data set formed by expanding a face image obtained by crawling from the Internet on the basis of the fer-2013 data set.
Further preferably, the face images crawled from the Internet are face images of people wearing glasses.
To make the deep neural network better suited to the method of the present invention, Vgg-Net16 can be used as the deep neural network: it is first pre-trained on the ImageNet data set and then fine-tuned on the improved fer-2013 data set. During training, the following parameters may preferably be used: a batch size of 64 and a learning rate of 0.01; the results tend to stabilize after about 40,000 iterations.
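A minimal PyTorch sketch of this pre-train/fine-tune scheme might look as follows; it uses torchvision's ImageNet-pretrained VGG-16 as a stand-in for Vgg-Net16, an assumed five-class output head, and a hypothetical folder layout for the improved fer-2013 data set.

```python
# Minimal sketch of ImageNet pre-training followed by fine-tuning on the improved
# fer-2013 data set; VGG-16 from torchvision stands in for Vgg-Net16, and the 5-class
# head and dataset folder layout are assumptions.
import torch
import torch.nn as nn
from torch.utils.data import DataLoader
from torchvision import datasets, models, transforms

NUM_CLASSES = 5         # happy, sad, surprised, angry, neutral (assumed)
BATCH_SIZE = 64         # batch size 64, as stated above
LEARNING_RATE = 0.01    # learning rate 0.01, as stated above
MAX_ITERATIONS = 40000  # results are said to stabilize around 40,000 iterations

model = models.vgg16(weights=models.VGG16_Weights.IMAGENET1K_V1)  # ImageNet pre-training
model.classifier[6] = nn.Linear(model.classifier[6].in_features, NUM_CLASSES)

transform = transforms.Compose([
    transforms.Grayscale(num_output_channels=3),  # fer-2013-style images are grayscale
    transforms.Resize((96, 96)),                  # size 96 as in the embodiment; VGG's adaptive pooling accepts it
    transforms.RandomHorizontalFlip(),            # flipping, one of the augmentations mentioned
    transforms.ToTensor(),
])
train_set = datasets.ImageFolder("improved_fer2013/train", transform=transform)  # hypothetical path
loader = DataLoader(train_set, batch_size=BATCH_SIZE, shuffle=True)

optimizer = torch.optim.SGD(model.parameters(), lr=LEARNING_RATE, momentum=0.9)
criterion = nn.CrossEntropyLoss()

iteration = 0
model.train()
while iteration < MAX_ITERATIONS:
    for images, labels in loader:
        optimizer.zero_grad()
        loss = criterion(model(images), labels)
        loss.backward()
        optimizer.step()
        iteration += 1
        if iteration >= MAX_ITERATIONS:
            break
```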
In order to make the deep neural network better suited to the method, the improved fer-2013 data set is used in place of the traditional fer-2013 data set during training. The traditional fer-2013 data set contains relatively little data and in particular lacks face images of people wearing glasses, which limits the applicability of the trained deep neural network. To expand it, new face images, especially images of people wearing glasses, may be crawled from the Internet and added to the fer-2013 data set, forming the improved fer-2013 data set.
Before the deep neural network is trained with the improved fer-2013 data set, the face images in it may be preprocessed, including flipping, rotation, scaling, gray-level transformation, resizing and image calibration. A mean value, for example (104, 117, 124), may be subtracted from each image for normalization; dlib is then used for face detection and face segmentation, after which the image is converted to grayscale and its size is adjusted to 96.
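A minimal sketch of this per-image preprocessing, assuming OpenCV and dlib and treating the resize target as 96 x 96 pixels (the text only states a size of 96), could look like this; the augmentation steps (flipping, rotation and so on) are omitted for brevity.

```python
# Minimal sketch of the per-image preprocessing described above: mean-value normalization,
# dlib face detection and segmentation, graying, and resizing to 96 (treated here as 96 x 96).
import cv2
import dlib
import numpy as np

detector = dlib.get_frontal_face_detector()
MEAN_BGR = np.array([104.0, 117.0, 124.0], dtype=np.float32)  # example mean value from the text

def preprocess_face(image_bgr):
    """Return a 96x96 grayscale, mean-normalized face crop, or None if no face is found."""
    faces = detector(cv2.cvtColor(image_bgr, cv2.COLOR_BGR2GRAY))                # dlib face detection
    if not faces:
        return None
    f = faces[0]
    crop = image_bgr[max(f.top(), 0):f.bottom(), max(f.left(), 0):f.right()]     # face segmentation
    normalized = crop.astype(np.float32) - MEAN_BGR                              # subtract the mean value
    gray = cv2.cvtColor(normalized, cv2.COLOR_BGR2GRAY)                          # graying
    return cv2.resize(gray, (96, 96))                                            # adjust the image size to 96
```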
Further as a preferred embodiment, the face recognition result is a video stream, and the step S2 specifically includes:
s201, inputting the frames of the face recognition result corresponding to time t_i and to the preceding times t_(i-1), t_(i-2) and t_(i-3) into the pre-trained deep neural network for processing, so as to output the undetermined expression recognition results corresponding to t_i, t_(i-1), t_(i-2) and t_(i-3) respectively, wherein i is the serial number of the time;
s202, performing weighted summation on the undetermined expression recognition results by using a weighted-summation judgment method to obtain a weighted summation result, and obtaining the expression recognition result at time t_i according to the weighted summation result.
If the face recognition algorithm processes a video stream in step S1, the output face recognition result is also a video stream, i.e. a sequence of consecutive frames.
During image acquisition, motion of the recognized person or unclear imaging easily causes blurred frames, and recognizing a single frame of the video in isolation therefore easily leads to incorrect recognition.
In order to improve the accuracy of expression recognition for video pictures, the recognition results of consecutive frames of pictures can be comprehensively considered to determine the recognition result of a certain frame of picture.
Before step S201 is performed, the expression recognition result of the frame at time t_(i-1) has already been obtained and determined.
In step S201, to perform expression recognition for the frame at time t_i, the frames corresponding to the preceding times t_(i-1), t_(i-2) and t_(i-3) are also acquired. These 4 frames are then input into the deep neural network for recognition, and 4 undetermined expression recognition results are output. The weighted-summation judgment method assigns weights to the 4 undetermined expression recognition results, and the expression recognition result at time t_i is determined according to the weighted summation result.
Further, as a preferred embodiment, the weighted-summation judgment method specifically includes:
recording each undetermined expression recognition result as [formula not reproduced], wherein i is the serial number of the corresponding time;
calculating the averaged result using the following formula: [formula not reproduced], wherein X is the expression type mark, i is the serial number of the corresponding time, k is the summation serial number, and [symbol not reproduced] is the weighted summation result;
if [condition not reproduced] holds, the undetermined expression recognition result corresponding to time t_i is taken as the expression recognition result to be obtained for time t_i; otherwise, the expression recognition result previously determined for time t_(i-1) is taken as the expression recognition result for time t_i.
The expression type X may be happy, sad, surprised, angry, neutral, and so on. According to the weighted summation result, it is decided whether the undetermined expression recognition result corresponding to time t_i obtained in the current recognition pass, or the expression recognition result determined for time t_(i-1) in the previous pass, is taken as the expression recognition result for time t_i.
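Because the exact weights and threshold appear only in formula images that are not reproduced above, the sketch below is just one plausible reading of the scheme: the undetermined results are taken as class-probability vectors, averaged with equal weights over the last four frames, and accepted only when the averaged confidence exceeds an assumed threshold.

```python
# One plausible reading of steps S201/S202: per-frame class-probability vectors are
# averaged over the last four frames; the equal weights and the 0.5 threshold are
# assumptions, since the patent's formulas are given only as images.
from collections import deque

import numpy as np

EXPRESSIONS = ["happy", "sad", "surprised", "angry", "neutral"]  # assumed label order
WINDOW = 4        # frames at times t_(i-3) .. t_i
THRESHOLD = 0.5   # assumed confidence threshold

pending = deque(maxlen=WINDOW)  # undetermined (per-frame) expression recognition results

def smoothed_expression(frame_probs, previous):
    """Combine the undetermined results of the last four frames into the result for time t_i."""
    pending.append(np.asarray(frame_probs))
    if len(pending) < WINDOW:
        return previous                      # not enough history yet; keep the prior result
    averaged = np.mean(pending, axis=0)      # weighted summation with equal weights (assumed)
    best = int(np.argmax(averaged))
    if averaged[best] >= THRESHOLD:          # judgment condition (assumed form)
        return EXPRESSIONS[best]             # accept the result for time t_i
    return previous                          # otherwise keep the result determined for t_(i-1)
```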
In order to perform step S4 to obtain emotion recognition results according to expression data analysis, the present embodiment provides two specific implementation methods.
Further, as a preferred embodiment, the step S4 specifically includes:
s401a, obtaining a plurality of expression data continuously collected in the same time period from an expression database;
s402a, judging whether the expression data correspond to the same expression type, and if so, taking the expression type as an emotion recognition result.
Steps S401a and S402a are a first specific implementation for obtaining the emotion recognition result from the expression data. A time period, for example 5 s, may first be set as the minimum time unit for analyzing the emotion. A plurality of expression data continuously collected within one such 5 s window is acquired from the expression database, for example the window from 2018-01-01 16:00:00 to 16:00:05 (collection times 20180101160000 to 20180101160005) or from 2017-12-31 12:03:16 to 12:03:21 (20171231120316 to 20171231120321), or the expression data collected at the current time and within the preceding 5 s, and it is analyzed whether these expression data all correspond to the same expression type. If all expression data in a 5 s window are of the same expression type, for example "happy", the emotion recognition result corresponding to that window is determined to be "happy". This real-time analysis reduces recognition errors caused by momentary changes in the recognized person's mood and improves emotion recognition accuracy.
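A minimal sketch of this first analysis method, reusing the hypothetical SQLite table and the YYYYMMDDHHMMSS collection-time format from the earlier sketch, might be:

```python
# Minimal sketch of the first analysis method (S401a/S402a): fetch the expression data
# collected in one 5-second window and return the expression type only if every record
# in the window agrees. Table and column names follow the earlier sketch (assumptions).
import sqlite3

def emotion_in_window(db, start, end):
    """start/end are collection times in the YYYYMMDDHHMMSS format used above,
    e.g. "20180101160000" and "20180101160005"."""
    rows = db.execute(
        "SELECT expression FROM expression_data WHERE collect_time BETWEEN ? AND ?",
        (start, end),
    ).fetchall()
    if not rows:
        return None
    types = {r[0] for r in rows}
    return types.pop() if len(types) == 1 else None  # same type throughout -> emotion result
```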
Further as a preferred embodiment, the expression types include happy, sad, angry, surprised and neutral, and the step S4 is followed by the following steps:
s5a, if the emotion recognition result is sad, sending information for soothing the mood of the recognized person, and inquiring whether the recognized person requests to play the soothing music;
s6a, if the emotion recognition result is angry, sending information for prompting the identified person to have a good mood, and inquiring whether the identified person requests to play light music;
s7a. obtain the request of the identified person and play the corresponding music according to the request.
After the first specific implementation for obtaining the emotion recognition result from the expression data has been carried out, the corresponding emotion interaction steps S5a, S6a and S7a may further be performed. Steps S5a and S6a determine whether the emotion of the recognized person is sad or angry and then send the related information and pose the query. The information for soothing the recognized person's mood and the information reminding the recognized person to keep a good mood may be voice information or text information; when this embodiment is applied to a humanoid robot, the information may also be an expression made by the robot, for example, the soothing information may be a smiling expression of the robot and the reminding information may be a concerned expression of the robot.
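A simple dispatch for the interaction steps S5a-S7a could be sketched as follows; the message texts and the say / ask_yes_no / play callbacks are placeholders, not taken from the patent.

```python
# Illustrative dispatch for the interaction steps S5a-S7a. The message texts and the
# say / ask_yes_no / play callbacks are placeholders, not taken from the patent.
def react_to_emotion(emotion, say, ask_yes_no, play):
    if emotion == "sad":
        say("It looks like you are feeling down; let me keep you company.")  # soothing information
        if ask_yes_no("Would you like me to play some soothing music?"):     # S5a query
            play("soothing")                                                  # S7a: play on request
    elif emotion == "angry":
        say("Take a deep breath and keep a good mood.")                      # reminder information
        if ask_yes_no("Would you like me to play some light music?"):        # S6a query
            play("light")                                                     # S7a: play on request
```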
Further, as a preferred embodiment, the step S4 specifically includes:
s401b, obtaining a plurality of expression data continuously collected in the same time period from an expression database;
s402b, looking up the score corresponding to each expression type in a preset expression score table, and performing a weighted summation using the number of times each expression is collected within the time period and the corresponding scores as weights, so as to obtain an emotion score;
s403b, searching the emotion grade corresponding to the emotion score in a preset emotion score table, and taking the emotion grade as an emotion recognition result.
Steps S401b-S403b are a second specific implementation method for obtaining emotion recognition results according to expression data analysis. The time period may be one day, one week, one month, one year, etc. The method can comprehensively analyze the emotion of the identified person in a longer time period to obtain the approximate emotion level of the identified person in the time period.
The emotion score may be calculated by the following formula:
T = (Σ_i Q_i × N_i) / M_k
where T is the emotion score; i is the number indicating the expression type, for example i = 0, 1, 2, 3, 4 corresponding to the happy, sad, surprised, angry and neutral expressions respectively; Q_i is the score of the corresponding expression type, obtained from the preset expression score table shown in Table 1, for example Q_1, the score corresponding to the sad expression, is 30 points; N_i is the number of times the corresponding expression data occurs in the time period, for example N_2 is the number of times a surprised expression is collected within the time period; and M_k is the total number of expression data in the time period, which equals the sum of the collection counts of all expression types, i.e. M_k = Σ_i N_i.
TABLE 1
Expression type      Score
Happy                100
Neutral, surprised   60
Sad                  30
Angry                0
A preset emotion score table is shown in table 2, and after the emotion score T is calculated, the corresponding emotion level, that is, the emotion recognition result of the person recognized in the time period, can be searched according to table 2.
TABLE 2
Emotion level   Score
Good            80-100
General         60-80
Poor            0-60
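Combining the score formula with Tables 1 and 2, the second analysis method (steps S401b-S403b) can be sketched as follows; the expression labels and the treatment of the range boundaries (for example, a score of exactly 80 mapping to "good") are assumptions.

```python
# Minimal sketch of the second analysis method (S401b-S403b): weight each expression's
# score from Table 1 by its collection count, normalize by the total count to get the
# emotion score T, and map T to an emotion level using Table 2.
from collections import Counter

EXPRESSION_SCORES = {"happy": 100, "neutral": 60, "surprised": 60, "sad": 30, "angry": 0}  # Table 1

def emotion_level(expressions):
    """expressions: the expression data collected in the time period (e.g. one day or one week)."""
    if not expressions:
        return None
    counts = Counter(expressions)                 # N_i: collection count per expression type
    total = sum(counts.values())                  # M_k: total number of expression data
    score = sum(EXPRESSION_SCORES[e] * n for e, n in counts.items()) / total  # emotion score T
    if score >= 80:
        return "good"      # Table 2: 80-100
    if score >= 60:
        return "general"   # Table 2: 60-80
    return "poor"          # Table 2: 0-60
```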
Further as a preferred embodiment, the emotion rating includes good, general and poor, and the step S4 is followed by the following steps:
s5b, if the emotion recognition result is good, sending information for agreeing with the recognized person;
s6b, if the emotion recognition result is general, sending information for encouraging the recognized person;
and S7b, if the emotion recognition result is poor, sending information for caring the recognized person.
After the second specific implementation for obtaining the emotion recognition result from the expression data has been carried out, the corresponding emotion interaction steps S5b, S6b and S7b may further be performed. Steps S5b-S7b determine the emotion level of the recognized person and then send the related information. The information sent may be voice information or text information; when this embodiment is applied to a humanoid robot, the information may also be an expression made by the robot, such as a smiling expression or a concerned expression.
When the robot performs steps S5b-S7b, it may also determine whether the recognized person is in front of the robot and react differently. For example, when the recognized person is in front of the robot, the information is delivered as voice or as a robot expression; when the recognized person is not in front of the robot, the information is sent through an instant messaging tool. For a recognized person with a poor emotion recognition result, information may also be sent to the person's relatives so that care can be sought for the recognized person in time.
Through the emotion interaction steps S5a-S7a and S5b-S7b, the interactivity of the robot can be further improved, the interaction experience and satisfaction of the recognized person are enhanced, and the robot becomes more intelligent and humanized, so that it can serve people more practically.
Example 2
In this embodiment, an emotion recognition device based on expression recognition includes:
a memory for storing at least one program;
a processor for loading the at least one program to perform the emotion recognition method based on expression recognition as described in embodiment 1.
The memory and processor may be mounted on the robot, which also includes sensors and other necessary components.
While the preferred embodiments of the present invention have been illustrated and described, it will be understood by those skilled in the art that various changes in form and details may be made therein without departing from the spirit and scope of the invention as defined by the appended claims.

Claims (8)

1. An emotion recognition method based on expression recognition is characterized by comprising the following steps:
s1, collecting an image of a person to be identified and recording the collection time, and processing the image of the person to be identified by using a face recognition algorithm so as to output a face recognition result;
s2, inputting the face recognition result into a pre-trained deep neural network for processing, so as to obtain an expression recognition result, wherein the expression recognition result comprises an expression type;
s3, taking the expression recognition result and the corresponding acquisition time as expression data, and sequentially recording the expression data into an expression database;
s4, acquiring a plurality of expression data from an expression database, and analyzing according to the expression data to obtain an emotion recognition result of the recognized person;
the face recognition result is a video stream, and the step S2 specifically includes:
s201, inputting the frames of the face recognition result corresponding to time t_i and to the preceding times t_(i-1), t_(i-2) and t_(i-3) into the pre-trained deep neural network for processing, so as to output the undetermined expression recognition results corresponding to t_i, t_(i-1), t_(i-2) and t_(i-3) respectively, wherein i is the serial number of the time;
s202, performing weighted summation on the undetermined expression recognition results by using a weighted-summation judgment method to obtain a weighted summation result, and obtaining the expression recognition result at time t_i according to the weighted summation result;
the weighted-summation judgment method specifically includes:
recording each undetermined expression recognition result as [formula not reproduced], wherein i is the serial number of the corresponding time;
calculating the averaged result using the following formula: [formula not reproduced], wherein X is the expression type mark, i is the serial number of the corresponding time, k is the summation serial number, and [symbol not reproduced] is the weighted summation result;
if [condition not reproduced] holds, the undetermined expression recognition result corresponding to time t_i is taken as the expression recognition result to be obtained for time t_i; otherwise, the expression recognition result previously determined for time t_(i-1) is taken as the expression recognition result for time t_i.
2. The emotion recognition method based on expression recognition, as claimed in claim 1, wherein the deep neural network is pre-trained by the following steps:
pre-training the deep neural network by using an ImageNet data set;
and fine-tuning the deep neural network by utilizing an improved fer-2013 data set, wherein the improved fer-2013 data set is a data set formed by expanding a face image obtained by crawling from the Internet on the basis of the fer-2013 data set.
3. The emotion recognition method based on expression recognition, as claimed in claim 2, wherein the face image crawled from the internet is a face image with glasses.
4. The emotion recognition method based on expression recognition according to any one of claims 1 to 3, wherein the step S4 specifically includes:
s401a, obtaining a plurality of expression data continuously collected in the same time period from an expression database;
s402a, judging whether the expression data correspond to the same expression type, and if so, taking the expression type as an emotion recognition result.
5. The emotion recognition method based on expression recognition, wherein the expression types include happy, sad, angry, surprised and neutral, and the step S4 is followed by the following steps:
s5a, if the emotion recognition result is sad, sending information for soothing the mood of the recognized person, and inquiring whether the recognized person requests to play the soothing music;
s6a, if the emotion recognition result is angry, sending information for prompting the identified person to have a good mood, and inquiring whether the identified person requests to play light music;
s7a. obtain the request of the identified person and play the corresponding music according to the request.
6. The emotion recognition method based on expression recognition according to any one of claims 1 to 3, wherein the step S4 specifically includes:
s401b, obtaining a plurality of expression data continuously collected in the same time period from an expression database;
s402b, looking up the score corresponding to each expression type in a preset expression score table, and performing a weighted summation using the number of times each expression is collected within the time period and the corresponding scores as weights, so as to obtain an emotion score;
s403b, searching the emotion grade corresponding to the emotion score in a preset emotion score table, and taking the emotion grade as an emotion recognition result.
7. The emotion recognition method based on expression recognition, as claimed in claim 6, wherein the emotion levels include good, general and poor, and said step S4 is followed by the steps of:
s5b, if the emotion recognition result is good, sending information for agreeing with the recognized person;
s6b, if the emotion recognition result is general, sending information for encouraging the recognized person;
and S7b, if the emotion recognition result is poor, sending information for caring the recognized person.
8. An emotion recognition apparatus based on expression recognition, comprising:
a memory for storing at least one program;
a processor for loading the at least one program to perform the method of emotion recognition based on expression recognition of any of claims 1-7.
CN201810255799.7A 2018-03-27 2018-03-27 Emotion recognition method and device based on expression recognition Active CN108564007B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201810255799.7A CN108564007B (en) 2018-03-27 2018-03-27 Emotion recognition method and device based on expression recognition

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201810255799.7A CN108564007B (en) 2018-03-27 2018-03-27 Emotion recognition method and device based on expression recognition

Publications (2)

Publication Number Publication Date
CN108564007A CN108564007A (en) 2018-09-21
CN108564007B true CN108564007B (en) 2021-10-22

Family

ID=63533396

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201810255799.7A Active CN108564007B (en) 2018-03-27 2018-03-27 Emotion recognition method and device based on expression recognition

Country Status (1)

Country Link
CN (1) CN108564007B (en)

Families Citing this family (25)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109670393B (en) * 2018-09-26 2023-12-19 平安科技(深圳)有限公司 Face data acquisition method, equipment, device and computer readable storage medium
CN109124658A (en) * 2018-09-27 2019-01-04 江苏银河数字技术有限公司 Mood detection system and method based on intelligent desk pad
CN109376621A (en) * 2018-09-30 2019-02-22 北京七鑫易维信息技术有限公司 A kind of sample data generation method, device and robot
CN111127830A (en) * 2018-11-01 2020-05-08 奇酷互联网络科技(深圳)有限公司 Alarm method, alarm system and readable storage medium based on monitoring equipment
CN109635680B (en) * 2018-11-26 2021-07-06 深圳云天励飞技术有限公司 Multitask attribute identification method and device, electronic equipment and storage medium
CN109784144A (en) * 2018-11-29 2019-05-21 北京邮电大学 A kind of kinship recognition methods and system
CN109684978A (en) * 2018-12-18 2019-04-26 深圳壹账通智能科技有限公司 Employees'Emotions monitoring method, device, computer equipment and storage medium
CN109829364A (en) * 2018-12-18 2019-05-31 深圳云天励飞技术有限公司 A kind of expression recognition method, device and recommended method, device
CN109829362A (en) * 2018-12-18 2019-05-31 深圳壹账通智能科技有限公司 Safety check aided analysis method, device, computer equipment and storage medium
CN109584579B (en) * 2018-12-21 2022-03-01 平安科技(深圳)有限公司 Traffic signal lamp control method based on face recognition and computer equipment
CN109800734A (en) * 2019-01-30 2019-05-24 北京津发科技股份有限公司 Human facial expression recognition method and device
CN109877806A (en) * 2019-03-05 2019-06-14 哈尔滨理工大学 Science and technology center's guide robot face device and control with mood resolving ability
CN110046580A (en) * 2019-04-16 2019-07-23 广州大学 A kind of man-machine interaction method and system based on Emotion identification
CN110046576A (en) * 2019-04-17 2019-07-23 内蒙古工业大学 A kind of method and apparatus of trained identification facial expression
CN110154757A (en) * 2019-05-30 2019-08-23 电子科技大学 The multi-faceted safe driving support method of bus
CN110472512B (en) * 2019-07-19 2022-08-05 河海大学 Face state recognition method and device based on deep learning
CN110427848B (en) * 2019-07-23 2022-04-12 京东方科技集团股份有限公司 Mental analysis system
CN110516593A (en) * 2019-08-27 2019-11-29 京东方科技集团股份有限公司 A kind of emotional prediction device, emotional prediction method and display device
CN111402523A (en) * 2020-03-24 2020-07-10 宋钰堃 Medical alarm system and method based on facial image recognition
CN112060080A (en) * 2020-07-31 2020-12-11 深圳市优必选科技股份有限公司 Robot control method and device, terminal equipment and storage medium
CN112347236A (en) * 2020-11-16 2021-02-09 友谊国际工程咨询股份有限公司 Intelligent engineering consultation method and system based on AI (Artificial Intelligence) calculated quantity and computer equipment thereof
CN112975963B (en) * 2021-02-23 2022-08-23 广东优碧胜科技有限公司 Robot action generation method and device and robot
CN113305856B (en) * 2021-05-25 2022-11-15 中山大学 Accompany type robot of intelligent recognition expression
CN113440122B (en) * 2021-08-02 2023-08-22 北京理工新源信息科技有限公司 Emotion fluctuation monitoring and identifying big data early warning system based on vital signs
CN116665273B (en) * 2023-05-31 2023-11-17 南京林业大学 Robot man-machine interaction method based on expression recognition and emotion quantitative analysis and calculation

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104777910A (en) * 2015-04-23 2015-07-15 福州大学 Method and system for applying expression recognition to display device
CN206484561U (en) * 2016-12-21 2017-09-12 深圳市智能机器人研究院 A kind of intelligent domestic is accompanied and attended to robot
CN107341688A (en) * 2017-06-14 2017-11-10 北京万相融通科技股份有限公司 The acquisition method and system of a kind of customer experience

Family Cites Families (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP4974788B2 (en) * 2007-06-29 2012-07-11 キヤノン株式会社 Image processing apparatus, image processing method, program, and storage medium
US9405962B2 (en) * 2012-08-14 2016-08-02 Samsung Electronics Co., Ltd. Method for on-the-fly learning of facial artifacts for facial emotion recognition
CN105335691A (en) * 2014-08-14 2016-02-17 南京普爱射线影像设备有限公司 Smiling face identification and encouragement system
CN105354527A (en) * 2014-08-20 2016-02-24 南京普爱射线影像设备有限公司 Negative expression recognizing and encouraging system
CN106123850B (en) * 2016-06-28 2018-07-06 哈尔滨工程大学 AUV prestowage multibeam sonars underwater topography surveys and draws modification method
CN107609458A (en) * 2016-07-20 2018-01-19 平安科技(深圳)有限公司 Emotional feedback method and device based on expression recognition
CN106650621A (en) * 2016-11-18 2017-05-10 广东技术师范学院 Deep learning-based emotion recognition method and system


Also Published As

Publication number Publication date
CN108564007A (en) 2018-09-21


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant