CN108564007A - Emotion recognition method and apparatus based on expression recognition - Google Patents
- Publication number: CN108564007A (application number CN201810255799.7A)
- Authority
- CN
- China
- Prior art keywords
- expression
- result
- emotion identification
- expression recognition
- identified person
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data
- G06V40/10—Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
- G06V40/16—Human faces, e.g. facial parts, sketches or expressions
- G06V40/174—Facial expression recognition
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/04—Architecture, e.g. interconnection topology
- G06N3/045—Combinations of networks
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/08—Learning methods
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data
- G06V40/10—Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
- G06V40/16—Human faces, e.g. facial parts, sketches or expressions
Abstract
The invention discloses an emotion recognition method and apparatus based on expression recognition. The method comprises: capturing an image of the person to be identified and recording the acquisition time; processing the image with a face recognition algorithm and outputting a face recognition result; inputting the face recognition result into a deep neural network to obtain an expression recognition result; recording the expression recognition result and the corresponding acquisition time in an expression database in chronological order as expression data; and retrieving multiple expression data entries from the expression database and analyzing them to obtain an emotion recognition result for the person. The apparatus comprises a memory for storing a program and a processor for loading the program to execute the emotion recognition method based on expression recognition. The invention enables a robot to perceive and analyze human moods and emotions efficiently, to carry out human-computer interaction in a more effective manner, and to improve the sensory interactive experience. The invention belongs to the technical field of image recognition processing.
Description
Technical field
The present invention relates to the technical field of image recognition processing, and in particular to an emotion recognition method and apparatus based on expression recognition.
Background technology
Emotion recognition refers to building an automatic, efficient and accurate system that identifies the state of a human facial expression and then infers the person's emotional state, such as happy, sad, surprised or angry, from the facial expression information. This research has important application value in human-computer interaction, artificial intelligence and related areas, and is an important research topic in fields such as computer vision, pattern recognition and affective computing.
In technical fields that require human-computer interaction, especially in robotics, it is usually necessary to analyze a person's emotions in order to interact effectively and improve the sensory quality of the user's interactive experience. Existing human-computer interaction technology, however, lacks effective means of sentiment analysis, so a technique that can effectively identify a person's mood is urgently needed. Existing face recognition and expression recognition technology can extract the face region from an image, recognize the facial expression, and output the expression type as a result; but as an algorithm its application lags far behind the research, and it has not yet shown its due value in human-machine affective interaction.
Invention content
To solve the above technical problem, the first object of the present invention is to provide an emotion recognition method based on expression recognition, and the second object is to provide an emotion recognition apparatus based on expression recognition.
The first technical solution adopted by the present invention is:
An emotion recognition method based on expression recognition, comprising the following steps:
S1. capturing an image of the person to be identified and recording the acquisition time, and processing the image of the person with a face recognition algorithm to output a face recognition result;
S2. inputting the face recognition result into a pre-trained deep neural network for processing to obtain an expression recognition result, the expression recognition result including an expression type;
S3. recording the expression recognition result and the corresponding acquisition time in an expression database in chronological order as expression data;
S4. retrieving multiple expression data entries from the expression database and analyzing them to obtain an emotion recognition result for the person to be identified.
Further, the deep neural network is pre-trained by the following steps:
pre-training the deep neural network on the ImageNet data set;
fine-tuning the deep neural network on an improved fer-2013 data set, the improved fer-2013 data set being a data set formed by extending the fer-2013 data set with facial images crawled from the internet.
Further, the facial images crawled from the internet are facial images of people wearing glasses.
Further, the face recognition result is a video stream, and step S2 specifically comprises:
S201. inputting the frames of the face recognition result corresponding to moment t_i and to the preceding moments t_{i-1}, t_{i-2} and t_{i-3} into the pre-trained deep neural network for processing, to output tentative expression recognition results for moments t_i, t_{i-1}, t_{i-2} and t_{i-3} respectively, where i is the index of the moment;
S202. using a weighted-sum judgment method, computing a weighted sum over the tentative expression recognition results to obtain a weighted-sum result, and obtaining the expression recognition result for moment t_i from the weighted-sum result.
Further, the weighted-sum judgment method specifically comprises:
denoting the tentative expression recognition results as x(t_i), x(t_{i-1}), x(t_{i-2}) and x(t_{i-3}), where i is the index of the corresponding moment;
computing the weighted-sum result over the tentative results, where X is an expression type label, i is the index of the corresponding moment, k is the summation index, and X̄ is the weighted-sum result;
if X̄ exceeds the preset threshold, taking the tentative expression recognition result for moment t_i as the required expression recognition result for moment t_i; otherwise, taking the previously obtained expression recognition result for moment t_{i-1} as the required expression recognition result for moment t_i.
Further, step S4 specifically comprises:
S401a. retrieving from the expression database multiple expression data entries acquired continuously within the same time period;
S402a. judging whether the multiple expression data entries correspond to the same expression type, and if so, taking that expression type as the emotion recognition result.
Further, the expression types include happy, sad, angry, surprised and neutral, and the method further comprises the following steps after step S4:
S5a. if the emotion recognition result is sad, sending out information for soothing the mood of the person to be identified, and asking whether the person requests soothing music to be played;
S6a. if the emotion recognition result is angry, sending out information for prompting the person to calm down, and asking whether the person requests light music to be played;
S7a. obtaining the person's request, and playing the corresponding music according to the request.
Further, step S4 specifically comprises:
S401b. retrieving from the expression database multiple expression data entries acquired continuously within the same time period;
S402b. looking up the score corresponding to each expression data entry in a preset expression score table, and summing the scores weighted by the number of times each expression was collected within the time period, to obtain a mood score;
S403b. looking up the mood grade corresponding to the mood score in a preset mood score table, and taking the mood grade as the emotion recognition result.
Further, the mood grades include good, average and poor, and the method further comprises the following steps after step S4:
S5b. if the emotion recognition result is good, sending out information praising the person to be identified;
S6b. if the emotion recognition result is average, sending out information encouraging the person;
S7b. if the emotion recognition result is poor, sending out information showing care for the person.
The second technical solution adopted by the present invention is:
An emotion recognition apparatus based on expression recognition, comprising:
a memory for storing at least one program;
a processor for loading the at least one program to execute the emotion recognition method based on expression recognition described in the first technical solution.
The beneficial effects of the invention are as follows:
The present invention applies expression recognition technology to emotion recognition and can be used in automated fields such as robotics, enabling a robot to perceive and analyze human moods and emotions efficiently, to carry out human-computer interaction with people in a more effective manner, and to improve the sensory interactive experience.
Description of the drawings
Fig. 1 is a flow chart of the method of the present invention.
Detailed description of the embodiments
Embodiment 1
In the present embodiment, an emotion recognition method based on expression recognition, as shown in Fig. 1, comprises the following steps:
S1. capturing an image of the person to be identified and recording the acquisition time, and processing the image of the person with a face recognition algorithm to output a face recognition result;
S2. inputting the face recognition result into a pre-trained deep neural network for processing to obtain an expression recognition result, the expression recognition result including an expression type;
S3. recording the expression recognition result and the corresponding acquisition time in an expression database in chronological order as expression data;
S4. retrieving multiple expression data entries from the expression database and analyzing them to obtain an emotion recognition result for the person to be identified.
In step S1, a camera can be used to capture the image of the person to be identified, either as a single photo or as a video. The face recognition algorithm can be dlib or a similar algorithm, which locates and extracts the face region in the image of the person, and can operate on a single photo or on a video stream.
In step S2, the deep neural network can be VGG-Net16, which after pre-training has expression recognition capability: it recognizes the facial expression in the face recognition result and outputs the corresponding expression type as the expression recognition result. The expression types the network can recognize, including happy, sad, surprised, angry and neutral, are determined by the way the deep neural network is trained. A deep neural network, especially a convolutional neural network, can extract deep-level features of an image and thus output accurate expression recognition results.
In step S3, the expression database records expression data along a time axis, i.e. each expression recognition result is stored together with its acquisition time. Building the expression database allows multiple expression data entries to be analyzed together in step S4, making the emotion recognition result for the person more accurate.
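The time-axis expression database of step S3 can be sketched with Python's built-in sqlite3 module. The table name, schema and timestamp format below are illustrative assumptions; the patent only specifies that each expression recognition result is stored with its acquisition time in order.

```python
# Minimal sketch of the step-S3 expression database: each expression
# recognition result is stored with its acquisition time so that step S4
# can later query a time window. Schema and names are assumptions.
import sqlite3

conn = sqlite3.connect(":memory:")  # a file path would be used in practice
conn.execute("CREATE TABLE expression_data (acq_time TEXT, expression TEXT)")

def record(acq_time, expression):
    """Append one expression recognition result along the time axis."""
    conn.execute("INSERT INTO expression_data VALUES (?, ?)",
                 (acq_time, expression))

def window(start, end):
    """Retrieve the expressions acquired within [start, end] for step S4."""
    rows = conn.execute(
        "SELECT expression FROM expression_data "
        "WHERE acq_time BETWEEN ? AND ? ORDER BY acq_time", (start, end))
    return [r[0] for r in rows]

for s in range(5):  # five consecutive one-second acquisitions
    record(f"2018010116000{s}", "happy")
print(window("20180101160000", "20180101160004"))  # five 'happy' entries
```

Because the timestamps are stored in a sortable fixed-width format, plain string comparison in the `BETWEEN` clause suffices for window queries.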
The emotion recognition method of the present invention applies expression recognition technology to emotion recognition and can be used in automated fields such as robotics, enabling a robot to perceive and analyze human moods and emotions efficiently, to interact with people in a more effective manner, and to improve the sensory interactive experience.
As a further preferred embodiment, the deep neural network is pre-trained by the following steps:
pre-training the deep neural network on the ImageNet data set;
fine-tuning the deep neural network on an improved fer-2013 data set, the improved fer-2013 data set being formed by extending the fer-2013 data set with facial images crawled from the internet.
As a further preferred embodiment, the facial images crawled from the internet are facial images of people wearing glasses.
To make the deep neural network better suited to the method of the present invention, VGG-Net16 can be used as the deep neural network: VGG-Net16 is first pre-trained on the ImageNet data set and then fine-tuned on the improved fer-2013 data set. The following parameters are preferably used during training: batch size 64, learning rate 0.01; the result stabilizes after 40,000 iterations.
Also to make the deep neural network better suited to the method of the present invention, the improved fer-2013 data set is used instead of the traditional fer-2013 data set during training. The traditional fer-2013 data set contains relatively little data, in particular few facial images of people wearing glasses, which limits the applicability of a network trained on it. To extend the fer-2013 data set, new facial images, especially images of people wearing glasses, can be crawled from the internet and added to the fer-2013 data set to obtain the improved fer-2013 data set.
Before the deep neural network is trained on the improved fer-2013 data set, the facial images in it can also be pre-processed, including flipping, rotation, dilation, grayscale transformation, resizing and image calibration. The mean can also be subtracted from each image, for example (104., 117., 124.), to normalize it; face detection and face segmentation are then performed with dlib, after which the image is converted to grayscale and resized to 96*96.
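The normalization arithmetic described above (per-channel mean subtraction followed by grayscale conversion) can be sketched in plain Python. The mean values (104, 117, 124) come from the text; the luminance weights are the common BT.601 coefficients, used here as an assumption since the patent only says "gray processing". A real pipeline would use numpy and dlib rather than nested lists.

```python
# Sketch of the pre-processing described above: subtract per-channel means,
# then convert each (B, G, R) pixel to grayscale. Illustrative only.

MEANS = (104.0, 117.0, 124.0)  # per-channel means given in the text

def subtract_mean(pixel):
    """Subtract the preset channel mean from one (B, G, R) pixel."""
    return tuple(c - m for c, m in zip(pixel, MEANS))

def to_gray(pixel):
    """BT.601 luma (assumed formula; not specified by the patent)."""
    b, g, r = pixel
    return 0.114 * b + 0.587 * g + 0.299 * r

def preprocess(image):
    """Apply mean subtraction then grayscale to a 2-D grid of BGR pixels."""
    return [[to_gray(subtract_mean(p)) for p in row] for row in image]

img = [[(104.0, 117.0, 124.0), (204.0, 217.0, 224.0)]]
print(preprocess(img))  # first pixel becomes exactly 0.0 after mean subtraction
```

Resizing to 96*96 and the dlib face detection/segmentation steps are omitted here, as they depend on external libraries.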
As a further preferred embodiment, the face recognition result is a video stream, and step S2 specifically comprises:
S201. inputting the frames of the face recognition result corresponding to moment t_i and to the preceding moments t_{i-1}, t_{i-2} and t_{i-3} into the pre-trained deep neural network for processing, to output tentative expression recognition results for moments t_i, t_{i-1}, t_{i-2} and t_{i-3} respectively, where i is the index of the moment;
S202. using a weighted-sum judgment method, computing a weighted sum over the tentative expression recognition results to obtain a weighted-sum result, and obtaining the expression recognition result for moment t_i from the weighted-sum result.
If the face recognition algorithm in step S1 processes a video stream, the face recognition result it outputs will also be in the form of a video stream, i.e. a sequence of consecutive frames.
During image acquisition, the person to be identified may move or the imaging may be unclear, causing image blur; if a single frame of the video were identified in isolation, the recognition could easily be incorrect. To improve the accuracy of expression recognition on video, the recognition results of several consecutive frames can therefore be considered together when determining the recognition result for a given frame.
Before step S201 is executed, the expression recognition result for the frame at moment t_{i-1} has already been obtained and determined.
In step S201, to perform expression recognition on the frame at moment t_i, the frames corresponding to the preceding moments t_{i-1}, t_{i-2} and t_{i-3} can also be collected continuously. These 4 frames are then input into the deep neural network for recognition, producing 4 tentative expression recognition results. Using the weighted-sum judgment method, weights are assigned to these 4 tentative results, and the expression recognition result for moment t_i is determined from the weighted-sum result.
As a further preferred embodiment, the weighted-sum judgment method specifically comprises:
denoting the tentative expression recognition results as x(t_i), x(t_{i-1}), x(t_{i-2}) and x(t_{i-3}), where i is the index of the corresponding moment;
computing the weighted-sum result over the tentative results, where X is an expression type label, i is the index of the corresponding moment, k is the summation index, and X̄ is the weighted-sum result;
if X̄ exceeds the preset threshold, taking the tentative expression recognition result for moment t_i as the required expression recognition result for moment t_i; otherwise, taking the previously obtained expression recognition result for moment t_{i-1} as the required expression recognition result for moment t_i.
The expression type X can be happy, sad, surprised, angry, neutral, etc. According to the value of the weighted-sum result X̄, either the tentative expression recognition result corresponding to moment t_i in the current recognition pass, or the expression recognition result for moment t_{i-1} determined in the previous pass, is taken as the required expression recognition result for moment t_i.
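One plausible reading of the weighted-sum judgment can be sketched as a weighted vote over the four frames. The patent's exact formula is not reproduced in the text, so the recency weights and the 0.5 acceptance threshold below are illustrative assumptions, not the patented values.

```python
# Hypothetical sketch of the weighted-sum judgment over 4 consecutive frames:
# accept the newest tentative label only if its weighted support across the
# window exceeds a threshold; otherwise keep the previous frame's result.
# Weights and threshold are assumptions; the patent's formula is not given.

WEIGHTS = [0.4, 0.3, 0.2, 0.1]  # assumed weights for t_i, t_{i-1}, t_{i-2}, t_{i-3}
THRESHOLD = 0.5                  # assumed acceptance threshold

def smooth_expression(tentative, previous_result):
    """tentative: labels for frames [t_i, t_{i-1}, t_{i-2}, t_{i-3}]."""
    candidate = tentative[0]  # tentative result for the current moment t_i
    support = sum(w for w, label in zip(WEIGHTS, tentative) if label == candidate)
    return candidate if support > THRESHOLD else previous_result

# A blurry frame mislabels t_i as "surprised"; the window rejects it and
# falls back to the result already determined for t_{i-1}.
print(smooth_expression(["surprised", "happy", "happy", "happy"], "happy"))
print(smooth_expression(["angry", "angry", "angry", "happy"], "happy"))
```

The first call returns "happy" (the lone "surprised" frame has only 0.4 support), while the second returns "angry" (0.9 support), matching the intent described above of suppressing single-frame misrecognitions.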
To implement the step-S4 analysis of expression data into an emotion recognition result, this embodiment provides two concrete methods.
As a further preferred embodiment, step S4 specifically comprises:
S401a. retrieving from the expression database multiple expression data entries acquired continuously within the same time period;
S402a. judging whether the multiple expression data entries correspond to the same expression type, and if so, taking that expression type as the emotion recognition result.
Steps S401a and S402a are the first concrete method of analyzing expression data into an emotion recognition result. A time period, e.g. 5 s, can first be set as the minimum time unit for analyzing mood. Multiple expression data entries acquired continuously within a 5 s window are retrieved from the expression database, for example within the window 20180101160000-20180101160005, within the window 20171231120316-20171231120321, or within the 5 s preceding the current acquisition moment, and it is analyzed whether they all correspond to the same expression type. If the multiple expression data entries within the 5 s window are all of the same expression type, e.g. "happy", it can be concluded that the emotion recognition result for that 5 s window is "happy". This real-time analysis method reduces the recognition error caused by instantaneous changes in the person's mood and improves emotion recognition accuracy.
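The first analysis method above reduces to a unanimity check over a time window, which can be sketched as follows; the record format mirrors the (acquisition time, expression) pairs of step S3.

```python
# Minimal sketch of steps S401a/S402a: within one window (e.g. 5 s), conclude
# an emotion only if every recorded expression in the window is the same type.
from datetime import datetime, timedelta

def emotion_in_window(records, start, seconds=5):
    """records: list of (timestamp, expression) tuples from the expression
    database. Returns the common expression type within [start, start+seconds),
    or None if the window is empty or contains mixed expression types."""
    end = start + timedelta(seconds=seconds)
    window = [expr for ts, expr in records if start <= ts < end]
    if window and all(expr == window[0] for expr in window):
        return window[0]
    return None

t0 = datetime(2018, 1, 1, 16, 0, 0)  # window start from the example above
records = [(t0 + timedelta(seconds=s), "happy") for s in range(5)]
print(emotion_in_window(records, t0))  # happy
```

Returning None for a mixed window corresponds to drawing no conclusion for that period, which is how the method suppresses momentary expression changes.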
As a further preferred embodiment, the expression types include happy, sad, angry, surprised and neutral, and the method further comprises the following steps after step S4:
S5a. if the emotion recognition result is sad, sending out information for soothing the mood of the person to be identified, and asking whether the person requests soothing music to be played;
S6a. if the emotion recognition result is angry, sending out information for prompting the person to calm down, and asking whether the person requests light music to be played;
S7a. obtaining the person's request, and playing the corresponding music according to the request.
After the first concrete method of analyzing expression data into an emotion recognition result has been executed, the corresponding affective interaction steps S5a, S6a and S7a can also be performed. Steps S5a and S6a judge whether the person's mood is sad or angry, and then send out the relevant information and query. The information for soothing the person's mood and the information prompting the person to calm down can be voice or text; when this embodiment is applied to a humanoid robot, it can also be an expression made by the robot, e.g. the information for soothing the person's mood can be a smiling expression of the robot, and the information prompting the person to calm down can be a concerned expression of the robot.
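The interaction rules S5a-S7a can be sketched as a small lookup-and-respond routine. The message and playlist strings below are illustrative placeholders; the patent specifies only the emotion-to-action mapping, not the wording.

```python
# Sketch of affective-interaction steps S5a-S7a: map the recognized emotion
# to a soothing/calming message and, on request, a kind of music to play.
# Message and playlist strings are illustrative assumptions.

RESPONSES = {
    "sad":   ("Here is something to comfort you.", "soothing music"),   # S5a
    "angry": ("Please take a breath and calm down.", "light music"),    # S6a
}

def interact(emotion, wants_music):
    """Return (message, music or None) per steps S5a-S7a."""
    if emotion not in RESPONSES:
        return (None, None)  # other emotions trigger no intervention here
    message, playlist = RESPONSES[emotion]
    return (message, playlist if wants_music else None)  # S7a: honor request

print(interact("sad", True))    # comfort message plus soothing music
print(interact("happy", True))  # (None, None)
</n```

On a humanoid robot, the message channel could equally be a facial expression rather than voice or text, as the paragraph above notes.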
As a further preferred embodiment, step S4 specifically comprises:
S401b. retrieving from the expression database multiple expression data entries acquired continuously within the same time period;
S402b. looking up the score corresponding to each expression data entry in a preset expression score table, and summing the scores weighted by the number of times each expression was collected within the time period, to obtain a mood score;
S403b. looking up the mood grade corresponding to the mood score in a preset mood score table, and taking the mood grade as the emotion recognition result.
Steps S401b-S403b are the second concrete method of analyzing expression data into an emotion recognition result. The time period can be one day, one week, one month, one year, etc. This method performs a comprehensive analysis of the person's mood over a longer time period, yielding their approximate emotional level within that period.
The mood score can be calculated by the following formula: T = (Σ_i Q_i·N_i) / M. In the formula, T is the mood score and i is the index of the expression type; for example, i = 0, 1, 2, 3, 4 can be set to correspond to the happy, sad, surprised, angry and neutral expressions respectively. Q_i is the score of the corresponding expression type, obtained by looking it up in the expression score table; a preset expression score table is shown in Table 1, e.g. Q_1 is the score of the sad expression, i.e. 30 points. N_i is the number of times the corresponding expression data occurred within the time period, e.g. N_2 is the number of times the surprised expression was collected within the period. M is the total number of expression data entries in the period, equal to the sum of the counts of all expression types collected within the period, i.e. M = Σ_i N_i.
Table 1
Expression type | Score
Happy | 100
Neutral, surprised | 60
Sad | 30
Angry | 0
A preset mood score table is shown in Table 2. After the mood score T is calculated, the corresponding mood grade, i.e. the emotion recognition result for the person over this time period, can be looked up according to Table 2.
Table 2
Mood grade | Score
Good | 80-100
Average | 60-80
Poor | 0-60
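The second analysis method can be sketched directly from Tables 1 and 2 and the formula above. Since the grade bands in Table 2 share their boundary values, the sketch assumes boundaries are assigned to the higher grade; the example tally is hypothetical.

```python
# Sketch of steps S401b-S403b: mood score T = sum(Q_i * N_i) / M with
# M = sum(N_i), using the Table 1 scores, then a grade lookup per Table 2.
# Boundary values (80, 60) are assigned to the higher grade by assumption.

SCORES = {"happy": 100, "neutral": 60, "surprised": 60, "sad": 30, "angry": 0}

def mood_score(counts):
    """counts: {expression type: times collected in the period} -> score T."""
    total = sum(counts.values())                       # M = sum of N_i
    return sum(SCORES[e] * n for e, n in counts.items()) / total

def mood_grade(t):
    """Map the mood score T to a grade per Table 2."""
    if t >= 80:
        return "good"
    if t >= 60:
        return "average"
    return "poor"

counts = {"happy": 6, "neutral": 2, "sad": 2}  # hypothetical one-day tally
t = mood_score(counts)
print(t, mood_grade(t))  # 78.0 average
```

For the hypothetical tally, T = (6*100 + 2*60 + 2*30) / 10 = 78, which Table 2 maps to the "average" grade.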
As a further preferred embodiment, the mood grades include good, average and poor, and the method further comprises the following steps after step S4:
S5b. if the emotion recognition result is good, sending out information praising the person to be identified;
S6b. if the emotion recognition result is average, sending out information encouraging the person;
S7b. if the emotion recognition result is poor, sending out information showing care for the person.
After the second concrete method of analyzing expression data into an emotion recognition result has been executed, the corresponding affective interaction steps S5b, S6b and S7b can also be performed. Steps S5b-S7b judge the person's mood grade and then send out the relevant information. The information can be voice or text; when this embodiment is applied to a humanoid robot, it can also be an expression made by the robot, such as a smiling expression or a concerned expression.
When the robot executes steps S5b-S7b, it can also judge whether the person to be identified is facing the robot and react differently. For example, when the person is facing the robot, the information is sent out as voice or as a robot expression; when the person is not facing the robot, the information is sent via an instant-messaging tool. When the emotion recognition result for the person is poor, the information can also be forwarded to the person's relatives, asking them to show care for the person in time.
Through the affective interaction steps S5a-S7a and S5b-S7b, the interactivity of the robot can be further increased, improving the interactive experience and satisfaction of the person to be identified, making the robot more intelligent and humanized, and allowing it to serve people more effectively.
Embodiment 2
In the present embodiment, an emotion recognition apparatus based on expression recognition comprises:
a memory for storing at least one program;
a processor for loading the at least one program to execute the emotion recognition method based on expression recognition described in embodiment 1.
The memory and the processor can be mounted in a robot, the robot further comprising sensors and other necessary parts.
The above is a description of the preferred implementations of the present invention, but the invention is not limited to these embodiments. Those skilled in the art can also make various equivalent variations or replacements without departing from the spirit of the invention, and these equivalent variations or replacements are all contained within the scope defined by the claims of this application.
Claims (10)
1. An emotion recognition method based on expression recognition, characterized by comprising the following steps:
S1. capturing an image of the person to be identified and recording the acquisition time, and processing the image of the person with a face recognition algorithm to output a face recognition result;
S2. inputting the face recognition result into a pre-trained deep neural network for processing to obtain an expression recognition result, the expression recognition result including an expression type;
S3. recording the expression recognition result and the corresponding acquisition time in an expression database in chronological order as expression data;
S4. retrieving multiple expression data entries from the expression database and analyzing them to obtain an emotion recognition result for the person to be identified.
2. The emotion recognition method based on expression recognition according to claim 1, characterized in that the deep neural network is pre-trained by the following steps:
pre-training the deep neural network on the ImageNet data set;
fine-tuning the deep neural network on an improved fer-2013 data set, the improved fer-2013 data set being a data set formed by extending the fer-2013 data set with facial images crawled from the internet.
3. The emotion recognition method based on expression recognition according to claim 2, characterized in that the facial images crawled from the internet are facial images of people wearing glasses.
4. The emotion recognition method based on expression recognition according to claim 1, characterized in that the face recognition result is a video stream, and step S2 specifically comprises:
S201. inputting the frames of the face recognition result corresponding to moment t_i and to the preceding moments t_{i-1}, t_{i-2} and t_{i-3} into the pre-trained deep neural network for processing, to output tentative expression recognition results for moments t_i, t_{i-1}, t_{i-2} and t_{i-3} respectively, where i is the index of the moment;
S202. using a weighted-sum judgment method, computing a weighted sum over the tentative expression recognition results to obtain a weighted-sum result, and obtaining the expression recognition result for moment t_i from the weighted-sum result.
5. The emotion recognition method based on expression recognition according to claim 4, characterized in that the weighted-sum judgment method specifically comprises:
denoting the tentative expression recognition results as x(t_i), x(t_{i-1}), x(t_{i-2}) and x(t_{i-3}), where i is the index of the corresponding moment;
computing the weighted-sum result over the tentative results, where X is an expression type label, i is the index of the corresponding moment, k is the summation index, and X̄ is the weighted-sum result;
if X̄ exceeds the preset threshold, taking the tentative expression recognition result for moment t_i as the required expression recognition result for moment t_i; otherwise, taking the previously obtained expression recognition result for moment t_{i-1} as the required expression recognition result for moment t_i.
6. The emotion identification method based on expression recognition according to any one of claims 1-5, characterized in that step S4 specifically comprises:
S401a. obtaining, from the expression database, a plurality of expression data continuously acquired within the same period;
S402a. judging whether the plurality of expression data correspond to the same expression type, and if so, taking that expression type as the emotion identification result.
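A minimal sketch of steps S401a-S402a (Python). The claim does not say what happens when the acquired expressions disagree, so returning None for the mixed case is an assumption:

```python
def emotion_from_period(expression_data):
    """S401a/S402a: if every expression acquired within the period is of the
    same type, that type is the emotion identification result; otherwise no
    result is produced here (the claim leaves the mixed case open)."""
    types = set(expression_data)
    return types.pop() if len(types) == 1 else None

print(emotion_from_period(["happy", "happy", "happy"]))  # "happy"
print(emotion_from_period(["happy", "sad", "happy"]))    # None
```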
7. The emotion identification method based on expression recognition according to claim 6, characterized in that the expression types comprise happy, sad, angry, surprised and neutral, and that the method further comprises, after step S4:
S5a. if the emotion identification result is sad, issuing information to soothe the mood of the identified person, and asking the identified person whether to play soothing music;
S6a. if the emotion identification result is angry, issuing information prompting the identified person to calm down, and asking the identified person whether to play light music;
S7a. obtaining the request of the identified person, and playing the corresponding music according to the request.
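Steps S5a-S7a amount to a small dispatch table. A sketch in Python, where the message strings and the `ask`/`play` callbacks are illustrative placeholders, not part of the claims:

```python
RESPONSES = {
    # emotion identification result -> (message issued, kind of music offered)
    "sad":   ("Don't be sad -- would you like some soothing music?", "soothing"),
    "angry": ("Take a deep breath -- would you like some light music?", "light"),
}

def respond(emotion, ask, play):
    """S5a-S7a: for a sad or angry result, issue the message, ask whether to
    play the matching kind of music, and play it if the identified person
    agrees. Other results trigger no response here."""
    if emotion not in RESPONSES:
        return None
    message, music = RESPONSES[emotion]
    if ask(message):        # obtain the identified person's request
        play(music)
        return music
    return None

played = respond("sad", ask=lambda msg: True, play=lambda kind: None)
print(played)  # "soothing"
```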
8. The emotion identification method based on expression recognition according to any one of claims 1-5, characterized in that step S4 specifically comprises:
S401b. obtaining, from the expression database, a plurality of expression data continuously acquired within the same period;
S402b. looking up the score corresponding to each expression data in a preset expression score table, and summing the scores weighted by the number of times each expression data was acquired within the period, so as to obtain a mood score;
S403b. looking up the mood grade corresponding to the mood score in a preset mood score table, and taking the mood grade as the emotion identification result.
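Steps S401b-S403b can be sketched as follows (Python). The claims require a preset expression score table and mood score table but do not disclose their contents, so the scores and grade thresholds below are illustrative placeholders:

```python
from collections import Counter

# Illustrative placeholder tables -- not disclosed in the claims.
EXPRESSION_SCORES = {"happy": 2, "surprised": 1, "neutral": 0, "sad": -1, "angry": -2}
MOOD_GRADES = [(5, "good"), (0, "average")]  # scores below 0 fall through to "poor"

def mood_grade(expression_data):
    """S401b-S403b: weight each expression's score by how many times it was
    acquired within the period, sum to a mood score, then map the score to a
    mood grade."""
    counts = Counter(expression_data)
    score = sum(EXPRESSION_SCORES[e] * n for e, n in counts.items())
    for threshold, grade in MOOD_GRADES:
        if score >= threshold:
            return grade
    return "poor"

print(mood_grade(["happy", "happy", "happy"]))  # score 6  -> "good"
print(mood_grade(["neutral", "happy"]))         # score 2  -> "average"
print(mood_grade(["sad", "angry", "angry"]))    # score -5 -> "poor"
```

Unlike the same-type test of claim 6, this variant always yields a result for a mixed period, since every expression contributes its score to the sum.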
9. The emotion identification method based on expression recognition according to claim 8, characterized in that the mood grades comprise good, average and poor, and that the method further comprises, after step S4:
S5b. if the emotion identification result is good, issuing information praising the identified person;
S6b. if the emotion identification result is average, issuing information encouraging the identified person;
S7b. if the emotion identification result is poor, issuing information showing care for the identified person.
10. An emotion identification device based on expression recognition, characterized by comprising:
a memory for storing at least one program; and
a processor for loading the at least one program to perform the emotion identification method based on expression recognition according to any one of claims 1-9.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201810255799.7A CN108564007B (en) | 2018-03-27 | 2018-03-27 | Emotion recognition method and device based on expression recognition |
Publications (2)
Publication Number | Publication Date |
---|---|
CN108564007A true CN108564007A (en) | 2018-09-21 |
CN108564007B CN108564007B (en) | 2021-10-22 |
Family
ID=63533396
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201810255799.7A Active CN108564007B (en) | 2018-03-27 | 2018-03-27 | Emotion recognition method and device based on expression recognition |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN108564007B (en) |
Patent Citations (10)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US8538091B2 (en) * | 2007-06-29 | 2013-09-17 | Canon Kabushiki Kaisha | Image processing apparatus and method, and storage medium |
US20140050408A1 (en) * | 2012-08-14 | 2014-02-20 | Samsung Electronics Co., Ltd. | Method for on-the-fly learning of facial artifacts for facial emotion recognition |
CN105335691A (en) * | 2014-08-14 | 2016-02-17 | 南京普爱射线影像设备有限公司 | Smiling face identification and encouragement system |
CN105354527A (en) * | 2014-08-20 | 2016-02-24 | 南京普爱射线影像设备有限公司 | Negative expression recognizing and encouraging system |
CN104777910A (en) * | 2015-04-23 | 2015-07-15 | 福州大学 | Method and system for applying expression recognition to display device |
CN106123850A (en) * | 2016-06-28 | 2016-11-16 | 哈尔滨工程大学 | AUV prestowage multibeam sonar underwater topography mapping modification method |
CN107609458A (en) * | 2016-07-20 | 2018-01-19 | 平安科技(深圳)有限公司 | Emotional feedback method and device based on expression recognition |
CN106650621A (en) * | 2016-11-18 | 2017-05-10 | 广东技术师范学院 | Deep learning-based emotion recognition method and system |
CN206484561U (en) * | 2016-12-21 | 2017-09-12 | 深圳市智能机器人研究院 | A kind of intelligent domestic is accompanied and attended to robot |
CN107341688A (en) * | 2017-06-14 | 2017-11-10 | 北京万相融通科技股份有限公司 | The acquisition method and system of a kind of customer experience |
Non-Patent Citations (2)
Title |
---|
MARCUS VINICIUS ZAVAREZ ET AL.: "Cross-Database Facial Expression Recognition Based on Fine-Tuned Deep Convolutional Network", 2017 30th SIBGRAPI Conference on Graphics, Patterns and Images (SIBGRAPI) * |
ZHANG HAITING: "Human Action and Spontaneous Facial Expression Recognition Based on Deep Neural Networks", China Master's Theses Full-text Database, Information Science and Technology * |
Cited By (33)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN109670393B (en) * | 2018-09-26 | 2023-12-19 | 平安科技(深圳)有限公司 | Face data acquisition method, equipment, device and computer readable storage medium |
CN109670393A (en) * | 2018-09-26 | 2019-04-23 | 平安科技(深圳)有限公司 | Human face data acquisition method, unit and computer readable storage medium |
CN109124658A (en) * | 2018-09-27 | 2019-01-04 | 江苏银河数字技术有限公司 | Mood detection system and method based on intelligent desk pad |
CN109376621A (en) * | 2018-09-30 | 2019-02-22 | 北京七鑫易维信息技术有限公司 | A kind of sample data generation method, device and robot |
CN111127830A (en) * | 2018-11-01 | 2020-05-08 | 奇酷互联网络科技(深圳)有限公司 | Alarm method, alarm system and readable storage medium based on monitoring equipment |
CN109635680A (en) * | 2018-11-26 | 2019-04-16 | 深圳云天励飞技术有限公司 | Multitask attribute recognition approach, device, electronic equipment and storage medium |
CN109635680B (en) * | 2018-11-26 | 2021-07-06 | 深圳云天励飞技术有限公司 | Multitask attribute identification method and device, electronic equipment and storage medium |
CN109784144A (en) * | 2018-11-29 | 2019-05-21 | 北京邮电大学 | A kind of kinship recognition methods and system |
CN109829364A (en) * | 2018-12-18 | 2019-05-31 | 深圳云天励飞技术有限公司 | A kind of expression recognition method, device and recommended method, device |
CN109684978A (en) * | 2018-12-18 | 2019-04-26 | 深圳壹账通智能科技有限公司 | Employees'Emotions monitoring method, device, computer equipment and storage medium |
CN109829362A (en) * | 2018-12-18 | 2019-05-31 | 深圳壹账通智能科技有限公司 | Safety check aided analysis method, device, computer equipment and storage medium |
WO2020125217A1 (en) * | 2018-12-18 | 2020-06-25 | 深圳云天励飞技术有限公司 | Expression recognition method and apparatus and recommendation method and apparatus |
CN109584579A (en) * | 2018-12-21 | 2019-04-05 | 平安科技(深圳)有限公司 | Method for controlling traffic signal lights and computer equipment based on recognition of face |
CN109800734A (en) * | 2019-01-30 | 2019-05-24 | 北京津发科技股份有限公司 | Human facial expression recognition method and device |
CN109877806A (en) * | 2019-03-05 | 2019-06-14 | 哈尔滨理工大学 | Science and technology center's guide robot face device and control with mood resolving ability |
CN110046955A (en) * | 2019-03-12 | 2019-07-23 | 平安科技(深圳)有限公司 | Marketing method, device, computer equipment and storage medium based on recognition of face |
CN110046580A (en) * | 2019-04-16 | 2019-07-23 | 广州大学 | A kind of man-machine interaction method and system based on Emotion identification |
CN110046576A (en) * | 2019-04-17 | 2019-07-23 | 内蒙古工业大学 | A kind of method and apparatus of trained identification facial expression |
CN110154757A (en) * | 2019-05-30 | 2019-08-23 | 电子科技大学 | The multi-faceted safe driving support method of bus |
CN110472512A (en) * | 2019-07-19 | 2019-11-19 | 河海大学 | A kind of face state identification method and its device based on deep learning |
CN110427848B (en) * | 2019-07-23 | 2022-04-12 | 京东方科技集团股份有限公司 | Mental analysis system |
CN110427848A (en) * | 2019-07-23 | 2019-11-08 | 京东方科技集团股份有限公司 | A kind of psychoanalysis system |
CN110516593A (en) * | 2019-08-27 | 2019-11-29 | 京东方科技集团股份有限公司 | A kind of emotional prediction device, emotional prediction method and display device |
CN111402523A (en) * | 2020-03-24 | 2020-07-10 | 宋钰堃 | Medical alarm system and method based on facial image recognition |
CN112060080A (en) * | 2020-07-31 | 2020-12-11 | 深圳市优必选科技股份有限公司 | Robot control method and device, terminal equipment and storage medium |
CN112347236A (en) * | 2020-11-16 | 2021-02-09 | 友谊国际工程咨询股份有限公司 | Intelligent engineering consultation method and system based on AI (Artificial Intelligence) calculated quantity and computer equipment thereof |
CN112975963A (en) * | 2021-02-23 | 2021-06-18 | 广东智源机器人科技有限公司 | Robot action generation method and device and robot |
CN112975963B (en) * | 2021-02-23 | 2022-08-23 | 广东优碧胜科技有限公司 | Robot action generation method and device and robot |
CN113305856A (en) * | 2021-05-25 | 2021-08-27 | 中山大学 | Accompany type robot of intelligent recognition expression |
CN113440122A (en) * | 2021-08-02 | 2021-09-28 | 北京理工新源信息科技有限公司 | Emotion fluctuation monitoring and identification big data early warning system based on vital signs |
CN113440122B (en) * | 2021-08-02 | 2023-08-22 | 北京理工新源信息科技有限公司 | Emotion fluctuation monitoring and identifying big data early warning system based on vital signs |
CN116665273A (en) * | 2023-05-31 | 2023-08-29 | 南京林业大学 | Robot man-machine interaction method based on expression recognition and emotion quantitative analysis and calculation |
CN116665273B (en) * | 2023-05-31 | 2023-11-17 | 南京林业大学 | Robot man-machine interaction method based on expression recognition and emotion quantitative analysis and calculation |
Also Published As
Publication number | Publication date |
---|---|
CN108564007B (en) | 2021-10-22 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN108564007A (en) | Emotion identification method and apparatus based on expression recognition | |
CN108256433B (en) | Motion attitude assessment method and system | |
CN110334626B (en) | Online learning system based on emotional state | |
US9724824B1 (en) | Sensor use and analysis for dynamic update of interaction in a social robot | |
EP3644786B1 (en) | Method for treating a surface | |
CN109765991A (en) | Social interaction system is used to help system and non-transitory computer-readable storage media that user carries out social interaction | |
WO2019137538A1 (en) | Emotion representative image to derive health rating | |
CN109117952A (en) | A method of the robot emotion cognition based on deep learning | |
CN113139439B (en) | Online learning concentration evaluation method and device based on face recognition | |
CN115951786B (en) | Creation method of multi-junction creative social game by utilizing AIGC technology | |
CN114022512A (en) | Exercise assisting method, apparatus and medium | |
Parvathi et al. | Emotion Analysis Using Deep Learning | |
JP2019046476A (en) | Emotion interaction method and robot system based on humor recognition | |
CN110909621A (en) | Body-building guidance system based on vision | |
KR20190125668A (en) | Apparatus and method for analyzing emotional status of pet | |
CN111104815A (en) | Psychological assessment method and device based on emotion energy perception | |
CN109902920A (en) | Management method, device, equipment and the storage medium of user's growth system | |
CN108681412B (en) | Emotion recognition device and method based on array type touch sensor | |
CN112036328A (en) | Bank customer satisfaction calculation method and device | |
Abedi et al. | Rehabilitation exercise repetition segmentation and counting using skeletal body joints | |
Saidani et al. | An efficient human activity recognition using hybrid features and transformer model | |
CN110458076A (en) | A kind of teaching method based on computer vision and system | |
US20190213403A1 (en) | Augmented reality predictions using machine learning | |
JP7014761B2 (en) | Cognitive function estimation method, computer program and cognitive function estimation device | |
Gamage et al. | Academic depression detection using behavioral aspects for Sri Lankan university students |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||