CN106228145B - Facial expression recognition method and device - Google Patents
- Publication number: CN106228145B (application CN201610631259.5A)
- Authority: CN (China)
- Prior art keywords: expression, identified, human face, parameter group, face region
- Legal status: Active (the legal status is an assumption and is not a legal conclusion; Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed)
Classifications
- G - PHYSICS
  - G06 - COMPUTING; CALCULATING OR COUNTING
    - G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
      - G06V40/00 - Recognition of biometric, human-related or animal-related patterns in image or video data
        - G06V40/10 - Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
          - G06V40/16 - Human faces, e.g. facial parts, sketches or expressions
            - G06V40/172 - Classification, e.g. identification
            - G06V40/174 - Facial expression recognition
Landscapes
- Engineering & Computer Science (AREA)
- Health & Medical Sciences (AREA)
- General Health & Medical Sciences (AREA)
- Oral & Maxillofacial Surgery (AREA)
- Human Computer Interaction (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Multimedia (AREA)
- Theoretical Computer Science (AREA)
- Image Analysis (AREA)
Abstract
Embodiments of the present invention provide a facial expression recognition method. The method comprises: obtaining a face region to be recognized; determining the expression parameter group corresponding to the face region to be recognized; and determining the expression label of the face region to be recognized according to a pre-stored correspondence between expression parameter groups and expression labels together with the determined expression parameter group, wherein the expression label is used to describe a facial expression. The method provides the user with a single, definite facial expression recognition result, meeting the user's need for an unambiguous result and delivering a better experience. Embodiments of the present invention also provide a facial expression recognition device.
Description
Technical field
Embodiments of the present invention relate to the technical field of face recognition and, more specifically, to a facial expression recognition method and device.
Background technique
This section is intended to provide background or context for the embodiments of the present invention set forth in the claims. The description herein is not admitted to be prior art merely by virtue of its inclusion in this section.
With the continuous development of face recognition technology, facial expression recognition has also gained favor. Currently, Microsoft has developed a facial expression recognition interface with which a user can recognize the expression of the face in a face region of an image. Specifically, the values of expression parameters in eight dimensions are obtained from the face region and supplied to the user side for display as one expression parameter group. The eight dimensions are: anger, contempt, disgust, fear, happiness, neutral, sadness, and surprise. Facial expressions and expression parameter groups correspond one to one, and the value of each expression parameter characterizes the score of the current facial expression under that parameter.
Although this recognition mode yields the values of expression parameters in multiple dimensions, the user may be unable to derive a single, definite facial expression from those values. For example, when the values of the sadness and neutral parameters are each 0.5 and the values of all other parameters are 0, the user cannot determine whether the current expression is sad or neutral, so the user's need for one definite facial expression recognition result goes unmet.
Summary of the invention
The facial expression recognition method in the prior art cannot satisfy the user's need for a single, definite recognition result. An improved facial expression recognition method is therefore highly desirable, one that provides the user with a definite recognition result and thus a better experience.

In this context, embodiments of the present invention are intended to provide a facial expression recognition method and device.
In a first aspect of embodiments of the present invention, a facial expression recognition method is provided, comprising:

obtaining a face region to be recognized;

determining the expression parameter group corresponding to the face region to be recognized; and

determining the expression label of the face region to be recognized according to a pre-stored correspondence between expression parameter groups and expression labels and the determined expression parameter group corresponding to the face region to be recognized, wherein the expression label is used to describe a facial expression.
In a second aspect of embodiments of the present invention, a facial expression recognition device is provided, comprising:

an obtaining module, configured to obtain a face region to be recognized;

a parameter determination module, configured to determine the expression parameter group corresponding to the face region to be recognized; and

an expression determination module, configured to determine the expression label of the face region to be recognized according to a pre-stored correspondence between expression parameter groups and expression labels and the determined expression parameter group corresponding to the face region to be recognized, wherein the expression label is used to describe a facial expression.
In a third aspect of embodiments of the present invention, a facial expression recognition device is provided which may include, for example, a memory and a processor, wherein the processor is configured to read a program in the memory and execute the following process:

obtaining a face region to be recognized;

determining the expression parameter group corresponding to the face region to be recognized; and

determining the expression label of the face region to be recognized according to a pre-stored correspondence between expression parameter groups and expression labels and the determined expression parameter group corresponding to the face region to be recognized, wherein the expression label is used to describe a facial expression.
In a fourth aspect of embodiments of the present invention, a program product is provided which comprises program code that, when the program product is run, executes the following process:

obtaining a face region to be recognized;

determining the expression parameter group corresponding to the face region to be recognized; and

determining the expression label of the face region to be recognized according to a pre-stored correspondence between expression parameter groups and expression labels and the determined expression parameter group corresponding to the face region to be recognized, wherein the expression label is used to describe a facial expression.
With the facial expression recognition method and device of embodiments of the present invention, the expression label of the face region to be recognized is determined through a pre-stored correspondence between expression parameter groups and expression labels, and that label describes the facial expression of the region. The user is thus provided with a single, definite facial expression recognition result, the user's need for such a result is met, and the user experience is improved.
Description of the drawings
By reading the following detailed description with reference to the accompanying drawings, the above and other objects, features, and advantages of the exemplary embodiments of the present invention will become easy to understand. In the drawings, several embodiments of the present invention are shown by way of example and not limitation, in which:
Fig. 1 schematically shows an application scenario of an embodiment of the present invention;

Fig. 2 schematically shows a flow diagram of a facial expression recognition method according to an embodiment of the present invention;

Fig. 3 schematically shows a flow diagram of a method, according to an embodiment of the present invention, for switching the expression label of the face region to be recognized displayed at the set position;

Fig. 4 schematically shows a flow diagram of a method for determining the expression label of the face region to be recognized according to an embodiment of the present invention;

Fig. 5 schematically shows a structural diagram of a facial expression recognition device according to an embodiment of the present invention.

In the drawings, identical or corresponding reference numerals indicate identical or corresponding parts.
Detailed description of embodiments
The principle and spirit of the present invention are described below with reference to several illustrative embodiments. It should be understood that these embodiments are provided solely so that those skilled in the art can better understand and practice the present invention, and not to limit the scope of the present invention in any way. Rather, they are provided so that this disclosure will be thorough and complete and will fully convey the scope of the disclosure to those skilled in the art.
Those skilled in the art will appreciate that embodiments of the present invention can be implemented as a system, an apparatus, a device, a method, or a computer program product. Accordingly, the present disclosure may be embodied in the following forms: entirely hardware, entirely software (including firmware, resident software, microcode, etc.), or a combination of hardware and software.
According to embodiments of the present invention, a facial expression recognition method and device are proposed.

Herein, any number of elements in the drawings is for example rather than limitation, and any naming is used only for distinction and carries no limiting meaning.

Below, the principle and spirit of the present invention are explained in detail with reference to several representative embodiments of the invention.
Overview of the invention
The inventors discovered that, in the prior art, although the facial expression recognition mode can obtain the values of expression parameters in multiple dimensions, the user may be unable to derive a single, definite facial expression from those values. For example, when the values of the sadness and neutral parameters are each 0.5 and the values of all other parameters are 0, the user cannot determine whether the current expression is sad or neutral, so the user's need for one definite facial expression recognition result is not met. The prior art lacks an improved facial expression recognition method that can satisfy this need.
To this end, the present invention provides a facial expression recognition method and device. The facial expression recognition method may include: obtaining a face region to be recognized; determining the expression parameter group corresponding to the face region to be recognized; and determining the expression label of the face region to be recognized according to a pre-stored correspondence between expression parameter groups and expression labels and the determined expression parameter group, wherein the expression label is used to describe a facial expression.
Having introduced the basic principle of the present invention, various non-limiting embodiments of the invention are described in detail below.
Application scenarios overview
Referring first to Fig. 1, which is a schematic diagram of an application scenario of the facial expression recognition method provided by an embodiment of the present invention, the scenario includes a user terminal 101 and a server 102. A user 10 captures an image with the camera in the user terminal 101, and the captured image is passed to the facial expression recognition application in the user terminal 101. That application obtains the face region to be recognized in the image; determines the expression parameter group corresponding to the face region to be recognized; and determines the expression label of the face region to be recognized according to a pre-stored correspondence between expression parameter groups and expression labels and the determined expression parameter group, wherein the expression label is used to describe a facial expression. The user terminal 101 periodically obtains the correspondence between expression parameter groups and expression labels from the server 102 and saves it. Alternatively, the facial expression recognition application in the user terminal 101 may send the captured image to the server 102; the server 102 then obtains the face region to be recognized in the image, determines its corresponding expression parameter group, determines the expression label of the face region to be recognized according to the pre-stored correspondence and the determined expression parameter group, and sends the determined expression label back to the facial expression recognition application in the user terminal 101.
Illustrative methods
Below, with reference to the application scenario of Fig. 1, the facial expression recognition method of illustrative embodiments of the present invention is described in conjunction with Fig. 2 to Fig. 4. It should be noted that the above application scenario is shown merely to facilitate understanding of the spirit and principle of the present invention, and embodiments of the present invention are not limited in this respect. Rather, embodiments of the present invention can be applied to any applicable scenario.
Fig. 2 is a flow diagram of one embodiment of the facial expression recognition method provided by the present invention, mainly covering the facial expression recognition process. The executing subject may be the user terminal 101 in the application scenario overview. As shown in Fig. 2, the facial expression recognition method provided by an embodiment of the present invention includes the following steps:
Step 201: obtain a face region to be recognized.

In this step, the face region to be recognized is obtained from an image. There may be only one face region to be recognized in the image, or there may be several, each containing one face.
Step 202: determine the expression parameter group corresponding to the face region to be recognized.

In this step, the facial expression recognition interface developed by Microsoft, mentioned in the background section, is used to determine the expression parameter group corresponding to the face region to be recognized. Any other existing facial expression recognition interface that can produce an expression parameter group may be used instead; the details are not elaborated here. The expression parameters in the group obtained by recognizing the face region to be recognized through Microsoft's interface comprise eight facial expression parameters: anger, contempt, disgust, fear, happiness, neutral, sadness, and surprise.
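The eight-dimension expression parameter group above can be sketched as a simple fixed-order structure. The dimension names come from the text; the helper function and its name are illustrative assumptions, not part of Microsoft's actual interface:

```python
# A hypothetical representation of the eight-dimension expression parameter
# group described above. The dimension names come from the text; the helper
# below is a stand-in, not Microsoft's actual interface.
EXPRESSION_DIMENSIONS = (
    "anger", "contempt", "disgust", "fear",
    "happiness", "neutral", "sadness", "surprise",
)

def as_parameter_group(scores):
    """Normalize a {dimension: value} dict into a fixed-order tuple,
    filling missing dimensions with 0.0."""
    return tuple(float(scores.get(dim, 0.0)) for dim in EXPRESSION_DIMENSIONS)

# Example: the ambiguous case from the background section --
# sadness and neutral both 0.5, everything else 0.
ambiguous = as_parameter_group({"sadness": 0.5, "neutral": 0.5})
```

A fixed ordering like this is what makes the per-parameter comparison in the later similarity computation well defined.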
Step 203: determine the expression label of the face region to be recognized according to the pre-stored correspondence between expression parameter groups and expression labels and the determined expression parameter group corresponding to the face region to be recognized, wherein the expression label is used to describe a facial expression.
In this step, the correspondence between expression parameter groups and expression labels is stored in advance. Specifically, it may be pre-stored as follows: for each sample face region, manually recognize the facial expression in the sample face region and record the expression label of the sample face region; obtain the expression parameter group corresponding to the sample face region through a facial expression recognition interface; and save the expression label of the sample face region together with its corresponding expression parameter group. This yields the pre-stored correspondence between expression parameter groups and expression labels.
The expression label is used to describe a facial expression; for example, an expression label may read "Are you teasing me?", "Smiling slyly", "Extremely sad", and so on. Labels of this kind describe the facial expression in the face region to be recognized, giving the user one definite recognition result rather than a set of facial expression parameters.
Preferably, an expression label may include Chinese together with the corresponding English, and may also include the corresponding Japanese, Korean, and so on; no limitation is placed here. Describing the facial expression in multiple languages in this way can make the recognition result more engaging and improve the user experience.
It should be noted that the executing subject of this embodiment of the present invention may also be the server in the application scenario. In that case, the user 10 sends the image to the server via the user terminal 101, the server executes steps 201-203, and the determined expression label is returned to the user terminal 101.
With the facial expression recognition method provided by this embodiment of the present invention, the expression label of the face region to be recognized is determined through the pre-stored correspondence between expression parameter groups and expression labels, and that label describes the facial expression of the region. The user is thus provided with a single, definite facial expression recognition result, the user's need for such a result is met, and the user experience is improved.
Preferably, the face region to be recognized can be obtained in the following ways.

Mode one: recognize the face in an already existing image to obtain the face region to be recognized.

Here, the already existing image is an image the user has shot and stored in the user terminal.

Mode two: recognize the face in an image collected in real time by a camera to obtain the face region to be recognized.

Here, the image collected in real time by the camera is an image acquired after the camera on the user terminal has been turned on but before the user presses the shutter button. That is, once the user turns on the camera, as long as a face region to be recognized appears within the camera's field of view, the face region can be obtained and facial expression recognition performed.

In both mode one and mode two, existing face recognition algorithms can be used to recognize the face in the image; the details are not elaborated here.
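The embodiment leaves face detection to existing algorithms; once a detector has produced bounding boxes, extracting each face region to be recognized from the image is a simple crop. A minimal sketch, assuming an `(x, y, w, h)` box format and an image represented as a grid of rows (both assumptions, not specified by the text):

```python
def crop_face_regions(image, boxes):
    """Extract one sub-image per detected face.

    image: 2-D grid of pixels (list of rows); boxes: (x, y, w, h) tuples
    as an existing face detector might return them. Each returned region
    contains exactly one face, per the embodiment."""
    regions = []
    for x, y, w, h in boxes:
        regions.append([row[x:x + w] for row in image[y:y + h]])
    return regions

# A 4x4 toy "image" with one 2x2 face box at (1, 1).
img = [[r * 10 + c for c in range(4)] for r in range(4)]
faces = crop_face_regions(img, [(1, 1, 2, 2)])
# faces[0] == [[11, 12], [21, 22]]
```

The same cropping works whether the boxes come from a stored image (mode one) or from frames captured live by the camera (mode two).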
As a preferred implementation, after step 203 the facial expression recognition method provided by this embodiment of the present invention further includes:

Step 204: display the expression label of the face region to be recognized at a set position.

In this step, the expression label of the face region to be recognized is displayed at a set position on the user terminal screen while the screen shows the image containing that face region. The set position can be chosen according to the actual application scenario, preferably a position that does not block the face region to be recognized, for example a position in the image where there is no face region to be recognized.
In a concrete implementation, one image may contain one face region to be recognized or several. When there is one face region to be recognized, its expression label is displayed at the set position. When there are several, the expression label displayed at the set position can be chosen as follows: select the expression label of the face region to be recognized in an intermediate position and display it at the set position, or select the expression label of the clearest face region to be recognized and display it at the set position.
By default, when there are several face regions to be recognized, the expression label of the face region in an intermediate position can be displayed at the set position, or the expression label of the clearest face region can be displayed there; the clearest face region to be recognized is the one with the highest degree of recognizability in the image.

Other ways of choosing one face region's expression label can also be used; for example, select for display a face region whose corresponding expression label is "happiness", or randomly select one face region to be recognized from the several and display its expression label.
When there are several face regions to be recognized, the facial expression recognition method provided by this embodiment of the present invention recognizes and saves the expression label of each face region in the image. When an expression label is displayed at the set position, the label of one of the face regions is selected and shown there, and the face region corresponding to the displayed label is indicated. For example, rectangular frames of different colors can distinguish the face region corresponding to the currently displayed label: each face region to be recognized is enclosed in a rectangle, the rectangle of the face region whose label is currently displayed is red, and the rectangles of the other face regions are white. The colors can be chosen freely according to the actual application scenario, without limitation here, as long as the rectangle color of the face region corresponding to the currently displayed label differs from that of the other face regions. The face region corresponding to the currently displayed label can also be indicated with an arrow, for example.
When there are several face regions to be recognized, after the expression label of one of them has been displayed at the set position, the expression label shown at the set position can preferably be switched using the procedure of Fig. 3:

Step 301: when it is determined that the user has selected any other face region to be recognized, obtain the expression label of that other face region.

Step 302: display the expression label of that other face region at the set position.

Here, the other face region to be recognized is any face region other than the one whose expression label is currently displayed at the set position.

In a concrete implementation, when the user selects any other face region to be recognized, that region's expression label is obtained and displayed at the set position. This makes it convenient for the user to switch the expression label shown at the set position, so that the user can check the expression label of each face region to be recognized as needed.
Of course, while the expression label is displayed, the corresponding expression parameters and their values can also be shown; for example, the expression label can be overlaid on the expression parameters and their values.
In a concrete implementation, the procedure of Fig. 4 can be used to determine the expression label of the face region to be recognized:

Step 401: for each pre-stored expression parameter group, calculate the square of the difference between each expression parameter in that group and the corresponding expression parameter in the determined expression parameter group of the face region to be recognized, obtaining the set of squares corresponding to that group.

In this step, for each expression parameter group in the pre-stored correspondence between expression parameter groups and expression labels, the square of the difference between each parameter in the group and the corresponding parameter in the determined group is calculated. For example, suppose the stored expression parameter group is (a, b, c), where a, b, c are parameter values, and the determined expression parameter group of the face region to be recognized is (a', b', c'), where a', b', c' are parameter values. The squares of the differences between corresponding parameters are (a-a')^2, (b-b')^2, and (c-c')^2, and the resulting set of squares for the group is {(a-a')^2, (b-b')^2, (c-c')^2}. Note that in a concrete implementation the elements of the obtained set of squares are specific numeric values.
Step 402: calculate the sum of the elements in the set of squares corresponding to each expression parameter group, as the similarity between that group and the determined expression parameter group of the face region to be recognized.

The larger the sum of the elements in a group's set of squares, the smaller the similarity between that group and the determined expression parameter group; conversely, the smaller the sum, the greater the similarity.

Continuing the example above, the sum of the elements in the group's set of squares is (a-a')^2 + (b-b')^2 + (c-c')^2.
Step 403: among the pre-stored expression parameter groups, determine the one with the greatest similarity to the determined expression parameter group of the face region to be recognized.

In this step, the pre-stored expression parameter group whose set of squares has the smallest sum is taken as the group with the greatest similarity to the determined expression parameter group.

Step 404: according to the pre-stored correspondence between expression parameter groups and expression labels, determine the expression label corresponding to the expression parameter group with the greatest similarity to the determined expression parameter group of the face region to be recognized.

Step 405: take the expression label corresponding to that most similar expression parameter group as the expression label corresponding to the determined expression parameter group of the face region to be recognized.
With the embodiment provided by Fig. 4, the expression label determined for the expression parameter group of the face region to be recognized is more accurate. The approach of Fig. 4 is only one preferred implementation; other ways of determining the expression label corresponding to the determined expression parameter group can also be used, without limitation here. For example, among the pre-stored expression parameter groups, the one sharing the most identical parameter values with the determined expression parameter group of the face region to be recognized can be looked up, and the expression label of the found group taken as the expression label corresponding to the determined expression parameter group.
Example devices
Having described the facial expression recognition method of exemplary embodiments of the present invention, the facial expression recognition device of exemplary embodiments of the present invention is described next with reference to Fig. 5.

Fig. 5 is a structural diagram of a facial expression recognition device provided by an embodiment of the present invention. As shown in Fig. 5, the device may include the following modules:
Module 501 is obtained, for obtaining human face region to be identified;
Parameter determination module 502, for determining the corresponding expression parameter group of the human face region to be identified;
Expression determining module 503, for the corresponding relationship of expression parameter group and expression label according to the pre-stored data, and
The corresponding expression parameter group of the human face region to be identified of the determination, determines the expression label of the human face region to be identified,
In, the expression label is for describing human face expression.
Preferably, the acquisition module 501 is specifically configured to identify a face in an already existing image to obtain the human face region to be identified.
Alternatively, the acquisition module 501 is specifically configured to identify a face in an image acquired in real time by a camera to obtain the human face region to be identified.
In some implementations of this embodiment, optionally, the facial expression recognition equipment further includes a display module 504, for displaying the expression label of the human face region to be identified at a set position.
Preferably, the display module 504 is specifically configured to: when there are multiple human face regions to be identified, select the expression label of the human face region in the middle position for display at the set position, or select the expression label of the clearest human face region to be identified for display at the set position.
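The two selection options above, the most centrally positioned face or the clearest face, can be sketched as follows. The function names, the bounding-box layout, and the assumption that per-face sharpness scores are precomputed (e.g., as the variance of a Laplacian-filtered crop) are all illustrative, not from the patent.

```python
def pick_center_face(face_boxes, image_size):
    """face_boxes: list of (x, y, w, h); image_size: (width, height).
    Returns the index of the box whose center is nearest the image center."""
    cx, cy = image_size[0] / 2, image_size[1] / 2

    def dist2(box):
        x, y, w, h = box
        bx, by = x + w / 2, y + h / 2
        return (bx - cx) ** 2 + (by - cy) ** 2

    return min(range(len(face_boxes)), key=lambda i: dist2(face_boxes[i]))

def pick_sharpest_face(sharpness_scores):
    """Returns the index of the face region with the highest sharpness score."""
    return max(range(len(sharpness_scores)), key=lambda i: sharpness_scores[i])

boxes = [(10, 10, 50, 50), (300, 220, 60, 60), (500, 20, 40, 40)]
# The middle box is centered at (330, 250), nearest the image center (320, 240).
print(pick_center_face(boxes, (640, 480)))  # 1
```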
Preferably, the display module 504 is further configured to: upon determining that the user has selected any other human face region to be identified, obtain the expression label of that other human face region to be identified, and display the expression label of that other human face region to be identified at the set position.
Preferably, the expression determining module 503 includes:
a first computing unit 5031, for calculating, for each pre-stored expression parameter group, the square of the difference between each expression parameter in the expression parameter group and the value of the corresponding expression parameter in the determined expression parameter group of the human face region to be identified, to obtain a set of squares corresponding to the expression parameter group;
a second computing unit 5032, for calculating the sum of the elements in the set of squares corresponding to the expression parameter group, as the similarity between the expression parameter group and the determined expression parameter group of the human face region to be identified;
a first determination unit 5033, for determining, among the pre-stored expression parameter groups, the expression parameter group having the greatest similarity to the determined expression parameter group of the human face region to be identified;
a second determination unit 5034, for determining, according to the pre-stored correspondence between expression parameter groups and expression labels, the expression label corresponding to the expression parameter group having the greatest similarity to the determined expression parameter group of the human face region to be identified; and
a third determination unit 5035, for taking the expression label corresponding to the expression parameter group having the greatest similarity to the determined expression parameter group of the human face region to be identified as the expression label corresponding to the determined expression parameter group of the human face region to be identified.
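The computation carried out by units 5031–5035 can be sketched as follows: per-parameter squared differences (unit 5031), their sum as the similarity score (unit 5032), then selecting the best-scoring pre-stored group and looking up its label (units 5033–5035). Note that a sum of squared differences is a distance, so this sketch treats the smallest sum as the greatest similarity, which is one plausible reading of the translated text; the names and the list-based vector layout are assumptions.

```python
def best_matching_label(query, stored_groups):
    """query: list of expression component values;
    stored_groups: dict mapping expression label -> list of component values."""

    def ssd(group):
        # Unit 5031: squared per-parameter differences; unit 5032: their sum.
        return sum((a - b) ** 2 for a, b in zip(group, query))

    # Units 5033-5035: pick the closest stored group and return its label.
    return min(stored_groups, key=lambda label: ssd(stored_groups[label]))

stored = {"happy": [0.9, 0.0, 0.1], "sad": [0.1, 0.8, 0.1], "angry": [0.0, 0.1, 0.9]}
print(best_matching_label([0.8, 0.1, 0.1], stored))  # happy
```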
Preferably, the expression label includes Chinese text and English text corresponding to the Chinese text.
It should be noted that although several units of the facial expression recognition equipment are mentioned in the detailed description above, this division is merely exemplary and not mandatory. In fact, according to embodiments of the present invention, the features and functions of two or more of the units described above may be embodied in a single unit. Conversely, the features and functions of a single unit described above may be further divided and embodied by multiple units.
In addition, although the operations of the method of the present invention are described in a particular order in the accompanying drawings, this does not require or imply that these operations must be executed in that particular order, or that all of the illustrated operations must be executed to achieve the desired result. Additionally or alternatively, certain steps may be omitted, multiple steps may be merged into one step for execution, and/or one step may be decomposed into multiple steps for execution.
Although the spirit and principles of the present invention have been described with reference to several preferred embodiments, it should be appreciated that the present invention is not limited to the specific embodiments disclosed, and the division into various aspects does not mean that features in these aspects cannot be combined to advantage; this division is merely for convenience of statement. The present invention is intended to cover the various modifications and equivalent arrangements included within the spirit and scope of the appended claims.
Claims (12)
1. A facial expression recognizing method, comprising:
obtaining a human face region to be identified;
determining an expression parameter group corresponding to the human face region to be identified, wherein the expression parameters in the expression parameter group include component values obtained by mapping the face to be identified in the human face region to be identified onto various preset expressions;
determining an expression label of the human face region to be identified according to a pre-stored correspondence between expression parameter groups and expression labels and the determined expression parameter group corresponding to the human face region to be identified, wherein the expression label is used for describing a human facial expression; and
displaying the expression label of the human face region to be identified at a set position;
wherein determining the expression label of the human face region to be identified comprises:
for each pre-stored expression parameter group, calculating the square of the difference between each expression parameter in the expression parameter group and the value of the corresponding expression parameter in the determined expression parameter group of the human face region to be identified, to obtain a set of squares corresponding to the expression parameter group;
calculating the sum of the elements in the set of squares corresponding to the expression parameter group, as the similarity between the expression parameter group and the determined expression parameter group of the human face region to be identified;
determining, among the pre-stored expression parameter groups, the expression parameter group having the greatest similarity to the determined expression parameter group of the human face region to be identified;
determining, according to the pre-stored correspondence between expression parameter groups and expression labels, the expression label corresponding to the expression parameter group having the greatest similarity to the determined expression parameter group of the human face region to be identified; and
taking the expression label corresponding to the expression parameter group having the greatest similarity to the determined expression parameter group of the human face region to be identified as the expression label corresponding to the determined expression parameter group of the human face region to be identified.
2. The method according to claim 1, wherein obtaining the human face region to be identified comprises:
identifying a face in an already existing image to obtain the human face region to be identified.
3. The method according to claim 1, wherein obtaining the human face region to be identified comprises:
identifying a face in an image acquired in real time by a camera to obtain the human face region to be identified.
4. The method according to claim 1, wherein displaying the expression label of the human face region to be identified at the set position comprises:
when there are multiple human face regions to be identified, selecting the expression label of the human face region in the middle position for display at the set position, or selecting the expression label of the clearest human face region to be identified for display at the set position.
5. The method according to claim 4, further comprising:
upon determining that the user has selected any other human face region to be identified, obtaining the expression label of the other human face region to be identified; and
displaying the expression label of the other human face region to be identified at the set position.
6. The method according to claim 1, wherein the expression label includes Chinese text and English text corresponding to the Chinese text.
7. Facial expression recognition equipment, comprising:
an acquisition module, for obtaining a human face region to be identified;
a parameter determination module, for determining an expression parameter group corresponding to the human face region to be identified, wherein the expression parameters in the expression parameter group include component values obtained by mapping the face to be identified in the human face region to be identified onto various preset expressions;
an expression determining module, for determining an expression label of the human face region to be identified according to a pre-stored correspondence between expression parameter groups and expression labels and the determined expression parameter group corresponding to the human face region to be identified, wherein the expression label is used for describing a human facial expression; and
a display module, for displaying the expression label of the human face region to be identified at a set position;
wherein the expression determining module comprises:
a first computing unit, for calculating, for each pre-stored expression parameter group, the square of the difference between each expression parameter in the expression parameter group and the value of the corresponding expression parameter in the determined expression parameter group of the human face region to be identified, to obtain a set of squares corresponding to the expression parameter group;
a second computing unit, for calculating the sum of the elements in the set of squares corresponding to the expression parameter group, as the similarity between the expression parameter group and the determined expression parameter group of the human face region to be identified;
a first determination unit, for determining, among the pre-stored expression parameter groups, the expression parameter group having the greatest similarity to the determined expression parameter group of the human face region to be identified;
a second determination unit, for determining, according to the pre-stored correspondence between expression parameter groups and expression labels, the expression label corresponding to the expression parameter group having the greatest similarity to the determined expression parameter group of the human face region to be identified; and
a third determination unit, for taking the expression label corresponding to the expression parameter group having the greatest similarity to the determined expression parameter group of the human face region to be identified as the expression label corresponding to the determined expression parameter group of the human face region to be identified.
8. The equipment according to claim 7, wherein the acquisition module is specifically configured to:
identify a face in an already existing image to obtain the human face region to be identified.
9. The equipment according to claim 7, wherein the acquisition module is specifically configured to:
identify a face in an image acquired in real time by a camera to obtain the human face region to be identified.
10. The equipment according to claim 7, wherein the display module is specifically configured to:
when there are multiple human face regions to be identified, select the expression label of the human face region in the middle position for display at the set position, or select the expression label of the clearest human face region to be identified for display at the set position.
11. The equipment according to claim 10, wherein the display module is further configured to:
upon determining that the user has selected any other human face region to be identified, obtain the expression label of the other human face region to be identified, and display the expression label of the other human face region to be identified at the set position.
12. The equipment according to claim 7, wherein the expression label includes Chinese text and English text corresponding to the Chinese text.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201610631259.5A CN106228145B (en) | 2016-08-04 | 2016-08-04 | A kind of facial expression recognizing method and equipment |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201610631259.5A CN106228145B (en) | 2016-08-04 | 2016-08-04 | A kind of facial expression recognizing method and equipment |
Publications (2)
Publication Number | Publication Date |
---|---|
CN106228145A CN106228145A (en) | 2016-12-14 |
CN106228145B true CN106228145B (en) | 2019-09-03 |
Family
ID=57546894
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201610631259.5A Active CN106228145B (en) | 2016-08-04 | 2016-08-04 | A kind of facial expression recognizing method and equipment |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN106228145B (en) |
Families Citing this family (7)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN107219917A (en) * | 2017-04-28 | 2017-09-29 | 北京百度网讯科技有限公司 | Emoticon generation method and device, computer equipment and computer-readable recording medium |
CN107633203A (en) * | 2017-08-17 | 2018-01-26 | 平安科技(深圳)有限公司 | Facial emotions recognition methods, device and storage medium |
CN108942919B (en) * | 2018-05-28 | 2021-03-30 | 北京光年无限科技有限公司 | Interaction method and system based on virtual human |
CN108875633B (en) * | 2018-06-19 | 2022-02-08 | 北京旷视科技有限公司 | Expression detection and expression driving method, device and system and storage medium |
CN111079472A (en) * | 2018-10-19 | 2020-04-28 | 北京微播视界科技有限公司 | Image comparison method and device |
CN109948426A (en) * | 2019-01-23 | 2019-06-28 | 深圳壹账通智能科技有限公司 | Application program method of adjustment, device, electronic equipment and storage medium |
CN109753950B (en) * | 2019-02-11 | 2020-08-04 | 河北工业大学 | Dynamic facial expression recognition method |
Citations (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN1602620A (en) * | 2001-12-11 | 2005-03-30 | 皇家飞利浦电子股份有限公司 | Mood based virtual photo album |
CN103258204A (en) * | 2012-02-21 | 2013-08-21 | 中国科学院心理研究所 | Automatic micro-expression recognition method based on Gabor features and edge orientation histogram (EOH) features |
CN103530313A (en) * | 2013-07-08 | 2014-01-22 | 北京百纳威尔科技有限公司 | Searching method and device of application information |
CN104063683A (en) * | 2014-06-06 | 2014-09-24 | 北京搜狗科技发展有限公司 | Expression input method and device based on face identification |
Family Cites Families (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP2007148872A (en) * | 2005-11-29 | 2007-06-14 | Mitsubishi Electric Corp | Image authentication apparatus |
- 2016-08-04: application CN201610631259.5A filed in China (CN); granted as patent CN106228145B (status: Active)
Also Published As
Publication number | Publication date |
---|---|
CN106228145A (en) | 2016-12-14 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN106228145B (en) | A kind of facial expression recognizing method and equipment | |
Wang et al. | A deep network solution for attention and aesthetics aware photo cropping | |
US10372226B2 (en) | Visual language for human computer interfaces | |
Deng et al. | Image aesthetic assessment: An experimental survey | |
US11270099B2 (en) | Method and apparatus for generating facial feature | |
CN107844744A (en) | With reference to the face identification method, device and storage medium of depth information | |
KR20130099317A (en) | System for implementing interactive augmented reality and method for the same | |
Doman et al. | Video CooKing: Towards the synthesis of multimedia cooking recipes | |
CN111401318B (en) | Action recognition method and device | |
CN106203286A (en) | The content acquisition method of a kind of augmented reality, device and mobile terminal | |
US9922241B2 (en) | Gesture recognition method, an apparatus and a computer program for the same | |
WO2018076484A1 (en) | Method for tracking pinched fingertips based on video | |
CN106778627A (en) | Detect method, device and the mobile terminal of face face value | |
KR101344851B1 (en) | Device and Method for Processing Image | |
Liu et al. | Attack-Agnostic Deep Face Anti-Spoofing | |
Patel | Point Pattern Matching algorithm for recognition of 36 ASL gestures | |
CN107357424B (en) | Gesture operation recognition method and device and computer readable storage medium | |
CN113743160A (en) | Method, apparatus and storage medium for biopsy | |
JP2017204280A (en) | Method, system and apparatus for selecting video frame | |
Saman et al. | Image Processing Algorithm for Appearance-Based Gesture Recognition | |
CN107479725B (en) | Character input method and device, virtual keyboard, electronic equipment and storage medium | |
AU2015258346A1 (en) | Method and system of transitioning between images | |
Nguyen et al. | Hand posture recognition using Kernel Descriptor | |
CN109840948A (en) | The put-on method and device of target object based on augmented reality | |
CN115836319A (en) | Image processing method and device |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
C06 | Publication | ||
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||