CN106257489A - Expression recognition method and system - Google Patents


Info

Publication number
CN106257489A
CN106257489A (application CN201610547743.XA)
Authority
CN
China
Prior art keywords
expression
facial image
subimage
score value
convolutional neural network
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201610547743.XA
Other languages
Chinese (zh)
Inventor
公绪超
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
LeTV Holding Beijing Co Ltd
LeTV Cloud Computing Co Ltd
Original Assignee
LeTV Holding Beijing Co Ltd
LeTV Cloud Computing Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by LeTV Holding Beijing Co Ltd and LeTV Cloud Computing Co Ltd
Priority to CN201610547743.XA
Publication of CN106257489A
Legal status: Pending

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00: Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10: Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16: Human faces, e.g. facial parts, sketches or expressions
    • G06V40/174: Facial expression recognition
    • G06V40/175: Static expression
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06N: COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00: Computing arrangements based on biological models
    • G06N3/02: Neural networks
    • G06N3/04: Architecture, e.g. interconnection topology

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Health & Medical Sciences (AREA)
  • Health & Medical Sciences (AREA)
  • General Physics & Mathematics (AREA)
  • Biomedical Technology (AREA)
  • Mathematical Physics (AREA)
  • Data Mining & Analysis (AREA)
  • Evolutionary Computation (AREA)
  • Biophysics (AREA)
  • Molecular Biology (AREA)
  • Computing Systems (AREA)
  • General Engineering & Computer Science (AREA)
  • Artificial Intelligence (AREA)
  • Computational Linguistics (AREA)
  • Software Systems (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Oral & Maxillofacial Surgery (AREA)
  • Human Computer Interaction (AREA)
  • Multimedia (AREA)
  • Image Analysis (AREA)
  • Image Processing (AREA)

Abstract

Embodiments of the invention disclose an expression recognition method, comprising: inputting a facial image to be recognized into a global deep convolutional neural network to determine a first expression and a first score of the facial image, the global deep convolutional neural network being obtained by deep learning on reference facial images having different expressions; dividing the facial image into multiple sub-images; comparing feature information of the multiple sub-images with reference feature information to determine a second expression and a second score of the facial image, the reference feature information being determined from multiple reference sub-images of the reference facial images having different expressions; and determining the expression of the facial image according to the first expression and first score and the second expression and second score. A corresponding expression recognition system is also provided. The method and system of the embodiments of the present invention can recognize facial expressions more quickly and accurately.

Description

Expression recognition method and system
Technical field
The present invention relates to the technical field of image recognition, and in particular to an expression recognition method and system.
Background art
Image-based facial expression recognition is of great value in human-computer interaction, human emotion judgment, and other intelligent fields.
Prior-art methods for recognizing facial expressions are limited to performing expression recognition on mostly static pictures, and accurate recognition further requires that the picture be a frontal face image. If the picture to be recognized is captured while the person is moving, it is very likely to be blurred or unclear, or hard to distinguish because of ambient light and other factors; the captured picture may also show a profile or otherwise non-frontal face. In such cases the person's facial expression cannot be recognized accurately, which severely limits the application of facial expression recognition in real life.
Therefore, there is an urgent need for a method that can recognize a person's facial expression under any face-pose scene (the face-pose scenes at least include frontal face, left profile, right profile, left oblique, right oblique, head down, and head up) and under various external imaging scenes (blur, strong light, low light).
Summary of the invention
Embodiments of the present invention provide an expression recognition method and system that solve at least one of the above technical problems.
In one aspect, an embodiment of the present invention provides an expression recognition method, comprising:
inputting a facial image to be recognized into a global deep convolutional neural network to determine a first expression and a first score of the facial image, the global deep convolutional neural network being obtained by deep learning on reference facial images having different expressions;
dividing the facial image into multiple sub-images;
comparing feature information of the multiple sub-images with reference feature information to determine a second expression and a second score of the facial image, the reference feature information being determined from multiple reference sub-images of the reference facial images having different expressions; and
determining the expression of the facial image according to at least the first expression and first score and the second expression and second score.
In another aspect, an embodiment of the present invention further provides an expression recognition system, comprising:
a global expression recognition module, configured to input a facial image to be recognized into a global deep convolutional neural network to determine a first expression and a first score of the facial image, the global deep convolutional neural network being obtained by deep learning on reference facial images having different expressions;
a facial image division module, configured to divide the facial image into multiple sub-images;
a local expression recognition module, configured to compare feature information of the multiple sub-images with reference feature information to determine a second expression and a second score of the facial image, the reference feature information being determined from multiple reference sub-images of reference facial images having different expressions; and
a target expression determination module, configured to determine the expression of the facial image according to at least the first expression and first score and the second expression and second score.
In the expression recognition method and system of the embodiments of the present invention, on the one hand the facial image is considered as a whole to determine a possible first expression of the facial image and a first score that the facial image has that expression, compensating for the poor performance of local recognition in blurred, strong-light, or low-light scenes; on the other hand, by dividing the facial image into multiple sub-images and judging local features, a possible second expression of the facial image and a second score that the facial image has that expression are determined, compensating for the poor performance of global recognition under face poses such as profile and oblique views. The determined first expression and first score and second expression and second score are then considered together to determine the current expression of the facial image. The method and system of the embodiments of the present invention thus make full use of both the global and the local characteristics of the face, achieve high judgment accuracy, and can adapt to multiple face poses and to multiple imaging scenes such as blur, strong light, and low light.
Brief description of the drawings
To explain the technical solutions of the embodiments of the present invention more clearly, the drawings needed in describing the embodiments are briefly introduced below. Obviously, the drawings described below illustrate only some embodiments of the present invention; those of ordinary skill in the art can derive other drawings from them without creative effort.
Fig. 1 is a flowchart of one embodiment of the expression recognition method of the present invention;
Fig. 2 is a flowchart of another embodiment of the expression recognition method of the present invention;
Fig. 3 is a flowchart of yet another embodiment of the expression recognition method of the present invention;
Fig. 4 is a schematic block diagram of one embodiment of the expression recognition system of the present invention;
Fig. 5 is a schematic block diagram of one embodiment of the facial image division module in the expression recognition system of the present invention;
Fig. 6 is a schematic block diagram of one embodiment of the local expression recognition module in the expression recognition system of the present invention;
Fig. 7 is a schematic structural diagram of one embodiment of a user equipment of the present invention.
Detailed description of the embodiments
To make the purpose, technical solutions, and advantages of the embodiments of the present invention clearer, the technical solutions in the embodiments of the present invention are described clearly and completely below with reference to the accompanying drawings. Obviously, the described embodiments are only some, not all, of the embodiments of the present invention. Based on the embodiments of the present invention, all other embodiments obtained by those of ordinary skill in the art without creative effort fall within the protection scope of the present invention.
It should be noted that, unless they conflict, the embodiments of the application and the features in the embodiments may be combined with one another.
The present invention can be used in numerous general-purpose or special-purpose computing system environments or configurations, such as: personal computers, server computers, handheld or portable devices, laptop devices, multiprocessor systems, microprocessor-based systems, set-top boxes, programmable consumer electronics, network PCs, minicomputers, mainframe computers, and distributed computing environments including any of the above systems or devices.
The present invention may be described in the general context of computer-executable instructions, such as program modules. Generally, program modules include routines, programs, objects, components, data structures, and the like that perform particular tasks or implement particular abstract data types. The present invention may also be practiced in distributed computing environments, in which tasks are performed by remote processing devices connected through a communication network. In a distributed computing environment, program modules may be located in local and remote computer storage media, including storage devices.
In the present invention, "component", "device", "system", and the like refer to a computer-related entity, such as hardware, a combination of hardware and software, software, or software in execution. In particular, a component may be, but is not limited to, a process running on a processor, a processor, an object, an executable component, a thread of execution, a program, and/or a computer. Further, an application or script running on a server, or the server itself, may be a component. One or more components may reside within a process and/or thread of execution, and a component may be localized on one computer and/or distributed between two or more computers and run from various computer-readable media. Components may also communicate by way of local and/or remote processes according to a signal having one or more data packets, for example a signal from data interacting with another component in a local system, in a distributed system, and/or across a network such as the Internet.
Finally, it should also be noted that, in this document, relational terms such as "first" and "second" are used merely to distinguish one entity or operation from another, and do not necessarily require or imply any such actual relationship or order between the entities or operations. Moreover, the terms "comprise" and "include" cover a non-exclusive inclusion, so that a process, method, article, or device that comprises a list of elements includes not only those elements but also other elements not expressly listed, or elements inherent to such a process, method, article, or device. Without further limitation, an element defined by the phrase "comprising a ..." does not exclude the presence of additional identical elements in the process, method, article, or device that comprises the element.
Image-based facial expression recognition is of great value in human-computer interaction, human emotion judgment, and other intelligent fields. There is already much image-based expression recognition research, but it does not consider combining global and local characteristics, so its applicable scenes are limited: for example, global recognition performs poorly under face poses such as profile and oblique views, while local recognition performs poorly in blurred, strong-light, or low-light scenes.
To adapt to as many face-pose scenes and external imaging scenes as possible, the global face image and local images such as the eyes, mouth, and eyebrows are discriminated separately: global expression discrimination and local feature extraction are performed with deep convolutional neural networks, and the results are finally fused with a support vector machine classification method, yielding an expression judgment result that accounts for face pose. The method makes full use of both the global and the local characteristics of the face, achieves high judgment accuracy, and can adapt to multiple face poses and to multiple imaging scenes such as blur, strong light, and low light.
As shown in Fig. 1, the expression recognition method of one embodiment of the present invention comprises:
S11: inputting a facial image to be recognized into a global deep convolutional neural network to determine a first expression and a first score of the facial image, the global deep convolutional neural network being obtained by deep learning on reference facial images having different expressions;
S12: dividing the facial image into multiple sub-images;
S13: comparing feature information of the multiple sub-images with reference feature information to determine a second expression and a second score of the facial image, the reference feature information being determined from multiple reference sub-images of the reference facial images having different expressions;
S14: determining the expression of the facial image according to at least the first expression and first score and the second expression and second score.
In the expression recognition method of the embodiment of the present invention, on the one hand the facial image is considered as a whole to determine a possible first expression of the facial image and a first score that the facial image has that expression, compensating for the poor performance of local recognition in blurred, strong-light, or low-light scenes; on the other hand, by dividing the facial image into multiple sub-images and judging local features, a possible second expression of the facial image and a second score that the facial image has that expression are determined, compensating for the poor performance of global recognition under face poses such as profile and oblique views. The determined first expression and first score and second expression and second score are then considered together to determine the current expression of the facial image. The method of the embodiment of the present invention thus makes full use of both the global and the local characteristics of the face, achieves high judgment accuracy, and can adapt to multiple face poses and to multiple imaging scenes such as blur, strong light, and low light.
In this embodiment, in step S11 the facial image to be recognized may be a directly input photograph of a face (for example, obtained directly by a camera), or may be extracted by a HAAR face-detection method from an input picture containing other background. The global deep convolutional neural network is formed in advance by deep learning: its input is a massive set of facial images with different expressions, and its output is the expression the facial image has together with a score given for the probability that the facial image has that expression. Therefore, after a facial image is obtained, directly inputting it into the global deep convolutional neural network yields the expression this facial image is likely to have and the score given for that expression. The expressions of a facial image at least include one or more of: happy, angry, fearful, sad, surprised, grieved, and gentle.
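As an illustration of how the output of step S11 might be consumed, here is a minimal sketch that picks the first expression and first score from a softmax-style probability vector. The `EXPRESSIONS` list order and the function name are assumptions for illustration, not taken from the patent.

```python
import numpy as np

# Expression labels as listed in the embodiment (the order is an assumption)
EXPRESSIONS = ["happy", "angry", "fearful", "sad", "surprised", "grieved", "gentle"]

def first_expression(probabilities):
    """Return (expression, score): the top class and its probability
    from the global deep CNN's output vector."""
    probs = np.asarray(probabilities, dtype=float)
    idx = int(np.argmax(probs))
    return EXPRESSIONS[idx], float(probs[idx])

# A network that is 89% confident the face is happy
expr, score = first_expression([0.89, 0.02, 0.01, 0.03, 0.02, 0.02, 0.01])
```

The score returned here plays the role of the "first score" that is later fused with the local branch's "second score".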
In this embodiment, step S14 may directly compare the magnitudes of the first score and the second score and determine that the expression corresponding to the larger value is the expression of the facial image; or the first score and the second score may be fused by an SVM (Support Vector Machine) method to determine the current expression of the facial image.
SVM belongs to the category of data classification, an important topic in data mining. Data classification means training a classifier according to certain principles on the basis of labeled training data, and then using the classifier to judge the class of unlabeled data.
A support vector machine is a classification method based on classification boundaries. In this embodiment it is trained on the first scores, second scores, and corresponding expressions of a large number of facial images with different expressions to finally determine an SVM classifier. Its basic principle is: the training data (first score, second score) are points distributed on a two-dimensional plane, gathered in different regions according to their class (expression class); the classification boundaries are determined from the large number of facial images with different expressions, thereby determining the SVM classifier. When the first score and second score of a facial image to be recognized are input into the SVM classifier, the expression of the facial image is determined according to the class of the boundary region into which the (first score, second score) point falls.
The advantage of the SVM method is that when the first score and the second score are close to each other but correspond to different expressions, directly taking the expression corresponding to the larger score may produce a misjudgment. For example, suppose the expression corresponding to the first score is smile and the expression corresponding to the second score is grieved, with the first score 0.89 and the second score 0.90. Judged only by the values, the grieved score is marginally higher, but directly determining the expression of the facial image to be grieved may very well be a wrong judgment. The SVM method is therefore used to fuse the first and second scores: because the SVM classifier used for fusion is determined by learning from a large number of facial images with different expressions, the accuracy of the final judgment result is guaranteed.
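The score-fusion step can be sketched with a linear SVM over two-dimensional (first score, second score) points. The toy training data, labels, and class ids below are invented for illustration and are not from the patent; a real system would train on the scores of many labeled faces.

```python
import numpy as np
from sklearn.svm import SVC

# Toy training set: rows are (first score, second score) pairs;
# labels are expression ids (0 = smile, 1 = grieved in this sketch).
X = np.array([[0.90, 0.10], [0.85, 0.20], [0.80, 0.15],
              [0.10, 0.90], [0.20, 0.85], [0.15, 0.80]])
y = np.array([0, 0, 0, 1, 1, 1])

clf = SVC(kernel="linear")   # classification boundary in the 2-D score plane
clf.fit(X, y)

# A clear-cut case: strong global score, weak local score -> class 0 (smile)
pred = clf.predict([[0.88, 0.12]])
```

The point of the fusion is that the decision depends on where the score pair falls relative to a learned boundary, not on which single score is larger.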
In some embodiments, before the facial image is input into the global deep convolutional neural network, the obtained facial image is also corrected so that the expression of the facial image can be recognized more accurately. For example, whether the current facial image needs correction is judged by detecting whether the left and right eyes in the facial image lie on the same horizontal line; when they do not, the whole facial image is rotated to realize the correction of the facial image.
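The eye-based correction amounts to estimating the in-plane (roll) angle from the two eye centres and rotating the image by its negative. A minimal sketch, assuming image coordinates with x to the right and y downward:

```python
import math

def roll_angle(left_eye, right_eye):
    """Degrees by which the line through the eyes deviates from horizontal;
    rotating the image by -roll_angle(...) levels the eyes."""
    dx = right_eye[0] - left_eye[0]
    dy = right_eye[1] - left_eye[1]
    return math.degrees(math.atan2(dy, dx))

# Eyes already level: no correction needed
angle = roll_angle((30, 40), (70, 40))  # 0.0
```

The actual pixel rotation would then be done by the image library in use; only the angle estimation is shown here.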
As shown in Fig. 2, in some embodiments step S12, dividing the facial image into multiple sub-images, comprises:
S21: locating key points of the facial image;
S22: dividing the facial image into multiple sub-images according to the located key points of the facial image.
This embodiment uses the SDM (Supervised Descent Method) to locate the key points of the facial image and partitions local regions according to the key points. For example, according to the obtained key-point positions, five regions are marked off: left eye, right eye, left eyebrow, right eyebrow, and mouth, and the images of these five regions are taken as the sub-images.
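Step S22 can be sketched as cropping a padded bounding box around each group of key points. The grouping, the margin value, and the function name are assumptions for illustration; SDM itself is not reimplemented here.

```python
def region_boxes(grouped_keypoints, margin=8):
    """Crop box (x0, y0, x1, y1) per facial part, padded by a pixel margin."""
    boxes = {}
    for part, points in grouped_keypoints.items():
        xs = [x for x, _ in points]
        ys = [y for _, y in points]
        boxes[part] = (min(xs) - margin, min(ys) - margin,
                       max(xs) + margin, max(ys) + margin)
    return boxes

# Five such groups (both eyes, both eyebrows, mouth) would yield the five
# sub-images; two groups are shown for brevity.
boxes = region_boxes({"left_eye": [(30, 40), (45, 38), (40, 44)],
                      "mouth": [(35, 80), (60, 78), (48, 88)]})
```

Each box would then be used to slice the face image into the corresponding sub-image.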
As shown in Fig. 3, in some embodiments, comparing the feature information of the multiple sub-images with the reference feature information stored in an expression feature library to determine the second expression and second score of the facial image comprises:
S31: inputting each of the multiple sub-images into a local deep convolutional neural network;
S32: obtaining the feature information corresponding to the multiple sub-images from the fully connected layer of the local deep convolutional neural network;
S33: calculating similarity values between the feature information of the multiple sub-images and the reference feature information stored in the expression feature library;
S34: determining that the expression corresponding to the reference feature information with the largest similarity value is the second expression;
S35: determining that the similarity value of the second expression is the second score.
In this embodiment, the local deep convolutional neural network is determined by deep learning on the sub-images of a massive set of facial images with different expressions, and the reference feature information is determined from the feature values obtained from the fully connected layer of the local deep convolutional neural network after the reference facial images are input into it. For example, the sub-images of the five regions (left eye, right eye, left eyebrow, right eyebrow, mouth) of a reference facial image with a smiling expression are input into the local deep convolutional neural network one by one; five feature values corresponding to the five sub-images are then obtained in turn from the fully connected layer, and these five feature values are assembled in a preset order into a feature-value group serving as reference feature information, whose corresponding expression is smile. The reference feature information stored in the expression feature library includes reference feature information for facial images with different expressions, as well as reference feature information for different facial images with the same expression.
When a facial image needs expression recognition, the five sub-images of the same five regions of the facial image to be recognized are obtained and input into the local deep convolutional neural network to obtain five feature values; the five feature values are assembled in the preset order into the feature information of the facial image to be recognized; and a joint Bayesian algorithm is then used to calculate, one by one, the similarity between this feature information and the reference feature information stored in the expression feature library, so as to determine the expression of the facial image to be recognized.
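The assembly and comparison steps can be sketched as follows. The patent specifies a joint Bayesian similarity; as a simpler stand-in, this sketch uses cosine similarity, and the part names and fixed ordering are assumptions for illustration.

```python
import numpy as np

PART_ORDER = ["left_eye", "right_eye", "left_brow", "right_brow", "mouth"]

def assemble(features_by_part):
    """Concatenate the five sub-image feature vectors in the preset order
    into one descriptor for the whole face."""
    return np.concatenate([np.asarray(features_by_part[p], dtype=float)
                           for p in PART_ORDER])

def similarity(a, b):
    """Cosine similarity, standing in for the joint Bayesian score."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))
```

A query descriptor would be compared against every stored reference descriptor with `similarity`, and the best matches passed on to the voting step.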
For example, for a facial image to be recognized with a smiling expression: first the five sub-images of the left eye, right eye, left eyebrow, right eyebrow, and mouth are obtained. The five sub-images are then each input into the local deep convolutional neural network to obtain the corresponding five feature values, which are stored as an array in the preset order (for example, the order left eye, right eye, left eyebrow, right eyebrow, mouth) as the feature information of the facial image to be recognized. A joint Bayesian algorithm is used to calculate the similarity values between the feature information of the facial image to be recognized and the reference feature information stored in the expression feature library.
Suppose the reference feature information stored in the expression feature library comprises 10,000 items obtained from 10,000 reference facial images. After all similarities are calculated, the similarity values are sorted from largest to smallest and the top 20 are taken (or all similarity values greater than a certain threshold are taken). The expressions corresponding to these 20 similarity values are then counted, and the expression occurring most often among the 20 is determined to be the expression of the facial image to be recognized.
For example, the 20 similarity values, from largest to smallest, are: 0.95 (smile), 0.94 (smile), 0.92 (smile), 0.92 (smile), 0.91 (smile), 0.91 (smile), 0.91 (smile), 0.90 (smile), 0.90 (serious), 0.88 (smile), 0.88 (laugh), 0.86 (smile), 0.85 (smile), 0.85 (smile), 0.84 (laugh), 0.84 (smile), 0.84 (smile), 0.82 (laugh), 0.81 (serious), 0.80 (smile).
Counting shows that 15 of these 20 similarity values correspond to smile, 3 to laugh, and 2 to serious, so the expression of the facial image to be recognized can be determined to be smile. The arithmetic mean of the 15 similarity values corresponding to smile is 0.88533, so the probability that the expression of the facial image to be recognized is smile is 88.533%. Alternatively, by excluding the largest value 0.95 and the smallest value 0.80, the arithmetic mean of the remaining 13 values is 0.88692, so the probability that the expression is smile is 88.692%.
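The top-20 vote and the (optionally trimmed) mean can be sketched directly; the function name is an assumption, and the data below are the 20 example similarity values. The trimmed mean of the smile similarities works out to 0.88692, matching the figure stated above.

```python
from collections import Counter

def fuse_top_matches(scored, k=20, trim=True):
    """Vote over the k highest (similarity, expression) pairs; the score is
    the mean of the winning expression's similarities, optionally dropping
    one extreme value at each end before averaging."""
    top = sorted(scored, key=lambda s: s[0], reverse=True)[:k]
    winner, _ = Counter(expr for _, expr in top).most_common(1)[0]
    vals = sorted(v for v, expr in top if expr == winner)
    if trim and len(vals) > 2:
        vals = vals[1:-1]
    return winner, sum(vals) / len(vals)

# The 20 similarity values from the example
scored = [(0.95, "smile"), (0.94, "smile"), (0.92, "smile"), (0.92, "smile"),
          (0.91, "smile"), (0.91, "smile"), (0.91, "smile"), (0.90, "smile"),
          (0.90, "serious"), (0.88, "smile"), (0.88, "laugh"), (0.86, "smile"),
          (0.85, "smile"), (0.85, "smile"), (0.84, "laugh"), (0.84, "smile"),
          (0.84, "smile"), (0.82, "laugh"), (0.81, "serious"), (0.80, "smile")]
expr, score = fuse_top_matches(scored)  # ("smile", ~0.88692)
```

Passing `trim=False` gives the plain mean over all 15 smile similarities instead.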
Some embodiments of the expression recognition method of the present invention further include:
determining the face pose of the facial image according to the relative position relationship of the multiple sub-images, the face pose at least including one or more of frontal face, left profile, right profile, left oblique, right oblique, head down, and head up. For example, when only the left eye is detected in the facial image, the pose of the current facial image can be judged to be left profile; when the distance between the horizontal line of the two eyes and the horizontal line of the mouth is smaller than a general threshold, the pose of the current facial image is judged to be head down.
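The two rules just given can be sketched as simple heuristics. The part names, the threshold value, and the coordinate convention (y grows downward) are assumptions for illustration:

```python
def estimate_pose(parts_detected, eye_line_y=None, mouth_line_y=None, threshold=55):
    """Crude pose rules from which facial parts are visible and from the
    vertical spacing between the eye line and the mouth line (pixels)."""
    if "right_eye" not in parts_detected and "left_eye" in parts_detected:
        return "left_profile"
    if "left_eye" not in parts_detected and "right_eye" in parts_detected:
        return "right_profile"
    if (eye_line_y is not None and mouth_line_y is not None
            and (mouth_line_y - eye_line_y) < threshold):
        return "head_down"
    return "frontal"

pose = estimate_pose({"left_eye"})  # "left_profile"
```

A production system would use the full keypoint geometry rather than these two rules, but the structure of the decision is the same.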
In some embodiments, determining the expression of the facial image according to at least the first expression and first score and the second expression and second score is: fusing the first score and the second score by the SVM method in combination with the determined current pose of the facial image. In this embodiment, the current pose of the facial image can be used to determine the weights of the first score and the second score. For example, when the current pose of the facial image is frontal, head up, or head down, the weight assigned to the first score is greater than the weight assigned to the second score (because, in these poses, the expression determined from the overall facial image is more accurate than the expression determined from the local features of the facial image); when the current pose of the facial image is a profile, the weight assigned to the first score is smaller than the weight assigned to the second score (because, in a profile pose, the expression determined from the local features of the facial image is more accurate than the expression determined from the overall facial image).
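As a stand-in for the learned fusion, a fixed-weight combination keyed on pose illustrates the weighting idea. The weight values and pose names are assumptions, not from the patent:

```python
def pose_weights(pose):
    """Global branch weighted higher for frontal/up/down poses, local branch
    weighted higher for profiles; the numbers are illustrative only."""
    if pose in ("frontal", "head_up", "head_down"):
        return 0.7, 0.3
    if pose in ("left_profile", "right_profile"):
        return 0.3, 0.7
    return 0.5, 0.5  # oblique or unknown poses: no preference

def fused_score(first_score, second_score, pose):
    w1, w2 = pose_weights(pose)
    return w1 * first_score + w2 * second_score
```

In the actual embodiment the SVM would learn the boundary rather than use fixed weights; the sketch only shows how pose shifts the balance between the two branches.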
The embodiments of the present invention may implement the relevant functional modules by a hardware processor.
It should be noted that, for brevity of description, each of the foregoing method embodiments is expressed as a series of combined actions; however, those skilled in the art should understand that the present invention is not limited by the described order of actions, because according to the present invention some steps may be performed in other orders or simultaneously. Furthermore, those skilled in the art should also understand that the embodiments described in this specification are preferred embodiments, and the actions and modules involved are not necessarily required by the present invention.
In the above embodiments, the description of each embodiment has its own emphasis; for a part not described in detail in a certain embodiment, reference may be made to the related descriptions of other embodiments.
As shown in Figure 4, an embodiment of the present invention further provides an expression recognition system, including:
a global expression recognition module, configured to input a facial image to be recognized into a global deep convolutional neural network to determine a first expression and a first score of the facial image, the global deep convolutional neural network being obtained by deep learning based on reference facial images having different expressions;
a facial image dividing module, configured to divide the facial image into a plurality of subimages;
a local expression recognition module, configured to compare feature information of the plurality of subimages with reference feature information stored in an expression feature library to determine a second expression and a second score of the facial image, the reference feature information being determined from a plurality of reference subimages of reference facial images having different expressions;
a target expression determining module, configured to determine the expression of the facial image according to at least the first expression with the first score and the second expression with the second score.
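The cooperation of the four modules of Figure 4 could be sketched as follows. All callables passed in (`global_net`, `divide`, `local_match`, `fuse`) are placeholders standing for the modules described above; their signatures are assumptions made for this sketch, not interfaces defined by the patent.

```python
# High-level sketch of the Figure 4 pipeline: global branch, dividing step,
# local branch, then fusion into the target expression.
class ExpressionRecognitionSystem:
    def __init__(self, global_net, divide, local_match, fuse):
        self.global_net = global_net    # global expression recognition module
        self.divide = divide            # facial image dividing module
        self.local_match = local_match  # local expression recognition module
        self.fuse = fuse                # target expression determining module

    def recognize(self, face_image):
        first_expr, first_score = self.global_net(face_image)
        subimages = self.divide(face_image)
        second_expr, second_score = self.local_match(subimages)
        return self.fuse(first_expr, first_score, second_expr, second_score)

# Stub implementations so the sketch runs end to end.
system = ExpressionRecognitionSystem(
    global_net=lambda img: ("happy", 0.9),
    divide=lambda img: [img],
    local_match=lambda subs: ("happy", 0.8),
    fuse=lambda e1, s1, e2, s2: e1 if s1 >= s2 else e2,
)
print(system.recognize(object()))   # happy
```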
In the expression recognition system of the embodiment of the present invention, on the one hand, the facial image is considered as a whole to determine the first expression that the facial image may exhibit and the first score that the facial image is the first expression, which compensates for the poor local recognition effect in scenes such as blur, strong light, and darkness; on the other hand, by dividing the facial image into a plurality of subimages and judging local features, the second expression that the facial image may exhibit and the second score that the facial image is the second expression are determined, which compensates for the poor overall recognition effect under human face postures such as profile and oblique profile; afterwards, the first expression with the first score and the second expression with the second score are jointly considered to determine the current expression of the facial image. The expression recognition method and system of the embodiments of the present invention thus make full use of both global and local facial characteristics, achieve high determination accuracy, and can adapt to various face postures and various image scenes such as blur, strong light, and darkness.
The expression recognition system of the embodiment of the present invention further includes:
an image correction module, configured to perform correction processing on the facial image before the facial image is input into the global deep convolutional neural network.
As shown in Figure 5, in some embodiments, the facial image dividing module includes:
a key point positioning unit, configured to perform key point positioning on the facial image;
a subimage dividing unit, configured to divide the facial image into a plurality of subimages according to the located key points of the facial image.
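The key-point-driven dividing step could be sketched with NumPy as below. The key-point layout and the patch size are assumptions for illustration; the patent does not specify them.

```python
# Hypothetical sketch: cut a fixed-size patch around each located key point
# (eyes, nose, mouth, ...), clamping the window at the image border.
import numpy as np

def crop_subimages(face, keypoints, half=16):
    """Return a dict mapping each key-point name to a patch of the face image."""
    h, w = face.shape[:2]
    patches = {}
    for name, (x, y) in keypoints.items():
        # Clamp so patches near the border stay inside the image.
        x0, x1 = max(0, x - half), min(w, x + half)
        y0, y1 = max(0, y - half), min(h, y + half)
        patches[name] = face[y0:y1, x0:x1]
    return patches

face = np.zeros((128, 128), dtype=np.uint8)
subs = crop_subimages(face, {"left_eye": (40, 50), "right_eye": (88, 50),
                             "mouth": (64, 96)})
print(sorted(subs))             # ['left_eye', 'mouth', 'right_eye']
print(subs["left_eye"].shape)   # (32, 32)
```

Which key points are actually found (e.g. one eye missing in a profile view) is exactly the relative-position information the posture-determining step described earlier relies on.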
As shown in Figure 6, in some embodiments, the local expression recognition module includes:
a subimage processing unit, configured to input each of the plurality of subimages into a local deep convolutional neural network;
a feature acquisition unit, configured to obtain the feature information corresponding to the plurality of subimages from a fully connected layer of the local deep convolutional neural network;
a similarity calculation unit, configured to calculate similarity values between the feature information of the plurality of subimages and the reference feature information stored in the expression feature library;
a second expression determining unit, configured to determine that the expression corresponding to the reference feature information with the maximum similarity value is the second expression;
a second score determining unit, configured to determine that the similarity value of the second expression is the second score.
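The matching step of Figure 6 could be sketched as below: features (e.g. taken from a CNN's fully connected layer) are compared against a reference library, the most similar entry's expression becomes the second expression, and its similarity value becomes the second score. Cosine similarity, the feature dimensionality, and the library contents are assumptions for this example; the patent does not fix a similarity measure.

```python
# Sketch of the similarity-based local matching step.
import numpy as np

def second_expression(feature, library):
    """Return (expression, score) for the library entry most similar to `feature`."""
    def cosine(a, b):
        return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))
    best_expr, best_sim = None, -1.0
    for expr, ref in library.items():
        sim = cosine(feature, ref)
        if sim > best_sim:
            best_expr, best_sim = expr, sim
    # The maximum similarity value itself serves as the second score.
    return best_expr, best_sim

lib = {"happy": np.array([1.0, 0.0]), "sad": np.array([0.0, 1.0])}
expr, score = second_expression(np.array([0.9, 0.1]), lib)
print(expr)               # happy
print(round(score, 3))    # 0.994
```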
The expression recognition system of the embodiment of the present invention further includes:
a human face posture determining module, configured to determine the human face posture of the facial image according to the relative positional relationship of the plurality of subimages, the human face posture including at least one or more of a frontal face, a left profile, a right profile, a left oblique profile, a right oblique profile, a lowered head, and a raised head.
The expression recognition system of the above embodiments of the present invention can be used to perform the expression recognition method of the embodiments of the present invention, and accordingly achieves the technical effects achieved by the expression recognition method of the above embodiments of the present invention, which will not be repeated here.
In another aspect, an embodiment of the present invention further discloses a user equipment, including:
a memory, configured to store computer operation instructions;
a processor, configured to execute the computer operation instructions stored in the memory, so as to perform:
inputting a facial image to be recognized into a global deep convolutional neural network to determine a first expression and a first score of the facial image, the global deep convolutional neural network being obtained by deep learning based on reference facial images having different expressions;
dividing the facial image into a plurality of subimages;
comparing the feature information of the plurality of subimages with reference feature information stored in an expression feature library to determine a second expression and a second score of the facial image, the reference feature information being determined from a plurality of reference subimages of reference facial images having different expressions;
determining the expression of the facial image according to at least the first expression with the first score and the second expression with the second score.
As shown in Figure 7, which is a schematic structural diagram of an embodiment of the user equipment in the above embodiments of the present invention, the specific implementation of the user equipment 700 is not limited by the embodiments of the present application. The user equipment 700 includes:
a processor 710, a communications interface 720, a memory 730, and a communication bus 740, wherein:
the processor 710, the communications interface 720, and the memory 730 communicate with one another via the communication bus 740;
the communications interface 720 is configured to communicate with network elements such as a third-party access terminal;
the processor 710 is configured to execute a program 732, and may specifically perform the relevant steps in the above method embodiments.
Specifically, the program 732 may include program code, and the program code includes computer operation instructions.
The processor 710 may be a central processing unit (CPU), an application-specific integrated circuit (ASIC), or one or more integrated circuits configured to implement the embodiments of the present application.
The device embodiments described above are merely illustrative. The units described as separate components may or may not be physically separate, and the components shown as units may or may not be physical units; that is, they may be located in one place or distributed across multiple network elements. Some or all of the modules may be selected according to actual needs to achieve the purpose of the solution of this embodiment. Those of ordinary skill in the art can understand and implement this without creative work.
Through the description of the above embodiments, those skilled in the art can clearly understand that each embodiment may be implemented by software plus a necessary general hardware platform, or, of course, by hardware. Based on such an understanding, the above technical solutions, or the part thereof contributing to the prior art, may be embodied in the form of a software product. The computer software product may be stored in a computer-readable storage medium, such as ROM/RAM, a magnetic disk, or an optical disc, and includes instructions for causing a computer device (which may be a personal computer, a server, a network device, or the like) to perform the methods described in each embodiment or in some parts of the embodiments.
Those skilled in the art should understand that the embodiments of the present invention may be provided as a method, a system, or a computer program product. Therefore, the present invention may take the form of an entirely hardware embodiment, an entirely software embodiment, or an embodiment combining software and hardware aspects. Moreover, the present invention may take the form of a computer program product implemented on one or more computer-usable storage media (including but not limited to disk memory and optical memory) containing computer-usable program code.
The present invention is described with reference to flowcharts and/or block diagrams of the method, the device (system), and the computer program product according to the embodiments of the present invention. It should be understood that each flow and/or block in the flowcharts and/or block diagrams, and combinations of flows and/or blocks therein, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general-purpose computer, a special-purpose computer, an embedded processor, or another programmable data processing device to produce a machine, so that the instructions executed by the processor of the computer or other programmable data processing device produce an apparatus for realizing the functions specified in one or more flows of the flowcharts and/or one or more blocks of the block diagrams.
These computer program instructions may also be stored in a computer-readable memory capable of directing a computer or another programmable data processing device to work in a specific manner, so that the instructions stored in the computer-readable memory produce an article of manufacture including an instruction apparatus, the instruction apparatus realizing the functions specified in one or more flows of the flowcharts and/or one or more blocks of the block diagrams. These computer program instructions may also be loaded onto a computer or another programmable data processing device, so that a series of operation steps are executed on the computer or other programmable device to produce computer-implemented processing; the instructions executed on the computer or other programmable device thus provide steps for realizing the functions specified in one or more flows of the flowcharts and/or one or more blocks of the block diagrams.
Finally, it should be noted that the above embodiments are only intended to illustrate the technical solutions of the present invention, not to limit them. Although the present invention has been described in detail with reference to the foregoing embodiments, those of ordinary skill in the art should understand that they may still modify the technical solutions described in the foregoing embodiments, or make equivalent substitutions for some of the technical features therein; such modifications or substitutions do not cause the essence of the corresponding technical solutions to depart from the spirit and scope of the technical solutions of the embodiments of the present invention.

Claims (10)

1. An expression recognition method, including:
inputting a facial image to be recognized into a global deep convolutional neural network to determine a first expression and a first score of the facial image, the global deep convolutional neural network being obtained by deep learning based on reference facial images having different expressions;
dividing the facial image into a plurality of subimages;
comparing feature information of the plurality of subimages with reference feature information to determine a second expression and a second score of the facial image, the reference feature information being determined from a plurality of reference subimages of reference facial images having different expressions;
determining the expression of the facial image according to at least the first expression with the first score and the second expression with the second score.
2. The method according to claim 1, wherein dividing the facial image into a plurality of subimages includes:
performing key point positioning on the facial image;
dividing the facial image into a plurality of subimages according to the located key points of the facial image.
3. The method according to claim 2, wherein comparing the feature information of the plurality of subimages with the reference feature information to determine the second expression and the second score of the facial image includes:
inputting each of the plurality of subimages into a local deep convolutional neural network;
obtaining the feature information corresponding to the plurality of subimages from a fully connected layer of the local deep convolutional neural network;
calculating similarity values between the feature information of the plurality of subimages and the reference feature information;
determining that the expression corresponding to the reference feature information with the maximum similarity value is the second expression;
determining that the similarity value of the second expression is the second score.
4. The method according to claim 2 or 3, further including:
determining the human face posture of the facial image according to the relative positional relationship of the plurality of subimages.
5. The method according to claim 4, further including, before inputting the facial image into the global deep convolutional neural network:
performing correction processing on the facial image.
6. An expression recognition system, including:
a global expression recognition module, configured to input a facial image to be recognized into a global deep convolutional neural network to determine a first expression and a first score of the facial image, the global deep convolutional neural network being obtained by deep learning based on reference facial images having different expressions;
a facial image dividing module, configured to divide the facial image into a plurality of subimages;
a local expression recognition module, configured to compare feature information of the plurality of subimages with reference feature information to determine a second expression and a second score of the facial image, the reference feature information being determined from a plurality of reference subimages of reference facial images having different expressions;
a target expression determining module, configured to determine the expression of the facial image according to at least the first expression with the first score and the second expression with the second score.
7. The system according to claim 6, wherein the facial image dividing module includes:
a key point positioning unit, configured to perform key point positioning on the facial image;
a subimage dividing unit, configured to divide the facial image into a plurality of subimages according to the located key points of the facial image.
8. The system according to claim 7, wherein the local expression recognition module includes:
a subimage processing unit, configured to input each of the plurality of subimages into a local deep convolutional neural network;
a feature acquisition unit, configured to obtain the feature information corresponding to the plurality of subimages from a fully connected layer of the local deep convolutional neural network;
a similarity calculation unit, configured to calculate similarity values between the feature information of the plurality of subimages and the reference feature information;
a second expression determining unit, configured to determine that the expression corresponding to the reference feature information with the maximum similarity value is the second expression;
a second score determining unit, configured to determine that the similarity value of the second expression is the second score.
9. The system according to claim 7 or 8, further including:
a human face posture determining module, configured to determine the human face posture of the facial image according to the relative positional relationship of the plurality of subimages.
10. The system according to claim 9, further including:
an image correction module, configured to perform correction processing on the facial image before the facial image is input into the global deep convolutional neural network.
CN201610547743.XA 2016-07-12 2016-07-12 Expression recognition method and system Pending CN106257489A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201610547743.XA CN106257489A (en) 2016-07-12 2016-07-12 Expression recognition method and system

Publications (1)

Publication Number Publication Date
CN106257489A true CN106257489A (en) 2016-12-28

Family

ID=57713720

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201610547743.XA Pending CN106257489A (en) 2016-07-12 2016-07-12 Expression recognition method and system

Country Status (1)

Country Link
CN (1) CN106257489A (en)

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20100322507A1 (en) * 2009-06-22 2010-12-23 Toyota Motor Engineering & Manufacturing North America, Inc. System and method for detecting drowsy facial expressions of vehicle drivers under changing illumination conditions
CN102024141A (en) * 2010-06-29 2011-04-20 上海大学 Face recognition method based on Gabor wavelet transform and local binary pattern (LBP) optimization
CN103984919A (en) * 2014-04-24 2014-08-13 上海优思通信科技有限公司 Facial expression recognition method based on rough set and mixed features
CN104463172A (en) * 2014-12-09 2015-03-25 中国科学院重庆绿色智能技术研究院 Face feature extraction method based on face feature point shape drive depth model
CN105095827A (en) * 2014-04-18 2015-11-25 汉王科技股份有限公司 Facial expression recognition device and facial expression recognition method

Cited By (34)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106775360A (en) * 2017-01-20 2017-05-31 珠海格力电器股份有限公司 The control method of a kind of electronic equipment, system and electronic equipment
CN106775360B (en) * 2017-01-20 2018-11-30 珠海格力电器股份有限公司 Control method, system and the electronic equipment of a kind of electronic equipment
CN106845459A (en) * 2017-03-07 2017-06-13 佛山市融信通企业咨询服务有限公司 A kind of intelligent hospital self-help registration method
CN106875534A (en) * 2017-03-07 2017-06-20 佛山市融信通企业咨询服务有限公司 A kind of intelligent hospital self-help hospital registration system
CN106934906A (en) * 2017-03-07 2017-07-07 佛山市融信通企业咨询服务有限公司 A kind of Hospital register system of the medical priority level of automatic decision
CN107392151A (en) * 2017-07-21 2017-11-24 竹间智能科技(上海)有限公司 Face image various dimensions emotion judgement system and method based on neutral net
CN107844766A (en) * 2017-10-31 2018-03-27 北京小米移动软件有限公司 Acquisition methods, device and the equipment of facial image fuzziness
CN107895146A (en) * 2017-11-01 2018-04-10 深圳市科迈爱康科技有限公司 Micro- expression recognition method, device, system and computer-readable recording medium
WO2019085495A1 (en) * 2017-11-01 2019-05-09 深圳市科迈爱康科技有限公司 Micro-expression recognition method, apparatus and system, and computer-readable storage medium
CN107958230A (en) * 2017-12-22 2018-04-24 中国科学院深圳先进技术研究院 Facial expression recognizing method and device
CN107958230B (en) * 2017-12-22 2020-06-23 中国科学院深圳先进技术研究院 Facial expression recognition method and device
US11341769B2 (en) 2017-12-25 2022-05-24 Beijing Sensetime Technology Development Co., Ltd. Face pose analysis method, electronic device, and storage medium
CN109960986A (en) * 2017-12-25 2019-07-02 北京市商汤科技开发有限公司 Human face posture analysis method, device, equipment, storage medium and program
CN108062533A (en) * 2017-12-28 2018-05-22 北京达佳互联信息技术有限公司 Analytic method, system and the mobile terminal of user's limb action
CN108090473A (en) * 2018-01-12 2018-05-29 北京陌上花科技有限公司 The method and device of polyphaser human face identification
CN108921061A (en) * 2018-06-20 2018-11-30 腾讯科技(深圳)有限公司 A kind of expression recognition method, device and equipment
CN108921941A (en) * 2018-07-10 2018-11-30 Oppo广东移动通信有限公司 Image processing method, device, storage medium and electronic equipment
CN111079472A (en) * 2018-10-19 2020-04-28 北京微播视界科技有限公司 Image comparison method and device
CN109522945A (en) * 2018-10-31 2019-03-26 中国科学院深圳先进技术研究院 One kind of groups emotion identification method, device, smart machine and storage medium
CN109685611A (en) * 2018-12-15 2019-04-26 深圳壹账通智能科技有限公司 A kind of Products Show method, apparatus, computer equipment and storage medium
WO2020125216A1 (en) * 2018-12-18 2020-06-25 深圳云天励飞技术有限公司 Pedestrian re-identification method, device, electronic device and computer-readable storage medium
CN109829362A (en) * 2018-12-18 2019-05-31 深圳壹账通智能科技有限公司 Safety check aided analysis method, device, computer equipment and storage medium
US11244151B2 (en) 2019-01-10 2022-02-08 Boe Technology Group Co., Ltd. Computer-implemented method of recognizing facial expression, apparatus for recognizing facial expression, method of pre-training apparatus for recognizing facial expression, computer-program product for recognizing facial expression
CN109598262A (en) * 2019-02-11 2019-04-09 华侨大学 A kind of children's facial expression recognizing method
CN109934173A (en) * 2019-03-14 2019-06-25 腾讯科技(深圳)有限公司 Expression recognition method, device and electronic equipment
CN109934173B (en) * 2019-03-14 2023-11-21 腾讯科技(深圳)有限公司 Expression recognition method and device and electronic equipment
CN109922266B (en) * 2019-03-29 2021-04-06 睿魔智能科技(深圳)有限公司 Snapshot method and system applied to video shooting, camera and storage medium
CN109922266A (en) * 2019-03-29 2019-06-21 睿魔智能科技(深圳)有限公司 Grasp shoot method and system, video camera and storage medium applied to video capture
CN110321872B (en) * 2019-07-11 2021-03-16 京东方科技集团股份有限公司 Facial expression recognition method and device, computer equipment and readable storage medium
US11281895B2 (en) 2019-07-11 2022-03-22 Boe Technology Group Co., Ltd. Expression recognition method, computer device, and computer-readable storage medium
CN110852220A (en) * 2019-10-30 2020-02-28 深圳智慧林网络科技有限公司 Intelligent recognition method of facial expression, terminal and computer readable storage medium
CN110852220B (en) * 2019-10-30 2023-08-18 深圳智慧林网络科技有限公司 Intelligent facial expression recognition method, terminal and computer readable storage medium
CN113128403A (en) * 2021-04-19 2021-07-16 深圳市上源艺术设计有限公司 Intelligent exhibit display method and system based on data analysis
CN114612987A (en) * 2022-03-17 2022-06-10 深圳集智数字科技有限公司 Expression recognition method and device

Similar Documents

Publication Publication Date Title
CN106257489A (en) Expression recognition method and system
US10346676B2 (en) Face detection, representation, and recognition
JP5010905B2 (en) Face recognition device
KR101381439B1 (en) Face recognition apparatus, and face recognition method
US20180211104A1 (en) Method and device for target tracking
JP6398979B2 (en) Video processing apparatus, video processing method, and video processing program
CN110223322B (en) Image recognition method and device, computer equipment and storage medium
CN110580445A (en) Face key point detection method based on GIoU and weighted NMS improvement
JP5766564B2 (en) Face authentication apparatus and face authentication method
CN106203387A (en) Face verification method and system
CN109685037B (en) Real-time action recognition method and device and electronic equipment
KR101612605B1 (en) Method for extracting face feature and apparatus for perforimg the method
CN107844742B (en) Facial image glasses minimizing technology, device and storage medium
CN111062328B (en) Image processing method and device and intelligent robot
CN103136516A (en) Face recognition method and system fusing visible light and near-infrared information
WO2020195732A1 (en) Image processing device, image processing method, and recording medium in which program is stored
US20220157078A1 (en) Adaptive learning and matching of face modalities
US11048926B2 (en) Adaptive hand tracking and gesture recognition using face-shoulder feature coordinate transforms
Pathak et al. A framework for dynamic hand gesture recognition using key frames extraction
CN109447000A (en) Biopsy method, spot detection method, electronic equipment and recording medium
JP7426922B2 (en) Program, device, and method for artificially generating a new teacher image with an attachment worn on a person's face
Reddy et al. Comparison of HOG and fisherfaces based face recognition system using MATLAB
Amrutha et al. Bharatanatyam hand gesture recognition using normalized chain codes and oriented distances
CN111523406A (en) Deflection face correcting method based on generation of confrontation network improved structure
Hbali et al. Object detection based on HOG features: Faces and dual-eyes augmented reality

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
WD01 Invention patent application deemed withdrawn after publication
WD01 Invention patent application deemed withdrawn after publication

Application publication date: 20161228