CN106295566A - Facial expression recognition method and device - Google Patents
Facial expression recognition method and device
- Publication number
- CN106295566A (application number CN201610653790.2A)
- Authority
- CN
- China
- Prior art keywords
- key point
- expression recognition
- face
- face region
- local image
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Classifications
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data
- G06V40/10—Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
- G06V40/16—Human faces, e.g. facial parts, sketches or expressions
- G06V40/174—Facial expression recognition
Abstract
The present disclosure relates to a facial expression recognition method and device, belonging to the field of image recognition technology. The method includes: detecting a face region in an image to be recognized; obtaining key points in the face region; extracting a local image from the face region according to the key points; and recognizing the local image with a trained expression recognition model to obtain an expression recognition result. By using a single trained expression recognition model to perform both feature extraction at the face key points and expression classification, the disclosure merges two previously separate steps into one, thereby reducing accumulated error and improving the accuracy of facial expression recognition. Furthermore, because only the feature information of the local images at the face key points is extracted, rather than global feature information of the entire face region, the features most indicative of emotional state can be extracted accurately and efficiently, further improving accuracy.
Description
Technical field
The present disclosure relates to the field of image recognition technology, and in particular to a facial expression recognition method and device.
Background technology
Facial expression recognition refers to determining the emotional state of a face from a given face image, for example happy, sad, surprised, afraid, disgusted, or angry. Facial expression recognition is currently widely applied in fields such as psychology, neuroscience, engineering science, and computer science.
In the related art, facial expression recognition involves two main steps. First, a face region is detected in the image to be recognized, and facial expression features are extracted from the face region, for example with feature extraction algorithms such as HOG (Histogram of Oriented Gradients), LBP (Local Binary Patterns), or Gabor filters. Second, the expression is classified based on the extracted features to obtain an expression recognition result, for example with classification algorithms such as Adaboost, SVM (Support Vector Machine), or random forests.
Summary of the invention
Embodiments of the present disclosure provide a facial expression recognition method and device. The technical solution is as follows:
According to a first aspect of the embodiments of the present disclosure, a facial expression recognition method is provided, the method including:
detecting a face region in an image to be recognized;
obtaining key points in the face region;
extracting a local image from the face region according to the key points;
recognizing the local image with a trained expression recognition model to obtain an expression recognition result.
Optionally, extracting the local image from the face region according to the key points includes:
obtaining an image block around each key point;
superimposing or stitching the image blocks in a preset order to obtain the local image.
Optionally, obtaining the image block around each key point includes:
for each key point, cropping an image block of a predetermined size centered on the key point.
Optionally, recognizing the local image with the trained expression recognition model to obtain the expression recognition result includes:
extracting feature information of the local image with the trained expression recognition model;
determining the expression recognition result from the feature information with the expression recognition model.
Optionally, obtaining the key points in the face region includes:
scaling the face region to a target size;
locating the key points in the scaled face region.
Optionally, the expression recognition model is a convolutional neural network model.
According to a second aspect of the embodiments of the present disclosure, a facial expression recognition device is provided, the device including:
a face detection module, configured to detect a face region in an image to be recognized;
a key point acquisition module, configured to obtain key points in the face region;
an image extraction module, configured to extract a local image from the face region according to the key points;
an expression recognition module, configured to recognize the local image with a trained expression recognition model to obtain an expression recognition result.
Optionally, the image extraction module includes:
an image block acquisition submodule, configured to obtain an image block around each key point;
an image block processing submodule, configured to superimpose or stitch the image blocks in a preset order to obtain the local image.
Optionally, the image block acquisition submodule is configured to, for each key point, crop an image block of a predetermined size centered on the key point.
Optionally, the expression recognition module includes:
a feature extraction submodule, configured to extract feature information of the local image with the trained expression recognition model;
a recognition determination submodule, configured to determine the expression recognition result from the feature information with the expression recognition model.
Optionally, the key point acquisition module includes:
a face scaling submodule, configured to scale the face region to a target size;
a key point locating submodule, configured to locate the key points in the scaled face region.
Optionally, the expression recognition model is a convolutional neural network model.
According to a third aspect of the embodiments of the present disclosure, a facial expression recognition device is provided, the device including:
a processor;
a memory for storing instructions executable by the processor;
wherein the processor is configured to:
detect a face region in an image to be recognized;
obtain key points in the face region;
extract a local image from the face region according to the key points;
recognize the local image with a trained expression recognition model to obtain an expression recognition result.
The technical solutions provided by the embodiments of the present disclosure may have the following beneficial effects:
By detecting a face region in the image to be recognized, obtaining key points in the face region, extracting a local image from the face region according to the key points, and recognizing the local image with a trained expression recognition model to obtain an expression recognition result, the disclosed method solves a problem of the related art, in which feature extraction and expression classification are two separate steps handled by two different algorithms, so that errors accumulate and the precision of facial expression recognition suffers. Using a single trained expression recognition model to perform both feature extraction at the face key points and expression classification merges the two separate steps into one, thereby reducing accumulated error and improving accuracy. Furthermore, because only the feature information of the local images at the face key points is extracted, rather than global feature information of the entire face region, the features most indicative of emotional state can be extracted accurately and efficiently, further improving the accuracy of facial expression recognition.
It should be understood that both the foregoing general description and the following detailed description are exemplary and explanatory only and do not limit the present disclosure.
Brief description of the drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments consistent with the disclosure and, together with the description, serve to explain the principles of the disclosure.
Fig. 1 is a flowchart of a facial expression recognition method according to an exemplary embodiment;
Fig. 2A is a flowchart of a facial expression recognition method according to another exemplary embodiment;
Fig. 2B is a schematic diagram of the key point localization involved in the embodiment of Fig. 2A;
Fig. 2C is a schematic structural diagram of a convolutional neural network;
Fig. 3 is a block diagram of a facial expression recognition device according to an exemplary embodiment;
Fig. 4 is a block diagram of a facial expression recognition device according to another exemplary embodiment;
Fig. 5 is a block diagram of a device according to an exemplary embodiment.
Detailed description of the invention
Exemplary embodiments will be described in detail here, with examples shown in the accompanying drawings. In the following description referring to the drawings, unless otherwise indicated, the same numerals in different drawings denote the same or similar elements. The implementations described in the following exemplary embodiments do not represent all implementations consistent with the present disclosure; rather, they are merely examples of apparatuses and methods consistent with some aspects of the disclosure as recited in the appended claims.
In the related art, feature extraction and expression classification are two separate steps handled by two different algorithms, so errors accumulate and the precision of facial expression recognition suffers. In the technical solution provided by the embodiments of the present disclosure, a trained expression recognition model performs both feature extraction at the face key points and expression classification, merging the two separate steps into one, thereby reducing accumulated error and improving the accuracy of facial expression recognition. Furthermore, because only the feature information of the local images at the face key points is extracted, rather than global feature information of the entire face region, the features most indicative of emotional state can be extracted accurately and efficiently, further improving accuracy.
In the method provided by the embodiments of the present disclosure, each step may be executed by an electronic device with image processing capability, such as a PC, a smartphone, a tablet computer, or a server. For ease of description, the following method embodiments take an electronic device as the executing entity of each step.
Fig. 1 is a flowchart of a facial expression recognition method according to an exemplary embodiment. The method may include the following steps:
In step 101, a face region is detected in an image to be recognized.
In step 102, key points in the face region are obtained.
Key points, also called feature points, face key points, or facial feature points, are facial locations in the face region that can reflect the expression state, including but not limited to the eyes (e.g. eye corners, eyeball centers, eye tails), the nose (e.g. nose tip, nose wings), the mouth (e.g. mouth corners, lip angles, lips), the chin, and the eyebrow corners.
In step 103, a local image is extracted from the face region according to the key points.
In step 104, the local image is recognized with a trained expression recognition model to obtain an expression recognition result.
In summary, the method provided by this embodiment solves a problem of the related art, in which feature extraction and expression classification are two separate steps handled by two different algorithms, so that errors accumulate and the precision of facial expression recognition suffers. A trained expression recognition model performs both feature extraction at the face key points and expression classification, merging the two separate steps into one, thereby reducing accumulated error and improving accuracy. Furthermore, because only the feature information of the local images at the face key points is extracted, rather than global feature information of the entire face region, the features most indicative of emotional state can be extracted accurately and efficiently, further improving the accuracy of facial expression recognition.
Fig. 2 A is the flow chart according to a kind of facial expression recognizing method shown in another exemplary embodiment.The method can
To include following several step:
In step 201, from image to be identified, human face region is detected.
Electronic equipment uses relevant Face datection algorithm to detect human face region from image to be identified.At the present embodiment
In, the concrete kind of Face datection algorithm is not construed as limiting.Such as, Face datection algorithm can be LBP algorithm and Adaboost
The combination of algorithm, uses LBP algorithm to extract characteristics of image from image to be identified, and use Adaboost cascade classifier according to
Characteristics of image determines human face region.Image to be identified can be the image including face.
In step 202, the face region is scaled to a target size.
Because face regions in different images differ in size, the extracted face region is scaled to a fixed target size to ensure the accuracy of subsequent key point localization. This embodiment does not limit the target size; it can be set according to practical requirements. For example, the target size is 96 × 96 (pixels).
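The disclosure does not name a scaling algorithm; as an illustration under that assumption, a nearest-neighbor resize to the 96 × 96 target can be written in a few lines of NumPy:

```python
import numpy as np

def resize_nearest(img, target_h=96, target_w=96):
    """Nearest-neighbor resize of a 2-D grayscale image to a fixed target size."""
    h, w = img.shape
    rows = np.arange(target_h) * h // target_h   # source row for each target row
    cols = np.arange(target_w) * w // target_w   # source column for each target column
    return img[rows][:, cols]

face = np.zeros((120, 80), dtype=np.uint8)   # a detected face region of arbitrary size
print(resize_nearest(face).shape)  # → (96, 96)
```

In practice, bilinear or bicubic interpolation would likely be preferred for quality, but any method that fixes the output size serves the purpose of the step.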
In step 203, key points are located in the scaled face region.
Key points, also called feature points, face key points, or facial feature points, are facial locations in the face region that can reflect the expression state, including but not limited to the eyes (e.g. eye corners, eyeball centers, eye tails), the nose (e.g. nose tip, nose wings), the mouth (e.g. mouth corners, lip angles, lips), the chin, and the eyebrow corners.
The electronic device locates the key points in the scaled face region with a face key point localization algorithm. This embodiment does not limit the specific type of algorithm; for example, the SDM (Supervised Descent Method) algorithm may be used.
In one example, as shown in Fig. 2B, the SDM algorithm locates a plurality of key points in the scaled face region 21, each shown as a small dot in the figure. The positions and number of key points to be located can be preset, for example 20.
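At runtime, SDM refines the key point coordinates through a cascade of learned affine corrections of the form x ← x + R·φ(x) + b. The toy sketch below shows the shape of one such descent step for the 20 key points of the example; the regressor R, bias b, and feature map φ are random placeholders (the disclosure gives no details, and real SDM would use learned regressors over local descriptors such as SIFT):

```python
import numpy as np

rng = np.random.default_rng(0)
K = 20                                    # number of key points, as in the example
x = rng.uniform(0, 96, size=2 * K)        # current (x, y) estimates, flattened
D = 128                                   # dimension of the local descriptor phi(x)

def phi(x):
    """Placeholder for local appearance features extracted around each point."""
    return rng.standard_normal(D)

# One learned stage of the cascade: x <- x + R @ phi(x) + b
R = rng.standard_normal((2 * K, D)) * 0.01   # learned regressor (placeholder)
b = rng.standard_normal(2 * K) * 0.01        # learned bias (placeholder)
x_next = x + R @ phi(x) + b
print(x_next.shape)  # → (40,)
```

Several such stages are applied in sequence, each trained offline to move the estimate closer to the ground-truth landmark positions.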
In step 204, an image block around each key point is obtained.
The image block around a key point is an image block of a predetermined size containing the key point. In one example, for each key point, an image block of the predetermined size centered on the key point is cropped. This embodiment does not limit the predetermined size; it can be set according to practical requirements. For example, the predetermined size is 32 × 32 (pixels).
Optionally, before key point localization, the face region may be converted to a grayscale image and the key points located in the grayscale image. Correspondingly, the cropped image blocks around the key points are also grayscale.
In step 205, the image blocks are superimposed or stitched in a preset order to obtain a local image.
Before expression recognition is performed, the image blocks must be integrated into a whole that is input to the expression recognition model. In one possible implementation, the image blocks are superimposed in a preset order and the result serves as the input of the expression recognition model. In another possible implementation, the image blocks are stitched in a preset order and the result serves as the input of the expression recognition model.
Because the obtained key points are multiple and different key points correspond to different facial locations, the image blocks must be superimposed or stitched in a consistent preset order. Taking key points on the eyes, nose, mouth, eyebrow corners, and chin as an example, the preset order may be eyes, nose, mouth, chin, eyebrow corners. That is, whether in the training phase of the expression recognition model or in the recognition phase, the image blocks cropped from any image are superimposed or stitched in the same preset order, ensuring a uniform structure of the model's input data.
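In array terms, the superimposing variant amounts to stacking the blocks along a new channel axis in the fixed key-point order, giving the 20 × 32 × 32 input used in the Fig. 2C example, while the stitching variant concatenates them side by side. A sketch of both, with sizes from the running example:

```python
import numpy as np

# 20 grayscale blocks of 32 x 32, already sorted in the preset key-point order
blocks = [np.zeros((32, 32), dtype=np.uint8) for _ in range(20)]

stacked = np.stack(blocks, axis=0)           # superimpose: a 20-channel input
stitched = np.concatenate(blocks, axis=1)    # stitch: one wide 32 x 640 image
print(stacked.shape, stitched.shape)  # → (20, 32, 32) (32, 640)
```

Either form works as model input as long as training and recognition use the same one, which is exactly the consistency requirement stated above.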
In step 206, the local image is recognized with the trained expression recognition model to obtain an expression recognition result.
The electronic device extracts feature information of the local image with the trained expression recognition model, and determines the expression recognition result from the feature information with the same model. In this embodiment, feature extraction and expression classification are both completed by the expression recognition model, with no need for two steps handled by two different algorithms; merging the two separate steps into one reduces accumulated error and improves the accuracy of facial expression recognition.
In one example, the expression recognition model is a convolutional neural network (CNN) model, also called a deep convolutional neural network model. Convolutional neural networks have strong feature extraction ability, so using one for feature extraction and expression classification yields high accuracy. A convolutional neural network includes an input layer, at least one convolutional layer, at least one fully connected layer, and an output layer. The input data of the input layer is the local image obtained by superimposing or stitching the image blocks in order; the output of the output layer is a vector of length n representing the probabilities of n expressions, where n is an integer greater than 1. The convolutional layers perform feature extraction. The fully connected layers combine and abstract the features extracted by the convolutional layers into data suitable for classification by the output layer.
Referring to Fig. 2C, which illustrates the structure of one convolutional neural network: the network includes 1 input layer, 3 convolutional layers (first convolutional layer C1, second convolutional layer C2, and third convolutional layer C3), 2 fully connected layers (first fully connected layer FC4 and second fully connected layer FC5), and 1 output layer (a Softmax layer). Assuming 20 key points are extracted from the face region and each cropped image block around a key point is 32 × 32 (pixels), the input data of the input layer is a superimposed or stitched image of 20 × 32 × 32. The numbers of convolution kernels in the 3 convolutional layers C1, C2, and C3 are 36, 64, and 32 respectively. The stride of the first convolutional layer C1 is 2, so after C1 the height and width of the image are both halved; the strides of the second convolutional layer C2 and the third convolutional layer C3 are 1. It should be noted that the convolutional neural network shown in Fig. 2C is exemplary and explanatory only, and does not limit the disclosure. In general, the more layers a convolutional neural network has, the better the effect but the longer the computation time; in practical applications, a network with a suitable number of layers can be designed according to the requirements on recognition accuracy and efficiency.
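The feature-map sizes implied by Fig. 2C can be checked with the standard convolution output formula. The sketch below traces the shapes through C1–C3, assuming "same" padding so that only the stride changes the spatial size (kernel size and padding are assumptions, since the disclosure gives only the kernel counts and strides):

```python
# Trace feature-map shapes through the Fig. 2C network.
def conv_out(size, stride):
    # With "same" padding, output size = ceil(size / stride)
    return (size + stride - 1) // stride

h = w = 32
channels = 20                        # stacked input: 20 x 32 x 32
for n_kernels, stride in [(36, 2), (64, 1), (32, 1)]:   # C1, C2, C3
    h, w = conv_out(h, stride), conv_out(w, stride)
    channels = n_kernels
    print(channels, h, w)
# → 36 16 16   (C1's stride of 2 halves the height and width)
# → 64 16 16
# → 32 16 16
# FC4 and FC5 then map the 32 * 16 * 16 features to a length-n Softmax output.
```

The final 32 × 16 × 16 volume is flattened before the fully connected layers; the Softmax layer turns FC5's outputs into the length-n probability vector described above.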
In summary, in the method provided by this embodiment, a trained expression recognition model performs both feature extraction at the face key points and expression classification, merging two separate steps into one, thereby reducing accumulated error and improving the accuracy of facial expression recognition. Furthermore, because only the feature information of the local images at the face key points is extracted, rather than global feature information of the entire face region, the features most indicative of emotional state can be extracted accurately and efficiently, further improving accuracy.
The following are device embodiments of the present disclosure, which may be used to perform the method embodiments of the disclosure. For details not disclosed in the device embodiments, refer to the method embodiments.
Fig. 3 is a block diagram of a facial expression recognition device according to an exemplary embodiment. The device implements the functions of the method examples above; the functions may be implemented by hardware, or by hardware executing corresponding software. The device may include: a face detection module 310, a key point acquisition module 320, an image extraction module 330, and an expression recognition module 340.
The face detection module 310 is configured to detect a face region in an image to be recognized.
The key point acquisition module 320 is configured to obtain key points in the face region.
The image extraction module 330 is configured to extract a local image from the face region according to the key points.
The expression recognition module 340 is configured to recognize the local image with a trained expression recognition model to obtain an expression recognition result.
In summary, in the device provided by this embodiment, a trained expression recognition model performs both feature extraction at the face key points and expression classification, merging two separate steps into one, thereby reducing accumulated error and improving the accuracy of facial expression recognition. Furthermore, because only the feature information of the local images at the face key points is extracted, rather than global feature information of the entire face region, the features most indicative of emotional state can be extracted accurately and efficiently, further improving accuracy.
Fig. 4 is a block diagram of a facial expression recognition device according to another exemplary embodiment. The device implements the functions of the method examples above; the functions may be implemented by hardware, or by hardware executing corresponding software. The device may include: a face detection module 310, a key point acquisition module 320, an image extraction module 330, and an expression recognition module 340.
The face detection module 310 is configured to detect a face region in an image to be recognized.
The key point acquisition module 320 is configured to obtain key points in the face region.
The image extraction module 330 is configured to extract a local image from the face region according to the key points.
The expression recognition module 340 is configured to recognize the local image with a trained expression recognition model to obtain an expression recognition result.
In one example, the image extraction module 330 includes an image block acquisition submodule 330a and an image block processing submodule 330b.
The image block acquisition submodule 330a is configured to obtain an image block around each key point.
The image block processing submodule 330b is configured to superimpose or stitch the image blocks in a preset order to obtain the local image.
In one example, the image block acquisition submodule 330a is configured to, for each key point, crop an image block of a predetermined size centered on the key point.
In one example, the expression recognition module 340 includes a feature extraction submodule 340a and a recognition determination submodule 340b.
The feature extraction submodule 340a is configured to extract feature information of the local image with the trained expression recognition model.
The recognition determination submodule 340b is configured to determine the expression recognition result from the feature information with the expression recognition model.
In one example, the key point acquisition module 320 includes a face scaling submodule 320a and a key point locating submodule 320b.
The face scaling submodule 320a is configured to scale the face region to a target size.
The key point locating submodule 320b is configured to locate the key points in the scaled face region.
In one example, the expression recognition model is a convolutional neural network model.
In summary, in the device provided by this embodiment, a trained expression recognition model performs both feature extraction at the face key points and expression classification, merging two separate steps into one, thereby reducing accumulated error and improving the accuracy of facial expression recognition. Furthermore, because only the feature information of the local images at the face key points is extracted, rather than global feature information of the entire face region, the features most indicative of emotional state can be extracted accurately and efficiently, further improving accuracy.
It should be noted that, when the device provided by the above embodiment implements its functions, the division into the functional modules described above is merely illustrative. In practical applications, the functions may be assigned to different functional modules as needed; that is, the internal structure of the device may be divided into different functional modules to complete all or part of the functions described above.
Regarding the devices in the above embodiments, the specific manner in which each module performs operations has been described in detail in the related method embodiments and will not be elaborated here.
An exemplary embodiment of the present disclosure further provides a facial expression recognition device capable of implementing the facial expression recognition method provided by the disclosure. The device includes: a processor, and a memory for storing instructions executable by the processor. The processor is configured to:
detect a face region in an image to be recognized;
obtain key points in the face region;
extract a local image from the face region according to the key points;
recognize the local image with a trained expression recognition model to obtain an expression recognition result.
Optionally, the processor is configured to:
obtain an image block around each key point;
superimpose or stitch the image blocks in a preset order to obtain the local image.
Optionally, the processor is configured to:
for each key point, crop an image block of a predetermined size centered on the key point.
Optionally, the processor is configured to:
extract feature information of the local image with the trained expression recognition model;
determine the expression recognition result from the feature information with the expression recognition model.
Optionally, the processor is configured to:
scale the face region to a target size;
locate the key points in the scaled face region.
Optionally, the expression recognition model is a convolutional neural network model.
Fig. 5 is a block diagram of a device 500 according to an exemplary embodiment. For example, the device 500 may be a mobile phone, a computer, a digital broadcast terminal, a messaging device, a game console, a tablet device, a medical device, fitness equipment, a personal digital assistant, or the like.
Referring to Fig. 5, the device 500 may include one or more of the following components: a processing component 502, a memory 504, a power component 506, a multimedia component 508, an audio component 510, an input/output (I/O) interface 512, a sensor component 514, and a communication component 516.
The processing component 502 typically controls the overall operations of the device 500, such as operations associated with display, telephone calls, data communications, camera operations, and recording operations. The processing component 502 may include one or more processors 520 to execute instructions to perform all or part of the steps of the above method. In addition, the processing component 502 may include one or more modules to facilitate interaction between the processing component 502 and other components. For example, the processing component 502 may include a multimedia module to facilitate interaction between the multimedia component 508 and the processing component 502.
The memory 504 is configured to store various types of data to support operation of the device 500. Examples of such data include instructions for any application or method operated on the device 500, contact data, phonebook data, messages, pictures, videos, and so on. The memory 504 may be implemented by any type of volatile or non-volatile storage device or a combination thereof, such as static random access memory (SRAM), electrically erasable programmable read-only memory (EEPROM), erasable programmable read-only memory (EPROM), programmable read-only memory (PROM), read-only memory (ROM), magnetic memory, flash memory, magnetic disk, or optical disk.
The power component 506 provides power to the various components of the device 500. The power component 506 may include a power management system, one or more power sources, and other components associated with generating, managing, and distributing power for the device 500.
The multimedia component 508 includes a screen providing an output interface between the device 500 and the user. In some embodiments, the screen may include a liquid crystal display (LCD) and a touch panel (TP). If the screen includes a touch panel, the screen may be implemented as a touch screen to receive input signals from the user. The touch panel includes one or more touch sensors to sense touches, swipes, and gestures on the touch panel. The touch sensors may not only sense the boundary of a touch or swipe action, but also detect the duration and pressure associated with the touch or swipe action. In some embodiments, the multimedia component 508 includes a front camera and/or a rear camera. When the device 500 is in an operating mode, such as a photographing mode or a video mode, the front camera and/or the rear camera may receive external multimedia data. Each of the front and rear cameras may be a fixed optical lens system or may have focal length and optical zoom capability.
The audio component 510 is configured to output and/or input audio signals. For example, the audio component 510 includes a microphone (MIC) configured to receive external audio signals when the device 500 is in an operating mode, such as a call mode, a recording mode, or a speech recognition mode. The received audio signal may be further stored in the memory 504 or transmitted via the communication component 516. In some embodiments, the audio component 510 further includes a speaker for outputting audio signals.
The I/O interface 512 provides an interface between the processing component 502 and peripheral interface modules, which may be a keyboard, a click wheel, buttons, or the like. The buttons may include, but are not limited to, a home button, a volume button, a start button, and a lock button.
The sensor component 514 includes one or more sensors to provide status assessments of various aspects of the device 500. For example, the sensor component 514 may detect the open/closed state of the device 500 and the relative positioning of components (e.g., the display and keypad of the device 500); the sensor component 514 may also detect a change in position of the device 500 or of a component of the device 500, the presence or absence of user contact with the device 500, the orientation or acceleration/deceleration of the device 500, and a change in temperature of the device 500. The sensor component 514 may include a proximity sensor configured to detect the presence of nearby objects without any physical contact. The sensor component 514 may also include a light sensor, such as a CMOS or CCD image sensor, for use in imaging applications. In some embodiments, the sensor component 514 may also include an accelerometer, a gyroscope, a magnetic sensor, a pressure sensor, or a temperature sensor.
The communication component 516 is configured to facilitate wired or wireless communication between the device 500 and other devices. The device 500 may access a wireless network based on a communication standard, such as Wi-Fi, 2G, or 3G, or a combination thereof. In one exemplary embodiment, the communication component 516 receives a broadcast signal or broadcast-related information from an external broadcast management system via a broadcast channel. In one exemplary embodiment, the communication component 516 further includes a near-field communication (NFC) module to facilitate short-range communication. For example, the NFC module may be implemented based on radio frequency identification (RFID) technology, Infrared Data Association (IrDA) technology, ultra-wideband (UWB) technology, Bluetooth (BT) technology, and other technologies.
In an exemplary embodiment, the device 500 may be implemented by one or more application-specific integrated circuits (ASICs), digital signal processors (DSPs), digital signal processing devices (DSPDs), programmable logic devices (PLDs), field-programmable gate arrays (FPGAs), controllers, microcontrollers, microprocessors, or other electronic components, for performing the above-described method.
In an exemplary embodiment, there is also provided a non-transitory computer-readable storage medium including instructions, such as the memory 504 including instructions executable by the processor 520 of the device 500 to perform the above-described method. For example, the non-transitory computer-readable storage medium may be a ROM, a random access memory (RAM), a CD-ROM, a magnetic tape, a floppy disk, an optical data storage device, or the like.
A non-transitory computer-readable storage medium is provided such that, when the instructions in the storage medium are executed by the processor of the device 500, the device 500 is enabled to perform the above-described method.
It should be understood that "a plurality of" as referred to herein means two or more. "And/or" describes an association relationship between associated objects and indicates that three relationships may exist; for example, "A and/or B" may indicate three cases: A exists alone, A and B exist simultaneously, and B exists alone. The character "/" generally indicates an "or" relationship between the associated objects.
Other embodiments of the disclosure will readily occur to those skilled in the art from consideration of the specification and practice of the invention disclosed herein. This application is intended to cover any variations, uses, or adaptations of the disclosure that follow its general principles and include such departures from the present disclosure as come within common knowledge or customary technical means in the art. The specification and embodiments are to be considered exemplary only, with the true scope and spirit of the disclosure being indicated by the following claims.
It should be understood that the disclosure is not limited to the precise structure described above and illustrated in the accompanying drawings, and that various modifications and changes may be made without departing from its scope. The scope of the disclosure is limited only by the appended claims.
Claims (13)
1. A facial expression recognition method, characterized in that the method comprises:
detecting a human face region from an image to be identified;
obtaining key points in the human face region;
extracting a local image from the human face region according to the key points;
recognizing the local image using a trained expression recognition model to obtain an expression recognition result.
2. The method according to claim 1, characterized in that extracting a local image from the human face region according to the key points comprises:
obtaining an image block around each key point;
superimposing or splicing the image blocks in a preset order to obtain the local image.
3. The method according to claim 2, characterized in that obtaining an image block around each key point comprises:
for each key point, cropping an image block of a predetermined size centered on the key point.
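Claims 2 and 3 describe cropping a fixed-size image block centered on each key point and splicing the blocks into one local image. The following NumPy sketch illustrates that step only; the 32-pixel patch size, the horizontal splicing order, and the zero-padding at the face border are illustrative assumptions, not details specified by the patent.

```python
import numpy as np

def crop_patch(face, cx, cy, size=32):
    """Crop a size x size block centered on key point (cx, cy).

    The face region is zero-padded first so that blocks near the
    border keep the predetermined size (an assumption; the patent
    does not state how border key points are handled).
    """
    half = size // 2
    padded = np.pad(face, ((half, half), (half, half), (0, 0)))
    # (cx, cy) in the original frame maps to (cx + half, cy + half)
    # in the padded frame, so the slice below is centered on it.
    return padded[cy:cy + size, cx:cx + size]

def local_image(face, keypoints, size=32):
    """Splice per-keypoint blocks side by side in a preset order."""
    patches = [crop_patch(face, x, y, size) for (x, y) in keypoints]
    return np.concatenate(patches, axis=1)

# Toy face region and three hypothetical key points (e.g. eyes, mouth).
face = np.random.rand(128, 128, 3)
kps = [(40, 50), (88, 50), (64, 95)]
out = local_image(face, kps)
print(out.shape)  # (32, 96, 3): three 32x32 blocks spliced horizontally
```

Superimposing the blocks along a new channel axis (`np.stack`) instead of splicing them side by side would be the other arrangement the claim allows.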
4. The method according to claim 1, characterized in that recognizing the local image using a trained expression recognition model to obtain an expression recognition result comprises:
extracting feature information of the local image using the trained expression recognition model;
determining the expression recognition result according to the feature information using the expression recognition model.
5. The method according to claim 1, characterized in that obtaining key points in the human face region comprises:
scaling the human face region to a target size;
locating the key points in the scaled human face region.
6. The method according to any one of claims 1 to 5, characterized in that the expression recognition model is a convolutional neural network model.
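Taken together, claims 1 to 6 describe a pipeline: detect the face region, scale it to a target size, locate key points, extract the local image, and classify it with a trained model. The sketch below shows only that data flow; the face detector, landmark locator, and classifier are stand-ins with made-up outputs (the patent specifies a trained convolutional neural network, which is replaced here by a random linear layer purely to keep the example self-contained), and the label set, box, and key-point positions are hypothetical.

```python
import numpy as np

LABELS = ["happy", "sad", "angry", "surprised", "neutral"]

def detect_face(image):
    """Stand-in face detector: return a bounding box (x, y, w, h).
    A real implementation would use a trained detector."""
    h, w = image.shape[:2]
    return (w // 4, h // 4, w // 2, h // 2)

def resize_nearest(region, target=128):
    """Scale the face region to a target size (claim 5),
    nearest-neighbor for simplicity."""
    h, w = region.shape[:2]
    rows = np.arange(target) * h // target
    cols = np.arange(target) * w // target
    return region[rows][:, cols]

def locate_keypoints(face):
    """Stand-in landmark locator: fixed positions for two eyes and
    the mouth, expressed on the scaled target-size face."""
    s = face.shape[0]
    return [(s // 3, s // 3), (2 * s // 3, s // 3), (s // 2, 3 * s // 4)]

def recognize(local_img, weights):
    """Stand-in for the trained expression recognition model (a CNN
    in the patent): a single linear layer over the flattened local
    image, with random weights, just to show the data flow."""
    scores = local_img.reshape(-1) @ weights
    return LABELS[int(np.argmax(scores))]

# End-to-end flow over a toy image.
rng = np.random.default_rng(0)
image = rng.random((256, 256, 3))
x, y, w, h = detect_face(image)                   # claim 1: detect face
face = resize_nearest(image[y:y + h, x:x + w])    # claim 5: scale region
kps = locate_keypoints(face)                      # claim 5: locate key points
patches = [face[cy - 16:cy + 16, cx - 16:cx + 16] for (cx, cy) in kps]
local = np.concatenate(patches, axis=1)           # claims 2-3: local image
weights = rng.random((local.size, len(LABELS)))
print(recognize(local, weights))                  # one of LABELS
```

The point of the single-model design in claim 4 is that feature extraction and classification happen inside one trained model rather than as two separately tuned stages, which is what the specification credits with reducing cumulative error.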
7. A facial expression recognition device, characterized in that the device comprises:
a face detection module, configured to detect a human face region from an image to be identified;
a key point acquisition module, configured to obtain key points in the human face region;
an image extraction module, configured to extract a local image from the human face region according to the key points;
an expression recognition module, configured to recognize the local image using a trained expression recognition model to obtain an expression recognition result.
8. The device according to claim 7, characterized in that the image extraction module comprises:
an image block acquisition submodule, configured to obtain an image block around each key point;
an image block processing submodule, configured to superimpose or splice the image blocks in a preset order to obtain the local image.
9. The device according to claim 8, characterized in that:
the image block acquisition submodule is configured to, for each key point, crop an image block of a predetermined size centered on the key point.
10. The device according to claim 7, characterized in that the expression recognition module comprises:
a feature extraction submodule, configured to extract feature information of the local image using the trained expression recognition model;
a recognition determination submodule, configured to determine the expression recognition result according to the feature information using the expression recognition model.
11. The device according to claim 7, characterized in that the key point acquisition module comprises:
a face scaling submodule, configured to scale the human face region to a target size;
a key point locating submodule, configured to locate the key points in the scaled human face region.
12. The device according to any one of claims 7 to 11, characterized in that the expression recognition model is a convolutional neural network model.
13. A facial expression recognition device, characterized in that the device comprises:
a processor;
a memory for storing instructions executable by the processor;
wherein the processor is configured to:
detect a human face region from an image to be identified;
obtain key points in the human face region;
extract a local image from the human face region according to the key points;
recognize the local image using a trained expression recognition model to obtain an expression recognition result.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201610653790.2A CN106295566B (en) | 2016-08-10 | 2016-08-10 | Facial expression recognizing method and device |
Publications (2)
Publication Number | Publication Date |
---|---|
CN106295566A true CN106295566A (en) | 2017-01-04 |
CN106295566B CN106295566B (en) | 2019-07-09 |
Family
ID=57668257
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201610653790.2A Active CN106295566B (en) | 2016-08-10 | 2016-08-10 | Facial expression recognizing method and device |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN106295566B (en) |
Citations (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN104346607A (en) * | 2014-11-06 | 2015-02-11 | 上海电机学院 | Face recognition method based on convolutional neural network |
CN104850825A (en) * | 2015-04-18 | 2015-08-19 | 中国计量学院 | Facial image face score calculating method based on convolutional neural network |
CN105005774A (en) * | 2015-07-28 | 2015-10-28 | 中国科学院自动化研究所 | Face relative relation recognition method based on convolutional neural network and device thereof |
CN105469087A (en) * | 2015-07-13 | 2016-04-06 | 百度在线网络技术(北京)有限公司 | Method for identifying clothes image, and labeling method and device of clothes image |
CN105654049A (en) * | 2015-12-29 | 2016-06-08 | 中国科学院深圳先进技术研究院 | Facial expression recognition method and device |
CN105825192A (en) * | 2016-03-24 | 2016-08-03 | 深圳大学 | Facial expression identification method and system |
Cited By (57)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
WO2018133034A1 (en) * | 2017-01-20 | 2018-07-26 | Intel Corporation | Dynamic emotion recognition in unconstrained scenarios |
US11151361B2 (en) | 2017-01-20 | 2021-10-19 | Intel Corporation | Dynamic emotion recognition in unconstrained scenarios |
CN106778751A (en) * | 2017-02-20 | 2017-05-31 | 迈吉客科技(北京)有限公司 | A kind of non-face ROI recognition methods and device |
CN106778751B (en) * | 2017-02-20 | 2020-08-21 | 迈吉客科技(北京)有限公司 | Non-facial ROI (region of interest) identification method and device |
WO2018149350A1 (en) * | 2017-02-20 | 2018-08-23 | 迈吉客科技(北京)有限公司 | Method and apparatus for recognising non-facial roi |
CN107369196A (en) * | 2017-06-30 | 2017-11-21 | 广东欧珀移动通信有限公司 | Expression, which packs, makees method, apparatus, storage medium and electronic equipment |
CN108304936B (en) * | 2017-07-12 | 2021-11-16 | 腾讯科技(深圳)有限公司 | Machine learning model training method and device, and expression image classification method and device |
US11537884B2 (en) | 2017-07-12 | 2022-12-27 | Tencent Technology (Shenzhen) Company Limited | Machine learning model training method and device, and expression image classification method and device |
CN108304936A (en) * | 2017-07-12 | 2018-07-20 | 腾讯科技(深圳)有限公司 | Machine learning model training method and device, facial expression image sorting technique and device |
CN107633207B (en) * | 2017-08-17 | 2018-10-12 | 平安科技(深圳)有限公司 | AU characteristic recognition methods, device and storage medium |
CN107679448A (en) * | 2017-08-17 | 2018-02-09 | 平安科技(深圳)有限公司 | Eyeball action-analysing method, device and storage medium |
CN107633204B (en) * | 2017-08-17 | 2019-01-29 | 平安科技(深圳)有限公司 | Face occlusion detection method, apparatus and storage medium |
CN107633207A (en) * | 2017-08-17 | 2018-01-26 | 平安科技(深圳)有限公司 | AU characteristic recognition methods, device and storage medium |
US10489636B2 (en) | 2017-08-17 | 2019-11-26 | Ping An Technology (Shenzhen) Co., Ltd. | Lip movement capturing method and device, and storage medium |
CN107633204A (en) * | 2017-08-17 | 2018-01-26 | 平安科技(深圳)有限公司 | Face occlusion detection method, apparatus and storage medium |
WO2019033568A1 (en) * | 2017-08-17 | 2019-02-21 | 平安科技(深圳)有限公司 | Lip movement capturing method, apparatus and storage medium |
CN107679447A (en) * | 2017-08-17 | 2018-02-09 | 平安科技(深圳)有限公司 | Facial characteristics point detecting method, device and storage medium |
CN107886070A (en) * | 2017-11-10 | 2018-04-06 | 北京小米移动软件有限公司 | Verification method, device and the equipment of facial image |
CN107832746A (en) * | 2017-12-01 | 2018-03-23 | 北京小米移动软件有限公司 | Expression recognition method and device |
CN107958230B (en) * | 2017-12-22 | 2020-06-23 | 中国科学院深圳先进技术研究院 | Facial expression recognition method and device |
CN107958230A (en) * | 2017-12-22 | 2018-04-24 | 中国科学院深圳先进技术研究院 | Facial expression recognizing method and device |
US11270099B2 (en) | 2017-12-29 | 2022-03-08 | Baidu Online Network Technology (Beijing) Co., Ltd. | Method and apparatus for generating facial feature |
CN108073910A (en) * | 2017-12-29 | 2018-05-25 | 百度在线网络技术(北京)有限公司 | For generating the method and apparatus of face characteristic |
CN108197593B (en) * | 2018-01-23 | 2022-02-18 | 深圳极视角科技有限公司 | Multi-size facial expression recognition method and device based on three-point positioning method |
CN108197593A (en) * | 2018-01-23 | 2018-06-22 | 深圳极视角科技有限公司 | More size face's expression recognition methods and device based on three-point positioning method |
CN108304793A (en) * | 2018-01-26 | 2018-07-20 | 北京易真学思教育科技有限公司 | On-line study analysis system and method |
CN108304793B (en) * | 2018-01-26 | 2021-01-08 | 北京世纪好未来教育科技有限公司 | Online learning analysis system and method |
CN108304709A (en) * | 2018-01-31 | 2018-07-20 | 广东欧珀移动通信有限公司 | Face unlocking method and related product |
CN108304709B (en) * | 2018-01-31 | 2022-01-04 | Oppo广东移动通信有限公司 | Face unlocking method and related product |
CN108399370A (en) * | 2018-02-02 | 2018-08-14 | 达闼科技(北京)有限公司 | The method and cloud system of Expression Recognition |
CN108596221B (en) * | 2018-04-10 | 2020-12-01 | 江河瑞通(北京)技术有限公司 | Image recognition method and device for scale reading |
CN108596221A (en) * | 2018-04-10 | 2018-09-28 | 江河瑞通(北京)技术有限公司 | The image-recognizing method and equipment of rod reading |
CN108710829A (en) * | 2018-04-19 | 2018-10-26 | 北京红云智胜科技有限公司 | A method of the expression classification based on deep learning and the detection of micro- expression |
CN110147805A (en) * | 2018-07-23 | 2019-08-20 | 腾讯科技(深圳)有限公司 | Image processing method, device, terminal and storage medium |
US11631275B2 (en) | 2018-07-23 | 2023-04-18 | Tencent Technology (Shenzhen) Company Limited | Image processing method and apparatus, terminal, and computer-readable storage medium |
CN109241835A (en) * | 2018-07-27 | 2019-01-18 | 上海商汤智能科技有限公司 | Image processing method and device, electronic equipment and storage medium |
CN109190487A (en) * | 2018-08-07 | 2019-01-11 | 平安科技(深圳)有限公司 | Face Emotion identification method, apparatus, computer equipment and storage medium |
CN109271930B (en) * | 2018-09-14 | 2020-11-13 | 广州杰赛科技股份有限公司 | Micro-expression recognition method, device and storage medium |
CN109145871B (en) * | 2018-09-14 | 2020-09-15 | 广州杰赛科技股份有限公司 | Psychological behavior recognition method, device and storage medium |
CN109145871A (en) * | 2018-09-14 | 2019-01-04 | 广州杰赛科技股份有限公司 | Psychology and behavior recognition methods, device and storage medium |
CN109271930A (en) * | 2018-09-14 | 2019-01-25 | 广州杰赛科技股份有限公司 | Micro- expression recognition method, device and storage medium |
CN112913253A (en) * | 2018-11-13 | 2021-06-04 | 北京比特大陆科技有限公司 | Image processing method, apparatus, device, storage medium, and program product |
CN109934173A (en) * | 2019-03-14 | 2019-06-25 | 腾讯科技(深圳)有限公司 | Expression recognition method, device and electronic equipment |
CN109934173B (en) * | 2019-03-14 | 2023-11-21 | 腾讯科技(深圳)有限公司 | Expression recognition method and device and electronic equipment |
CN109977867A (en) * | 2019-03-26 | 2019-07-05 | 厦门瑞为信息技术有限公司 | A kind of infrared biopsy method based on machine learning multiple features fusion |
CN110020638B (en) * | 2019-04-17 | 2023-05-12 | 唐晓颖 | Facial expression recognition method, device, equipment and medium |
CN110020638A (en) * | 2019-04-17 | 2019-07-16 | 唐晓颖 | Facial expression recognizing method, device, equipment and medium |
CN111008589A (en) * | 2019-12-02 | 2020-04-14 | 杭州网易云音乐科技有限公司 | Face key point detection method, medium, device and computing equipment |
CN111008589B (en) * | 2019-12-02 | 2024-04-09 | 杭州网易云音乐科技有限公司 | Face key point detection method, medium, device and computing equipment |
WO2021135509A1 (en) * | 2019-12-30 | 2021-07-08 | 腾讯科技(深圳)有限公司 | Image processing method and apparatus, electronic device, and storage medium |
CN113128309A (en) * | 2020-01-10 | 2021-07-16 | 中移(上海)信息通信科技有限公司 | Facial expression recognition method, device, equipment and medium |
CN114120386A (en) * | 2020-08-31 | 2022-03-01 | 腾讯科技(深圳)有限公司 | Face recognition method, device, equipment and storage medium |
CN112580527A (en) * | 2020-12-22 | 2021-03-30 | 之江实验室 | Facial expression recognition method based on convolution long-term and short-term memory network |
CN112818838A (en) * | 2021-01-29 | 2021-05-18 | 北京嘀嘀无限科技发展有限公司 | Expression recognition method and device and electronic equipment |
CN113807205A (en) * | 2021-08-30 | 2021-12-17 | 中科尚易健康科技(北京)有限公司 | Locally enhanced human meridian recognition method and device, equipment and storage medium |
CN117373100A (en) * | 2023-12-08 | 2024-01-09 | 成都乐超人科技有限公司 | Face recognition method and system based on differential quantization local binary pattern |
CN117373100B (en) * | 2023-12-08 | 2024-02-23 | 成都乐超人科技有限公司 | Face recognition method and system based on differential quantization local binary pattern |
Also Published As
Publication number | Publication date |
---|---|
CN106295566B (en) | 2019-07-09 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN106295566B (en) | Facial expression recognizing method and device | |
CN105631408B (en) | Face photo album processing method and device based on video | |
CN106548145A (en) | Image-recognizing method and device | |
CN105631403B (en) | Face identification method and device | |
CN104408402B (en) | Face identification method and device | |
CN106295511B (en) | Face tracking method and device | |
CN105426867A (en) | Face identification verification method and apparatus | |
CN104408426B (en) | Facial image glasses minimizing technology and device | |
CN106339680A (en) | Human face key point positioning method and device | |
CN106295515A (en) | Determine the method and device of human face region in image | |
CN105224924A (en) | Living body faces recognition methods and device | |
CN105095881A (en) | Method, apparatus and terminal for face identification | |
CN105488527A (en) | Image classification method and apparatus | |
CN107886070A (en) | Verification method, device and the equipment of facial image | |
CN107463903B (en) | Face key point positioning method and device | |
CN105512605A (en) | Face image processing method and device | |
CN106778531A (en) | Face detection method and device | |
CN105528078B (en) | The method and device of controlling electronic devices | |
CN106980840A (en) | Shape of face matching process, device and storage medium | |
CN104077563B (en) | Face identification method and device | |
CN106295530A (en) | Face identification method and device | |
CN106971164A (en) | Shape of face matching process and device | |
CN104077597B (en) | Image classification method and device | |
CN106169075A (en) | Auth method and device | |
CN105354560A (en) | Fingerprint identification method and device |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
C06 | Publication | ||
PB01 | Publication | ||
C10 | Entry into substantive examination | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||