CN106355066A - Face authentication method and face authentication device - Google Patents
- Publication number
- CN106355066A (application CN201610744512.8A)
- Authority
- CN
- China
- Prior art keywords
- verified
- human face
- face
- images
- convolutional neural
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F21/00—Security arrangements for protecting computers, components thereof, programs or data against unauthorised activity
- G06F21/30—Authentication, i.e. establishing the identity or authorisation of security principals
- G06F21/31—User authentication
- G06F21/32—User authentication using biometric data, e.g. fingerprints, iris scans or voiceprints
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data
- G06V40/10—Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
- G06V40/16—Human faces, e.g. facial parts, sketches or expressions
- G06V40/161—Detection; Localisation; Normalisation
Abstract
The invention relates to the field of computer vision and discloses a face authentication method and a face authentication device. The face authentication method comprises the steps of: selecting N face poses and presetting a corresponding bilateral deep convolutional neural network model for each face pose, where N is a natural number; determining the face pose of each of two images to be verified; selecting the corresponding bilateral deep convolutional neural network model according to the determined face poses; discriminating the two images to be verified with the selected model; and confirming, from the discrimination results, whether the faces in the two images to be verified are the same. The method and device address the low discrimination rate and inaccurate recognition that existing approaches suffer when the environment is complex or the images to be verified differ from one another.
Description
Technical field
The present invention relates to the field of computer vision, and in particular to a face verification method and a face verification device.
Background technology
With the development of computer vision and image recognition technology, face verification — confirming whether two images show the same face — has been widely adopted in scenarios such as real-name verification, user login, and transactions.

Current face verification methods fall broadly into two classes: methods based on traditional feature extraction and comparison, and methods based on deep learning. The inventors have found that the prior art has at least the following problems. Traditional feature comparison is fast, but it is strongly affected by the environment, so its discrimination rate fluctuates considerably overall. Neural-network-based methods achieve a higher discrimination rate and tolerate environmental change well, but the commonly used approach — extracting features with a single deep neural network model and then comparing them — must compute the features of the two images one after the other at recognition time, which is time-consuming; it is essentially just a feature extractor and does not specifically account for the differences between the two images during verification. As a result, verification is easily disturbed by the environment and by differences inherent in the images to be verified, leading to a low discrimination rate and insufficiently accurate results.
Summary of the invention
The purpose of embodiments of the present invention is to provide a face verification method and a face verification device that can still accurately verify the faces in two images to be verified even when the scene around the subject is complex or the images themselves differ, substantially improving the verification rate and accuracy of face verification, with good robustness.
To solve the above technical problem, embodiments of the present invention provide a face verification method, comprising: selecting N face poses and presetting a corresponding bilateral deep convolutional neural network model for each face pose, where N is a natural number; determining the face pose of each of the two images to be verified; selecting the corresponding bilateral deep convolutional neural network model according to the determined face poses; discriminating the two images to be verified with the selected model; and confirming, from the discrimination results, whether the faces in the two images to be verified are the same.
Embodiments of the present invention further provide a face verification device, comprising: a presetting module for selecting N face poses and presetting a corresponding bilateral deep convolutional neural network model for each face pose, where N is a natural number; a determining module for determining the face pose of each of the two images to be verified; a selecting module for selecting the corresponding bilateral deep convolutional neural network model according to the determined face poses; a discrimination module for discriminating the two images to be verified with the selected model; and a verification module for confirming, from the discrimination results, whether the faces in the two images to be verified are the same.
Compared with the prior art, embodiments of the present invention perform face verification with a cascade of bilateral verification networks: the face pose of each of the two images to be verified is determined, and the bilateral deep convolutional neural network model corresponding to the determined pose is selected, so that images of different face poses are discriminated by different models. Because images of the same pose are more directly comparable, this greatly improves the verification rate and accuracy of face verification; it also handles images with complex scenes well and has good robustness.
In addition, the face poses are: frontal, left profile, right profile, head raised, or head lowered. Dividing face poses into these common categories lets an image to be verified be matched to its pose quickly and efficiently, improving verification efficiency.
In addition, the preset bilateral deep convolutional neural network models are obtained as follows. A sample library containing M face images is preset, where M is a natural number greater than 2, and the face pose of each of the M face images is determined. The face images in the sample library are then paired two by two: pairs in which both images have the first face pose are selected as the same-pose group, and pairs in which one image has the first face pose and the other does not are selected as the different-pose group, where the first face pose is one of the N face poses. Using a preset bilateral deep convolutional neural network architecture, the image pairs in the same-pose group and the different-pose group are trained separately, yielding the preset bilateral deep convolutional neural network model corresponding to the first face pose. Using a machine learning method in this way allows the preset models to be obtained quickly.
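The grouping described above can be sketched as follows. The function name, the toy image identifiers, and the pose labels are all illustrative assumptions; a real sample library would hold image data rather than id strings, and the resulting pairs would feed the training of the pose-specific model.

```python
import itertools

def build_training_pairs(samples, target_pose):
    """Split all pairwise combinations of a sample library into a same-pose
    group and a different-pose group for one target ("first") pose.
    `samples` is a list of (image_id, pose) tuples."""
    same_pose, diff_pose = [], []
    for (id_a, pose_a), (id_b, pose_b) in itertools.combinations(samples, 2):
        if pose_a == target_pose and pose_b == target_pose:
            same_pose.append((id_a, id_b))      # both have the first pose
        elif (pose_a == target_pose) != (pose_b == target_pose):
            diff_pose.append((id_a, id_b))      # exactly one has it
    return same_pose, diff_pose

# A toy library of four images with known poses:
samples = [("img0", "frontal"), ("img1", "left"),
           ("img2", "frontal"), ("img3", "right")]
same, diff = build_training_pairs(samples, "frontal")
```

This variant allows an image to appear in several pairs; as the text notes, a pairing that forbids repetition is equally possible.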
In addition, when the face poses of the M face images are determined, the face images are images that have been corrected. Correcting the face images increases how recognizable the faces in the images to be verified are.
In addition, determining the face pose of each of the two images to be verified specifically includes: locating the key points of the face in each of the two images, and determining the face poses of the two images from the localization results. Locating the face key points with the SDM (Supervised Descent Method) algorithm makes it possible to obtain the face poses in the two images accurately.
In addition, when the face poses of the two images to be verified are determined from the localization results, the images to be verified are images that have been corrected. Correcting the images to be verified makes the face poses in them easier to distinguish.
In addition, when the corresponding bilateral deep convolutional neural network model is selected according to the determined face poses: if the poses determined for the two images to be verified differ, the models corresponding to each of the two determined poses are selected, and each selected model discriminates both images to be verified, producing four discrimination results. By discriminating the face poses in the two images — distinguishing the same-pose case from the different-pose case — and selecting the bilateral deep convolutional neural network models accordingly, discrimination speed and accuracy are improved.
In addition, when the discrimination results are used to confirm whether the faces in the two images to be verified are the same: if the discrimination results disagree, the four discrimination results are fused with the evidence theory method to determine whether the faces are the same. Fusing the four results obtained when the two images have different face poses makes the final face verification result more accurate.
Brief description of the drawings
Fig. 1 is a flow chart of a face verification method according to the first embodiment of the invention;
Fig. 2 is a flow chart of a face verification method according to the second embodiment of the invention;
Fig. 3 is a schematic diagram of face detection in the face verification method of the second embodiment;
Fig. 4 is a schematic diagram of face key point localization in the face verification method of the second embodiment;
Fig. 5 is a flow chart of a face verification method according to the third embodiment of the invention;
Fig. 6 is a flow chart of a face verification method according to the fourth embodiment of the invention;
Fig. 7 is a structural block diagram of a face verification device according to the fifth embodiment of the invention;
Fig. 8 is a structural block diagram of the actual device structure of a user terminal according to the sixth embodiment of the invention.
Detailed description of the embodiments
To make the objects, technical solutions, and advantages of the present invention clearer, embodiments of the present invention are explained in detail below with reference to the accompanying drawings. Those skilled in the art will understand, however, that many technical details are set out in each embodiment so that the reader may better understand the application; the technical solution claimed in this application can still be realized without these technical details, and with various variations and modifications based on the following embodiments.
The first embodiment of the present invention relates to a face verification method, the specific operation flow of which is shown in Fig. 1.
In step 101, three face poses are selected, and a bilateral deep convolutional neural network model is preset for each. Specifically, after the three face poses are selected, a corresponding bilateral deep convolutional neural network model must be preset for each face pose. It should be noted that in practical applications the number of selected face poses is not limited to three; there may be more.
In step 102, the face pose of each of the two images to be verified is determined.

In step 103, the bilateral deep convolutional neural network model is selected. Specifically, the model is selected according to the determined face poses.

In step 104, the two images to be verified are discriminated. Specifically, the two images are discriminated using the selected bilateral deep convolutional neural network model.

In step 105, whether the faces in the two images to be verified are the same is confirmed. Specifically, this confirmation uses the discrimination results for the two images to be verified.
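The cascade of steps 101-105 can be sketched as control flow. Here `pose_of`, `models`, and `discriminate` are hypothetical stand-ins for the trained components (a pose classifier, the pose-indexed model table, and the bilateral network's pair score), and the different-pose branch uses a plain AND where the later embodiments fuse results with evidence theory.

```python
def verify_faces(img_a, img_b, pose_of, models, discriminate, threshold=0.5):
    """Sketch of steps 101-105: determine each pose, pick the matching
    bilateral deep CNN, score the pair, and threshold the score."""
    pose_a, pose_b = pose_of(img_a), pose_of(img_b)        # step 102
    if pose_a == pose_b:
        model = models[pose_a]                             # step 103
        score = discriminate(model, img_a, img_b)          # step 104
        return score > threshold                           # step 105
    # Different poses: apply both pose-specific models (a plain AND stands
    # in for the evidence-theory fusion of the later embodiments).
    return all(discriminate(models[p], img_a, img_b) > threshold
               for p in (pose_a, pose_b))

# Stub components, just to exercise the control flow:
poses = {"a": "frontal", "b": "frontal"}
models = {"frontal": "model_frontal"}
result = verify_faces("a", "b", poses.get, models, lambda m, x, y: 0.9)
```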
In this embodiment, images to be verified are discriminated by pose during face verification, with different poses handled by different bilateral deep convolutional neural network models. Because images of the same pose are more directly comparable, this greatly improves verification accuracy; it also suits images with complex scenes and has good robustness.
The second embodiment of the present invention relates to a face verification method and is a refinement of the first embodiment: the preset bilateral deep convolutional neural network models are obtained with a machine learning method, and the face pose of an image to be verified is determined from the key points of the face, making face verification more accurate. The specific operation flow is shown in Fig. 2.
In step 201, three face poses are selected, and the bilateral deep convolutional neural network models are preset using a machine learning method. Specifically, in implementing this embodiment, presetting a bilateral deep convolutional neural network model for each of the three selected face poses includes the following.

First, a sample library containing M face images is preset, where M is a natural number greater than 2.

Then the face pose of each of the M face images is determined, and the face images in the sample library are paired two by two. Pairs in which both images have the first face pose are selected as the same-pose group; pairs in which one image has the first face pose and the other does not are selected as the different-pose group, where the first face pose is one of the three face poses. The pairing may either allow a face image to appear in more than one pair or forbid such repetition.
Finally, using the preset bilateral deep convolutional neural network architecture, the image pairs in the same-pose group and the different-pose group are trained separately, yielding the preset bilateral deep convolutional neural network model corresponding to the first face pose.

For example, suppose the preset sample library contains 6000 face images, covering the three face poses: frontal, left profile, and right profile. The images are then paired two by two; if the two images in a pair both show the first person and their face poses are also the same, the pair goes into the same-pose group, otherwise into the different-pose group. After grouping, training with the preset bilateral deep convolutional neural network architecture yields the preset model corresponding to the first face pose.
It should be noted that in this embodiment the face poses may be frontal, left profile, and right profile; in practical applications the selected poses are not limited to these three and may also include head raised, head lowered, and other poses, which are not enumerated here one by one. Moreover, when the face poses of the M face images are determined, the face images are images that have been corrected.
In step 202, the face pose of each of the two images to be verified is determined using the key points of the face. Specifically, the poses may be determined by key point localization: first the key points of the face in each of the two images are located, and then the face poses of the two images are determined from the localization results.
It should be noted that in this embodiment face images are acquired with the existing Haar-like-feature face detection method; as shown in Fig. 3, the face region is cropped from the image to be verified. Face key point localization is then performed with the SDM (Supervised Descent Method) algorithm, as shown in Fig. 4, taking as key points the eyebrows, eyes, mouth, nose, or any combination of these positions in the image.
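Once the key points are located, a pose label can be derived from their geometry. The sketch below is a heuristic illustration, not the SDM regressor itself: it compares the nose tip's horizontal position with the midpoint of the two eyes, and the landmark names and the 0.2 cut-off are assumptions made for the example.

```python
def classify_pose(landmarks):
    """Rough pose label from located key points: a nose tip well to the
    left or right of the eye midpoint suggests a profile view."""
    lx, _ = landmarks["left_eye"]
    rx, _ = landmarks["right_eye"]
    nx, _ = landmarks["nose"]
    eye_mid = (lx + rx) / 2.0
    offset = (nx - eye_mid) / (rx - lx)   # normalised horizontal offset
    if offset < -0.2:
        return "left"                     # nose shifted toward the left eye
    if offset > 0.2:
        return "right"
    return "frontal"
```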
In step 203, the bilateral deep convolutional neural network model is selected. Specifically, when the face poses in the two images to be verified are confirmed to be the same, the bilateral deep convolutional neural network model corresponding to that pose is selected for discriminating the two images.

In step 204, the two images to be verified are discriminated. Specifically, the selected bilateral deep convolutional neural network model discriminates each of the two images, producing two discrimination results.
In step 205, whether the faces in the two images to be verified are the same is confirmed. Specifically, a threshold for the judgment, for example 0.5, is preset in the system, and the result obtained after discrimination by the bilateral deep convolutional neural network model is compared against it. If the discrimination result exceeds 0.5, the faces in the two images are judged to be the same face and verification succeeds; otherwise they are judged not to be the same face and verification fails.
It should be noted that the preset threshold may be set according to the practical situation and is not limited to the value 0.5.
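The decision rule of step 205 amounts to a single comparison with the configurable threshold; the function name below is an illustrative assumption, and 0.5 is just the example value from the text.

```python
def same_face(score, threshold=0.5):
    """Step 205's decision: a discrimination score above the preset
    threshold means the two images show the same face."""
    return score > threshold
```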
In this embodiment, the preset bilateral deep convolutional neural network models are obtained by fast training with a machine learning method, the face poses in the two images to be verified are determined accurately from face key points, and the face images are corrected before subsequent operations, so that images to be verified can be verified accurately even in complex scenes, with better robustness.
The third embodiment of the present invention relates to a face verification method and is substantially the same as the second embodiment. The main difference is that when the face poses in the two images to be verified differ, two different bilateral deep convolutional neural network models are selected for discrimination, and the resulting discrimination results are fused using evidence theory. The specific operation flow is shown in Fig. 5.
In step 501, three face poses are selected, and the bilateral deep convolutional neural network models are preset using a machine learning method.

In step 502, the face pose of each of the two images to be verified is determined using the key points of the face.

Steps 501 and 502 in Fig. 5 are identical to steps 201 and 202 in Fig. 2: face poses are selected and trained with machine learning to obtain the bilateral deep convolutional neural network model corresponding to each pose, and the face poses of the two images to be verified are then confirmed from the face key points. They are not repeated here.
In step 503, an image to be verified corresponding two bilateral depth convolutional neural networks models respectively are chosen.
Specifically, when the human face posture in two images to be verified is confirmed as differing, choose from Sample Storehouse
Two bilateral depth convolutional neural networks models corresponding to this two images to be verified respectively.
In step 504, two images to be verified are differentiated.
Specifically, using the two bilateral depth convolutional neural networks models chosen, two images to be verified are entered respectively
Row differentiates, obtains 4 differentiation results.
In step 505, whether the faces in the two images to be verified are the same is confirmed using the evidence theory method. Specifically, a threshold for the judgment, for example 0.5, is preset in the system. The discrimination results obtained from the two bilateral deep convolutional neural network models are fused according to evidence theory, and the two fused results are each compared with the preset threshold. If both fused results exceed 0.5, the faces in the two images are judged to be the same face and verification succeeds; otherwise they are judged not to be the same face and verification fails.
It should be noted that the evidence theory adopted in this embodiment was first proposed by Dempster in 1967 and further developed by his student Shafer in 1976, and is therefore also called DS evidence theory. In DS evidence theory, the complete set of mutually exclusive elementary propositions (hypotheses) is called the frame of discernment; it represents all possible answers to a question, of which only one is correct. A subset of this frame is called a proposition, and the degree of trust assigned to each proposition is called its basic probability assignment (BPA, also called the m function); m(A) is the basic credibility, reflecting the degree of reliability of A. The belief function Bel(A) represents the degree of trust in proposition A, and the plausibility function Pl(A) represents the degree of trust that A is not false, i.e., an uncertainty measure of how plausible A is. In fact, [Bel(A), Pl(A)] is the uncertainty interval of A, [0, Bel(A)] is the supporting evidence interval of proposition A, [0, Pl(A)] is the plausibility interval of proposition A, and [Pl(A), 1] is the rejecting evidence interval of proposition A.
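The fusion named here is Dempster's rule of combination. The sketch below applies it to two basic probability assignments over a minimal frame for face verification — the singleton hypotheses "same" and "different" plus the full frame "theta" representing uncertainty; the frame, the dictionary representation, and the function name are assumptions made for the example.

```python
def dempster_combine(m1, m2):
    """Dempster's rule for two BPAs over {same, different} with 'theta'
    standing for the whole frame: agreeing (or theta-compatible) mass
    products are accumulated, conflicting mass is discarded, and the
    result is renormalised by 1 - conflict."""
    frame = ("same", "different", "theta")
    combined = {h: 0.0 for h in frame}
    conflict = 0.0
    for h1, v1 in m1.items():
        for h2, v2 in m2.items():
            if h1 == h2:
                combined[h1] += v1 * v2        # identical focal elements
            elif h1 == "theta":
                combined[h2] += v1 * v2        # theta intersects anything
            elif h2 == "theta":
                combined[h1] += v1 * v2
            else:
                conflict += v1 * v2            # same ∩ different = empty
    k = 1.0 - conflict
    return {h: v / k for h, v in combined.items()}
```

With m1 = {same: 0.6, different: 0.2, theta: 0.2} and m2 = {same: 0.7, different: 0.1, theta: 0.2}, the conflict mass is 0.2 and the fused belief in "same" rises to 0.85, illustrating how two weakly agreeing discrimination results reinforce each other.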
Because DS evidence theory is common knowledge in the art, those skilled in the art can fuse the discrimination results according to DS evidence theory based on the prior art, so as to determine whether the faces in the images to be verified are the same; this is not repeated here.
In this embodiment, the preset bilateral deep convolutional neural network models are obtained by fast training with a machine learning method; the face poses in the two images to be verified are determined accurately from face key points after the face images are corrected; and when the face poses in the two images differ, the discrimination results obtained from the two different bilateral deep convolutional neural network models are fused by DS evidence theory. This further improves the verification accuracy for faces in images to be verified, adapts better to complex scenes, and has better robustness.
The fourth embodiment of the present invention relates to a face verification method and is a refinement of the third embodiment: before the face poses of the two images to be verified are determined from face key points, the two images are first corrected, which effectively increases how recognizable the faces in them are. The specific operation flow is shown in Fig. 6.
In step 601, three face poses are selected, and the bilateral deep convolutional neural network models are preset using a machine learning method. Step 601 in Fig. 6 is identical to step 201 in Fig. 2: face poses are selected and trained with machine learning to obtain the bilateral deep convolutional neural network model corresponding to each pose; this is not repeated here.
In step 602, the two images to be verified are corrected. Specifically, face images are acquired with the existing Haar-like-feature face detection method and the face region is cropped from the image to be verified. Face key point localization is then performed with the SDM (Supervised Descent Method) algorithm, taking as key points the eyebrows, eyes, mouth, nose, or any combination of these positions in the image. Finally, the key points are used to rotate and/or deform the images to be verified, completing the correction of the two images.
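The rotation part of this correction reduces to computing the angle that makes the eye line horizontal; the actual rotation and warping would be applied with an image library, so the sketch below (with its hypothetical function name) only computes the angle from two eye key points.

```python
import math

def alignment_angle(left_eye, right_eye):
    """Angle in degrees by which to rotate the image so that the line
    between the two eye key points becomes horizontal."""
    dx = right_eye[0] - left_eye[0]
    dy = right_eye[1] - left_eye[1]
    return math.degrees(math.atan2(dy, dx))
```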
In step 603, the face pose of each of the two images to be verified is determined using the key points of the face. Specifically, the poses may be determined by key point localization: first the key points of the face in each of the two images are located, and then the face poses of the two images are determined from the localization results.
In step 604, the two bilateral deep convolutional neural network models corresponding to the two images to be verified are selected. Specifically, when the face poses in the two images are confirmed to differ, the two models corresponding to the respective poses of the two images are selected.

In step 605, the two images to be verified are discriminated. Specifically, each of the two selected bilateral deep convolutional neural network models discriminates the two images, producing four discrimination results.
In step 606, whether the faces in the two images to be verified are the same is confirmed using the evidence theory method. Specifically, a threshold for the judgment, for example 0.5, is preset in the system. The discrimination results obtained from the two bilateral deep convolutional neural network models are fused according to evidence theory, and the two fused results are each compared with the preset threshold. If both fused results exceed 0.5, the faces in the two images are judged to be the same face and verification succeeds; otherwise they are judged not to be the same face and verification fails.
It should be noted that in practical applications, when the face poses in the two images to be verified are the same, the two images may likewise be corrected first and the subsequent judgment performed afterwards, which can further improve the accuracy of face verification for the images.
In this embodiment, the face images in the two images to be verified and in the sample library are corrected; bilateral deep convolutional neural network models are preset with a machine learning method for the multiple selected face poses; the face poses in the images to be verified are determined and the corresponding bilateral deep convolutional neural network models are selected for judgment; and the results are fused by DS evidence theory. This substantially increases how recognizable the faces in the images to be verified are, effectively improves the accuracy of face verification, and provides good robustness.
The division of the above methods into steps is merely for clarity of description; when implemented, steps may be merged into one or a step may be split into several, and all such arrangements fall within the protection scope of this patent as long as they contain the same logical relationships. Adding insignificant modifications to, or introducing insignificant designs into, an algorithm or flow without changing its core design likewise falls within the protection scope of this patent.
The fifth embodiment of the invention relates to a face verification device, the specific structure of which is shown in Fig. 7. The face verification device 700 includes: a presetting module 701, a determining module 702, a selecting module 703, a discrimination module 704, and a verification module 705.
The presetting module 701 selects N face poses and presets a corresponding bilateral deep convolutional neural network model for each face pose, where N is a natural number.
The determining module 702 determines the face pose of each of the two images to be verified.
The selecting module 703 selects the corresponding bilateral deep convolutional neural network model according to the determined face poses.
The discrimination module 704 discriminates the two images to be verified using the selected bilateral deep convolutional neural network model.
The verification module 705 confirms, from the discrimination results, whether the faces in the two images to be verified are the same.
In face verification with the device provided by this embodiment, the presetting module 701 selects multiple face poses and presets a corresponding bilateral deep convolutional neural network model for each; the determining module 702 determines the face poses of the two images to be verified; the choosing module 703 then selects, and the discrimination module 704 applies, the bilateral deep convolutional neural network model matching each pose; and the verification module 705 finally produces the result. Because images of the same pose are more comparable, this verification scheme greatly improves discrimination accuracy; it also suits images with complex scenes and has good robustness.
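As a rough illustration of this module layout, the following Python sketch conditions a two-branch ("bilateral") verifier on the estimated pose of each image. All class, function, and pose names here are invented for illustration; the patent specifies no code.

```python
# Illustrative sketch only: the pose names, the pose_estimator callable, and
# the per-pose model callables are assumptions, not taken from the patent.
POSES = ("frontal", "left_profile", "right_profile", "head_up", "head_down")

class FaceVerifier:
    def __init__(self, models, pose_estimator, threshold=0.5):
        self.models = models                  # presetting module 701: pose -> bilateral model
        self.pose_estimator = pose_estimator  # determining module 702
        self.threshold = threshold

    def verify(self, img_a, img_b):
        # determining module 702: estimate the pose of each image
        pose_a = self.pose_estimator(img_a)
        pose_b = self.pose_estimator(img_b)
        # choosing module 703: one model if the poses match, both otherwise
        chosen = {self.models[p] for p in (pose_a, pose_b)}
        # discrimination module 704: each chosen model scores the image pair
        scores = [model(img_a, img_b) for model in chosen]
        # verification module 705: fuse the scores into a same/different decision
        return sum(scores) / len(scores) >= self.threshold
```

With stub models (e.g. lambdas returning fixed similarity scores), the class exercises the module interplay without any trained network.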
It can be seen that this embodiment is the system embodiment corresponding to the first embodiment, and the two can be implemented in cooperation. The relevant technical details mentioned in the first embodiment remain valid in this embodiment and, to reduce repetition, are not repeated here. Correspondingly, the relevant technical details mentioned in this embodiment also apply to the first embodiment.
The actual device structure of a user terminal according to the present invention is described below.
The sixth embodiment of the present invention relates to a user terminal whose concrete structure is shown in Figure 8. The user terminal 800 includes: a memory 801, a processor 802, and a display 803. The memory 801 stores code executable by the processor 802 and other information. The processor 802 is the core of the terminal: the functions handled by the determining module, choosing module, and discrimination module in the device embodiment above are mainly realized by the processor 802. The display 803 shows the data processed by the processor 802; it is also equipped with a camera that can capture input, which is then passed to the processor 802 for processing.
In this embodiment, after the display 803 of the user terminal 800 (via its camera) captures a face image to be verified, the image is passed to the processor 802 for face detection and key-point location, and face correction is then performed. Discrimination is carried out with the preset bilateral deep convolutional neural network model, corresponding to each face pose, stored in the memory 801; the discrimination results are obtained and fused to determine the face in the images to be recognized, completing face verification, and the outcome is shown on the display 803.
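The flow just described (capture, face detection, key-point location, correction, pose-specific discrimination, fusion, display) can be sketched as a single function. Every callable below is an injected stand-in, since the patent names no concrete APIs.

```python
# Hedged sketch of the terminal's pipeline; detect, keypoints, align, pose_of,
# models, and fuse are placeholder callables, not APIs from the patent.
def verify_pipeline(img_a, img_b, detect, keypoints, align, pose_of,
                    models, fuse, threshold=0.5):
    faces = []
    for img in (img_a, img_b):
        face = detect(img)              # face detection (processor 802)
        pts = keypoints(face)           # key-point location
        faces.append(align(face, pts))  # correction: rotate and/or deform
    # choose the preset model(s) stored per pose (memory 801)
    chosen = {models[pose_of(f)] for f in faces}
    # discriminate the pair with each chosen bilateral model
    results = [model(faces[0], faces[1]) for model in chosen]
    # fuse the discrimination results into the final same/different decision
    return fuse(results) >= threshold   # result shown on display 803
```

Passing identity stubs for the vision stages makes the control flow testable in isolation from any real detector or network.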
It should be noted that the modules involved in this embodiment are logic modules. In practical applications, a logical unit may be a physical unit, a part of a physical unit, or a combination of multiple physical units. In addition, to highlight the innovative part of the present invention, units less closely related to solving the technical problem addressed by the present invention are not introduced in this embodiment, but this does not mean that no other units exist in this embodiment.
Those skilled in the art will appreciate that all or part of the steps of the methods in the above embodiments may be completed by a program instructing the relevant hardware. The program is stored in a storage medium and includes instructions for causing a device (such as a single-chip microcomputer or a chip) or a processor to execute all or part of the steps of the methods described in each embodiment of the application. The aforementioned storage medium includes various media that can store program code, such as a USB flash drive, a portable hard drive, a read-only memory (ROM), a random access memory (RAM), a magnetic disk, or an optical disc.
Those skilled in the art will understand that the above embodiments are specific embodiments for realizing the present invention, and that in practical applications various changes in form and detail may be made to them without departing from the spirit and scope of the present invention.
Claims (10)
1. A face verification method, characterized by comprising:
selecting N face poses and presetting a corresponding bilateral deep convolutional neural network model for each face pose, wherein N is a natural number greater than or equal to 2;
determining the face pose of each of two images to be verified;
choosing the corresponding bilateral deep convolutional neural network model according to the determined face poses;
discriminating the two images to be verified using the chosen bilateral deep convolutional neural network model to obtain discrimination results; and
confirming, from the discrimination results, whether the faces in the two images to be verified are the same.
2. The face verification method according to claim 1, characterized in that the face poses at least include frontal face, left profile face, right profile face, head tilted up, or head tilted down.
3. The face verification method according to claim 1, characterized in that the preset bilateral deep convolutional neural network model is obtained as follows:
presetting a sample library including M face images, wherein M is a natural number greater than 2;
determining the face poses of the M face images;
pairing the face images in the preset sample library two by two; selecting the pairs in which both images are of a first face pose as same-pose groups, and selecting the pairs in which one face image is of the first face pose and the other is not as different-pose groups, wherein the first face pose is one of the N face poses; and
training, with a preset bilateral deep convolutional neural network architecture, on the face image pairs in the same-pose groups and the different-pose groups respectively, to obtain the preset bilateral deep convolutional neural network model corresponding to the first face pose.
4. The face verification method according to claim 3, characterized in that, in determining the face poses of the M face images, the face images are face images that have undergone correction.
5. The face verification method according to claim 1, characterized in that determining the face pose of each of the two images to be verified specifically comprises:
locating the key points of the face in each of the two images to be verified; and
determining the face poses of the two images to be verified using the locating results.
6. The face verification method according to claim 5, characterized in that, in determining the face poses of the two images to be verified using the locating results, the images to be verified are images that have undergone correction.
7. The face verification method according to claim 6, characterized in that the images to be verified are corrected as follows:
rotating the image to be verified using the located key points, and/or deforming the image to be verified using the key points.
8. The face verification method according to claim 1, characterized in that, in choosing the corresponding bilateral deep convolutional neural network model according to the determined face poses, if the face poses determined for the two images to be verified are different, the bilateral deep convolutional neural network models corresponding to the face poses determined for the two images to be verified are chosen respectively; and
in discriminating the two images to be verified using the chosen bilateral deep convolutional neural network models, each chosen bilateral deep convolutional neural network model discriminates the two images to be verified respectively, obtaining four discrimination results.
9. The face verification method according to claim 8, characterized in that, in confirming from the discrimination results whether the faces in the two images to be verified are the same, if the discrimination results differ, the four discrimination results are fused using evidence theory to determine whether the faces in the two images to be verified are the same.
10. A face verification device, characterized by comprising:
a presetting module, configured to select N face poses and preset a corresponding bilateral deep convolutional neural network model for each face pose, wherein N is a natural number greater than or equal to 2;
a determining module, configured to determine the face pose of each of two images to be verified;
a choosing module, configured to choose the corresponding bilateral deep convolutional neural network model according to the determined face poses;
a discrimination module, configured to discriminate the two images to be verified using the chosen bilateral deep convolutional neural network model to obtain discrimination results; and
a verification module, configured to confirm, from the discrimination results, whether the faces in the two images to be verified are the same.
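The evidence-theory fusion in claim 9 is commonly read as Dempster–Shafer combination over the hypotheses {same, different}. The sketch below folds four model scores together with Dempster's rule; the way scores are turned into mass functions, the 10% ignorance mass, and all names are assumptions of this illustration, not specified by the patent.

```python
from functools import reduce

def dempster_combine(m1, m2):
    """Dempster's rule for masses over {"same", "diff"} plus ignorance "theta"."""
    out = {"same": 0.0, "diff": 0.0, "theta": 0.0}
    conflict = 0.0
    for a, pa in m1.items():
        for b, pb in m2.items():
            w = pa * pb
            if a == b:
                out[a] += w            # identical focal elements: intersection is itself
            elif "theta" in (a, b):
                out[a if b == "theta" else b] += w  # theta intersected with X is X
            else:
                conflict += w          # "same" vs. "diff": empty intersection
    # renormalize by the non-conflicting mass (Dempster's rule)
    return {k: v / (1.0 - conflict) for k, v in out.items()}

def fuse_results(scores, ignorance=0.1):
    """Fuse per-model match scores in [0, 1] into one same/different decision."""
    def to_mass(s):
        return {"same": s * (1 - ignorance),
                "diff": (1 - s) * (1 - ignorance),
                "theta": ignorance}
    fused = reduce(dempster_combine, (to_mass(s) for s in scores))
    return fused["same"] > fused["diff"]
```

Keeping a small ignorance mass on theta both models the networks' uncertainty and guarantees the conflict term never reaches 1, so the normalization is always defined.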
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201610744512.8A CN106355066A (en) | 2016-08-28 | 2016-08-28 | Face authentication method and face authentication device |
Publications (1)
Publication Number | Publication Date |
---|---|
CN106355066A true CN106355066A (en) | 2017-01-25 |
Family
ID=57856134
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201610744512.8A Pending CN106355066A (en) | 2016-08-28 | 2016-08-28 | Face authentication method and face authentication device |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN106355066A (en) |
Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
WO2012128319A1 (en) * | 2011-03-24 | 2012-09-27 | 株式会社ニコン | Electronic device, operator estimation method and program |
CN102831413A (en) * | 2012-09-11 | 2012-12-19 | 上海中原电子技术工程有限公司 | Face identification method and face identification system based on fusion of multiple classifiers |
CN103605972A (en) * | 2013-12-10 | 2014-02-26 | 康江科技(北京)有限责任公司 | Non-restricted environment face verification method based on block depth neural network |
CN104463237A (en) * | 2014-12-18 | 2015-03-25 | 中科创达软件股份有限公司 | Human face verification method and device based on multi-posture recognition |
CN105117692A (en) * | 2015-08-05 | 2015-12-02 | 福州瑞芯微电子股份有限公司 | Real-time face identification method and system based on deep learning |
2016
- 2016-08-28: CN application CN201610744512.8A filed (published as CN106355066A); status: active, pending
Cited By (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN108229276A (en) * | 2017-03-31 | 2018-06-29 | 北京市商汤科技开发有限公司 | Neural metwork training and image processing method, device and electronic equipment |
CN108229276B (en) * | 2017-03-31 | 2020-08-11 | 北京市商汤科技开发有限公司 | Neural network training and image processing method and device and electronic equipment |
CN109784243A (en) * | 2018-12-29 | 2019-05-21 | 网易(杭州)网络有限公司 | Identity determines method and device, neural network training method and device, medium |
CN110163169A (en) * | 2019-05-27 | 2019-08-23 | 北京达佳互联信息技术有限公司 | Face identification method, device, electronic equipment and storage medium |
CN111401161A (en) * | 2020-03-04 | 2020-07-10 | 青岛海信网络科技股份有限公司 | Intelligent building management and control system for realizing behavior recognition based on intelligent video analysis algorithm |
Legal Events
Date | Code | Title | Description
---|---|---|---
| C06 | Publication | |
| PB01 | Publication | |
| SE01 | Entry into force of request for substantive examination | |
| WD01 | Invention patent application deemed withdrawn after publication | Application publication date: 20170125 |