CN106169075A - Identity verification method and device - Google Patents

Identity verification method and device

Info

Publication number
CN106169075A
CN106169075A (application CN201610543529.7A)
Authority
CN
China
Prior art keywords
facial
image
elements
loss function
state
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201610543529.7A
Other languages
Chinese (zh)
Inventor
张涛
张旭华
万韶华
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Xiaomi Mobile Software Co Ltd
Original Assignee
Beijing Xiaomi Mobile Software Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Xiaomi Mobile Software Co Ltd
Priority to CN201610543529.7A
Publication of CN106169075A
Legal status: Pending

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 - Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10 - Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16 - Human faces, e.g. facial parts, sketches or expressions
    • G06V40/168 - Feature extraction; Face representation
    • G06V40/169 - Holistic features and representations, i.e. based on the facial image taken as a whole
    • G06V40/171 - Local features and components; Facial parts; Occluding parts, e.g. glasses; Geometrical relationships
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F21/00 - Security arrangements for protecting computers, components thereof, programs or data against unauthorised activity
    • G06F21/30 - Authentication, i.e. establishing the identity or authorisation of security principals
    • G06F21/31 - User authentication
    • G06F21/32 - User authentication using biometric data, e.g. fingerprints, iris scans or voiceprints

Landscapes

  • Engineering & Computer Science (AREA)
  • Oral & Maxillofacial Surgery (AREA)
  • Health & Medical Sciences (AREA)
  • Theoretical Computer Science (AREA)
  • General Health & Medical Sciences (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Human Computer Interaction (AREA)
  • Multimedia (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Computer Security & Cryptography (AREA)
  • Computer Hardware Design (AREA)
  • Software Systems (AREA)
  • General Engineering & Computer Science (AREA)
  • Image Analysis (AREA)
  • Collating Specific Patterns (AREA)

Abstract

The present disclosure provides an identity verification method and device, belonging to the technical field of face recognition. The method includes: acquiring at least two face images collected in real time by a camera; performing image processing on each collected face image to obtain a plurality of facial elements and facial features in each face image; determining, according to the plurality of facial elements in each face image, whether a specified image exists among the at least two face images; and passing verification when at least a preset number of the specified images exist among the at least two face images. By processing the plurality of face images collected in real time with a verification model containing a plurality of loss functions, the disclosure can output the position information of facial feature points and extract a plurality of facial elements at the same time, which avoids the heavy computation caused by using different classifiers to extract different facial elements and also guarantees the accuracy of the extracted facial elements and facial features.

Description

Identity verification method and device
Technical field
The present disclosure relates to the technical field of face recognition, and in particular to an identity verification method and device.
Background
Face recognition is the computer technology of identifying a person by comparing and analyzing the visual features of his or her face. In the broad sense, face recognition covers the series of related techniques needed to build a face recognition system, including face image acquisition, face localization, preprocessing, identity verification and identity lookup; in the narrow sense, it refers specifically to the technique or system of verifying or looking up an identity from a face. Concretely, working from the facial features of a person, face recognition technology first judges whether a face exists in an input image or video stream; if so, it further detects the position and size of each face and the positions of the major facial organs, extracts from this information the identity features of each face, and compares them with known faces to identify the person each face corresponds to.
As face recognition technology matures, its application scenarios grow, for example access control systems and online identity verification. Applied to access control systems, it prevents unauthorized personnel from entering and leaving at will; applied to online identity verification, it ensures that only legitimate users can perform the corresponding online operations. Verifying identity through face recognition offers high security, thereby protecting user information and property.
Summary of the invention
To overcome the problems in the related art, the present disclosure provides an identity verification method and device.
According to a first aspect of embodiments of the present disclosure, there is provided an identity verification method, including:
acquiring at least two face images collected in real time by a camera;
performing image processing on each collected face image to obtain a plurality of facial elements and facial features in each face image;
determining, according to the plurality of facial elements in each face image, whether a specified image exists among the at least two face images, the specified image being a face image in which a specified facial element among the plurality of facial elements meets a preset standard; and
passing verification when the specified image exists among the at least two face images.
By processing the at least two face images collected in real time with a verification model containing a plurality of loss functions, the method can output the position information of facial feature points and extract a plurality of facial elements at the same time, avoiding the heavy computation caused by using different classifiers to extract different facial elements. Moreover, since the model to be trained that corresponds to the verification model contains a plurality of loss functions, the trained verification model achieves higher precision and accuracy, which in turn guarantees the accuracy of the extracted facial elements and facial features.
In a first possible implementation of the first aspect of the disclosure, performing image processing on each collected face image to obtain the plurality of facial elements and facial feature information in each face image includes:
inputting each face image into a verification model to obtain the plurality of facial elements and facial features of each face image, the verification model being used for extracting the facial elements in a face image and analyzing facial features, and being composed of a convolutional neural network and a plurality of loss functions.
By processing the at least two collected face images with a verification model containing a plurality of loss functions, the position information of facial feature points can be output and a plurality of facial elements can be extracted at the same time, avoiding the heavy computation caused by using different classifiers to extract different facial elements.
In a second possible implementation of the first aspect of the disclosure, after detecting, according to the plurality of facial elements in each face image, whether a specified image exists among the at least two face images, the method further includes:
obtaining the number of specified images among the at least two face images;
detecting whether the number of images exceeds a preset number;
passing verification when the number of images exceeds the preset number; and
failing verification when the number of images does not exceed the preset number.
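The counting rule in this implementation can be sketched in a few lines of Python (the function and parameter names below are illustrative, not taken from the patent):

```python
def verify_liveness(frame_elements, is_specified, preset_count):
    """Count frames whose facial elements meet the target criterion.

    frame_elements: list of per-frame facial-element dicts.
    is_specified:   predicate deciding whether a frame is a "specified image".
    preset_count:   threshold; verification passes only when the number of
                    specified images exceeds it.
    """
    num_specified = sum(1 for elems in frame_elements if is_specified(elems))
    return num_specified > preset_count

# Example: pass when more than 2 of the captured frames show open eyes.
frames = [
    {"eye": "open"}, {"eye": "open"}, {"eye": "closed"}, {"eye": "open"},
]
print(verify_liveness(frames, lambda e: e["eye"] == "open", 2))  # True
```

Passing the predicate in keeps the rule generic, so the same counting logic serves the eyes-open, mouth-open and head-turn checks described later.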
In a third possible implementation of the first aspect of the disclosure, the plurality of facial elements at least include eye state information, mouth state information, head state information and nose state information.
Obtaining a plurality of facial elements avoids the inaccurate or imprecise results caused by relying on a single facial element, and also broadens the scenarios to which the identity verification method provided by the disclosure applies.
In a fourth possible implementation of the first aspect of the disclosure, the plurality of loss functions at least include a first loss function for detecting the eye state, a second loss function for detecting the mouth state, and a third loss function for detecting the head state.
Using different loss functions to detect the states of different facial parts avoids the heavy computation of classifying with different classifiers to obtain those states, and also allows the state information of multiple facial parts, that is, multiple facial elements, to be obtained at the same time.
In a fifth possible implementation of the first aspect of the disclosure, the first loss function and the second loss function are two-class classification functions, and the third loss function is a three-class classification function.
Obtaining different classification results through different loss functions makes it possible to detect the different parts of the face in a targeted manner.
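The patent does not give the concrete form of the three loss functions; assuming the common choice of softmax cross-entropy for classification heads, the combination of two two-class losses and one three-class loss might look like the following sketch (all names are illustrative):

```python
import math

def softmax_cross_entropy(logits, label):
    """Cross-entropy loss for one sample (softmax over raw scores)."""
    m = max(logits)
    exps = [math.exp(z - m) for z in logits]
    return -math.log(exps[label] / sum(exps))

def combined_loss(eye_logits, mouth_logits, head_logits, labels):
    """Sum of the three per-attribute losses.

    eye_logits, mouth_logits: 2 scores each (two-class, e.g. closed/open).
    head_logits: 3 scores (e.g. facing forward / turned left / turned right).
    labels: dict with the ground-truth class index for each head.
    """
    return (softmax_cross_entropy(eye_logits, labels["eye"])        # loss 1: 2-class
            + softmax_cross_entropy(mouth_logits, labels["mouth"])  # loss 2: 2-class
            + softmax_cross_entropy(head_logits, labels["head"]))   # loss 3: 3-class
```

During training, minimizing this summed loss drives the shared network to predict all three facial-part states at once, which is the effect the paragraph above describes.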
In a sixth possible implementation of the first aspect of the disclosure, passing verification when at least a preset number of the specified images exist among the at least two face images includes:
passing verification when at least a preset number of face images in an eyes-open state exist among the at least two face images; or
passing verification when at least a preset number of face images in a mouth-open state exist among the at least two face images; or
passing verification when at least a preset number of face images with the head turned left or right exist among the at least two face images.
Liveness verification is realized by detecting whether the specified image exists among the at least two face images: when it does, the face can quickly be determined to be a live face, verification passes, and the terminal performs the corresponding operation or displays the corresponding operation interface according to the verification result.
In a seventh possible implementation of the first aspect of the disclosure, the plurality of facial elements and the facial features are represented by a 197-dimensional vector, which includes the two-dimensional coordinates of 95 facial feature points, two-dimensional eye state information, two-dimensional mouth state information and three-dimensional head state information.
Outputting the plurality of facial elements and the facial features as a single multi-dimensional vector makes it possible to obtain them simultaneously, keeps the output cleanly defined, and avoids confusion among multiple results.
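As a minimal sketch, assuming the components are laid out in the order listed above (190 landmark coordinates, then the eye, mouth and head scores; the ordering is an assumption, since the patent only lists the components), the 197-dimensional output could be decoded as follows:

```python
def decode_output(vec):
    """Split the 197-dimensional model output into its components.

    Assumed layout: 95 landmark (x, y) pairs (190 values), then 2 eye-state
    scores, 2 mouth-state scores and 3 head-state scores (190+2+2+3 = 197).
    """
    assert len(vec) == 197
    landmarks = [(vec[2 * i], vec[2 * i + 1]) for i in range(95)]
    eye_state = vec[190:192]    # e.g. [closed, open] scores
    mouth_state = vec[192:194]  # e.g. [closed, open] scores
    head_state = vec[194:197]   # e.g. [forward, left, right] scores
    return landmarks, eye_state, mouth_state, head_state

out = decode_output(list(range(197)))
# 95 landmark pairs; the last three values are the head-state scores
print(len(out[0]), out[3])  # 95 [194, 195, 196]
```

The dimension count checks out against the description: 95 points times 2 coordinates plus 2 + 2 + 3 state dimensions gives exactly 197.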
According to a second aspect of embodiments of the present disclosure, there is provided an identity verification device, including:
an image acquisition module, configured to acquire at least two face images collected in real time by a camera;
an image processing module, configured to perform image processing on each collected face image to obtain a plurality of facial elements and facial features in each face image;
a determining module, configured to determine, according to the plurality of facial elements in each face image, whether a specified image exists among the at least two face images, the specified image being a face image in which a specified facial element among the plurality of facial elements meets a preset standard; and
a verification module, configured to pass verification when the specified image exists among the at least two face images.
By processing the at least two face images collected in real time with a verification model containing a plurality of loss functions, the device can output the position information of facial feature points and extract a plurality of facial elements at the same time, avoiding the heavy computation caused by using different classifiers to extract different facial elements. Moreover, since the model to be trained that corresponds to the verification model contains a plurality of loss functions, the trained verification model achieves higher precision and accuracy, which in turn guarantees the accuracy of the extracted facial elements and facial features.
In a first possible implementation of the second aspect of the disclosure, the image processing module is configured to:
input each face image into a verification model to obtain the plurality of facial elements and facial features of each face image, the verification model being used for extracting the facial elements in a face image and analyzing facial features, and being composed of a convolutional neural network and a plurality of loss functions.
By processing the at least two collected face images with a verification model containing a plurality of loss functions, the position information of facial feature points can be output and a plurality of facial elements can be extracted at the same time, avoiding the heavy computation caused by using different classifiers to extract different facial elements.
In a second possible implementation of the second aspect of the disclosure, the device further includes:
an image quantity acquisition module, configured to obtain the number of specified images among the at least two face images;
the detection module is further configured to detect whether the number of images exceeds a preset number; and
the verification module is further configured to pass verification when the number of images exceeds the preset number, and to fail verification when the number of images does not exceed the preset number.
In a third possible implementation of the second aspect of the disclosure, the plurality of facial elements at least include eye state information, mouth state information, head state information and nose state information.
Obtaining a plurality of facial elements avoids the inaccurate or imprecise results caused by relying on a single facial element, and also broadens the scenarios to which the identity verification method provided by the disclosure applies.
In a fourth possible implementation of the second aspect of the disclosure, the plurality of loss functions at least include a first loss function for detecting the eye state, a second loss function for detecting the mouth state, and a third loss function for detecting the head state.
Using different loss functions to detect the states of different facial parts avoids the heavy computation of classifying with different classifiers to obtain those states, and also allows the state information of multiple facial parts, that is, multiple facial elements, to be obtained at the same time.
In a fifth possible implementation of the second aspect of the disclosure, the first loss function and the second loss function are two-class classification functions, and the third loss function is a three-class classification function.
Obtaining different classification results through different loss functions makes it possible to detect the different parts of the face in a targeted manner.
In a sixth possible implementation of the second aspect of the disclosure, the verification module is configured to:
pass verification when at least a preset number of face images in an eyes-open state exist among the at least two face images; or
pass verification when at least a preset number of face images in a mouth-open state exist among the at least two face images; or
pass verification when at least a preset number of face images with the head turned left or right exist among the at least two face images.
Liveness verification is realized by detecting whether at least a preset number of the specified images exist among the at least two face images: when they do, the face can quickly be determined to be a live face, verification passes, and the terminal performs the corresponding operation or displays the corresponding operation interface according to the verification result.
In a seventh possible implementation of the second aspect of the disclosure, the plurality of facial elements and the facial features are represented by a 197-dimensional vector, which includes the two-dimensional coordinates of 95 facial feature points, two-dimensional eye state information, two-dimensional mouth state information and three-dimensional head state information.
Outputting the plurality of facial elements and the facial features as a single multi-dimensional vector makes it possible to obtain them simultaneously, keeps the output cleanly defined, and avoids confusion among multiple results.
According to a third aspect, there is also provided an identity verification device, including:
a processor; and
a memory for storing instructions executable by the processor;
wherein the processor is configured to:
acquire at least two face images collected in real time by a camera;
perform image processing on each collected face image to obtain a plurality of facial elements and facial features in each face image;
determine, according to the plurality of facial elements in each face image, whether a specified image exists among the at least two face images, the specified image being a face image in which a specified facial element among the plurality of facial elements meets a preset standard; and
pass verification when the specified image exists among the at least two face images.
The technical solutions provided by embodiments of the present disclosure have the following beneficial effects:
By processing the at least two face images collected in real time with a verification model containing a plurality of loss functions, the disclosure can output the position information of facial feature points and extract a plurality of facial elements at the same time, avoiding the heavy computation caused by using different classifiers to extract different facial elements. Moreover, since the model to be trained that corresponds to the verification model contains a plurality of loss functions, the trained verification model achieves higher precision and accuracy, which in turn guarantees the accuracy of the extracted facial elements and facial features.
It should be understood that the above general description and the following detailed description are merely exemplary and explanatory, and do not limit the present disclosure.
Brief description of the drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments consistent with the disclosure and, together with the description, serve to explain the principles of the disclosure.
Fig. 1 is a flowchart of an identity verification method according to an exemplary embodiment;
Fig. 2A is a flowchart of an identity verification method according to an exemplary embodiment;
Fig. 2B is a design diagram of a CNN (Convolutional Neural Network) according to an exemplary embodiment;
Fig. 2C is a schematic diagram of a CNN model to be trained according to an exemplary embodiment;
Fig. 3 is a block diagram of an identity verification device according to an exemplary embodiment;
Fig. 4 is a block diagram of an identity verification device 400 according to an exemplary embodiment.
Detailed description
To make the objectives, technical solutions and advantages of the present disclosure clearer, embodiments of the disclosure are described in further detail below with reference to the accompanying drawings.
Exemplary embodiments are described in detail here, with examples shown in the accompanying drawings. In the following description, unless otherwise indicated, the same numerals in different drawings denote the same or similar elements. The implementations described in the following exemplary embodiments do not represent all implementations consistent with the disclosure; rather, they are merely examples of apparatuses and methods consistent with some aspects of the disclosure as detailed in the appended claims.
Fig. 1 is a flowchart of an identity verification method according to an exemplary embodiment. As shown in Fig. 1, the identity verification method is used in a terminal and includes the following steps.
In step 101, at least two face images collected in real time by a camera are acquired.
In step 102, image processing is performed on each collected face image to obtain a plurality of facial elements and facial features in each face image.
In step 103, whether a specified image exists among the at least two face images is detected according to the plurality of facial elements in each face image, the specified image being a face image in which a specified facial element among the plurality of facial elements meets a preset standard.
In step 104, verification passes when at least a preset number of the specified images exist among the at least two face images.
In the method provided by the embodiments of the disclosure, by processing the at least two face images collected in real time with a verification model containing a plurality of loss functions, the position information of facial feature points can be output and a plurality of facial elements can be extracted at the same time, avoiding the heavy computation caused by using different classifiers to extract different facial elements. Moreover, since the model to be trained that corresponds to the verification model contains a plurality of loss functions, the trained verification model achieves higher precision and accuracy, which in turn guarantees the accuracy of the extracted facial elements and facial features.
In a first possible implementation of the disclosure, performing image processing on each collected face image to obtain the plurality of facial elements and facial feature information in each face image includes:
inputting each face image into a verification model to obtain the plurality of facial elements and facial features of each face image, the verification model being used for extracting the facial elements in a face image and analyzing facial features, and being composed of a convolutional neural network and a plurality of loss functions.
In a second possible implementation of the disclosure, after detecting, according to the plurality of facial elements in each face image, whether a specified image exists among the at least two face images, the method further includes:
obtaining the number of specified images among the at least two face images;
detecting whether the number of images exceeds a preset number;
passing verification when the number of images exceeds the preset number; and
failing verification when the number of images does not exceed the preset number.
In a third possible implementation of the disclosure, the plurality of facial elements at least include eye state information, mouth state information, head state information and nose state information.
In a fourth possible implementation of the disclosure, the plurality of loss functions at least include a first loss function for detecting the eye state, a second loss function for detecting the mouth state, and a third loss function for detecting the head state.
In a fifth possible implementation of the disclosure, the first loss function and the second loss function are two-class classification functions, and the third loss function is a three-class classification function.
In a sixth possible implementation of the disclosure, passing verification when at least a preset number of the specified images exist among the at least two face images includes:
passing verification when at least a preset number of face images in an eyes-open state exist among the at least two face images; or
passing verification when at least a preset number of face images in a mouth-open state exist among the at least two face images; or
passing verification when at least a preset number of face images with the head turned left or right exist among the at least two face images.
In a seventh possible implementation of the disclosure, the plurality of facial elements and the facial features are represented by a 197-dimensional vector, which includes the two-dimensional coordinates of 95 facial feature points, two-dimensional eye state information, two-dimensional mouth state information and three-dimensional head state information.
All of the optional technical solutions above can be combined arbitrarily to form alternative embodiments of the present disclosure, which are not described one by one here.
Fig. 2A is a flowchart of an identity verification method according to an exemplary embodiment. This embodiment may be executed by a terminal. Referring to Fig. 2A, the embodiment specifically includes the following steps.
In step 201, at least two face images collected in real time by a camera are acquired.
When the terminal detects that the user is performing or is about to perform a sensitive operation, it automatically turns on the camera and collects face images in real time through the camera, so as to verify the user's identity. The sensitive operation may include online payment, online transfer, modification of user information and the like. For example, when the terminal detects the user's trigger operation on a payment option, it automatically turns on the camera and reminds the user to aim the camera at his or her facial region, so that the camera can capture the entire face.
It should be noted that the identity verification method provided by the embodiments of the disclosure performs liveness verification on the face. Therefore, after the camera is turned on, the terminal may also remind the user to make different expressions, so that the terminal can capture images while the user makes them and then verify whether the captured face is live. The terminal may remind the user by voice, for example by playing "please blink", "please open your mouth" or "please shake your head"; by displaying text on the terminal screen; or in other ways. The embodiments of the disclosure do not limit the reminding manner.
Collecting at least two face images in real time through the camera makes it possible to perform image processing on each of them separately, thereby achieving liveness verification.
In step 202., every the facial image collected is carried out image procossing, obtains in this every facial image Multiple facial elements and face characteristic.
The plurality of facial elements refers to the face respectively state in which information of face, such as, eye be in closed-eye state, Mouth is in the state of opening one's mouth etc.;This face characteristic refers to the characteristic point position information of face.In the disclosed embodiments, the plurality of Facial elements at least includes eye status information, mouth status information, head state information, nose status information, it is also possible to bag Include other information such as decree stricture of vagina information;This face characteristic can be the positional information of 95 characteristic points, and this positional information is with coordinate Form represent.
A specific method of performing image processing on each of the at least two facial images may be: inputting the at least two facial images into a verification model to obtain the multiple facial elements and facial features of each of the at least two facial images. The verification model extracts the facial elements in a facial image and analyzes the facial features, and is composed of a convolutional neural network (CNN) and multiple loss functions. Specifically, for each of the at least two facial images, a face detection algorithm first determines the face location in the image; the CNN then determines the facial feature point positions in the image and obtains their position information; and the facial features output by the deep learning algorithm are input into the multiple loss functions respectively, so as to obtain the multiple facial elements of the image.
The face detection algorithm may be the Adaboost algorithm; that is, the number of faces in each image is obtained by an Adaboost classifier. Specifically, the Adaboost classifier divides an image into multiple regions; for each region obtained by the division, the features of the region are extracted according to a preset feature extraction algorithm and input into the classifier; the classifier computes on the features of the region to obtain its output, namely the classification result of the region, which is either a face region or a non-face region.
The Adaboost classifier is composed of multiple weak classifiers trained on the same training sample set. For example, weak features of multiple sample images, such as rectangular (Haar-like) features, may be obtained; the weak features of each sample image serve as one training sample, and the multiple training samples form a training sample set. Several training samples are selected from the set to form a first training set, from which a first weak classifier is trained. Several new training samples are then selected, and together with the samples misclassified by the first weak classifier they form a second training set, from which a second weak classifier is trained. Several new training samples are then selected again, and together with the samples misclassified by both the first and the second weak classifiers they form a third training set, from which a third weak classifier is trained; and so on, until the error rate falls below a preset minimum error rate, at which point the trained weak classifiers are combined into one strong classifier, which can be used to classify images.
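The cascade-training procedure just described can be sketched in miniature. The snippet below is an illustrative toy, not the patent's implementation: it trains decision stumps on scalar stand-ins for the rectangular features, re-weights misclassified samples each round (the AdaBoost mechanism behind selecting "new" plus "misclassified" samples for the next training set), and sums the stumps into one strong classifier.

```python
import math

def train_stump(xs, ys, ws):
    """Pick the threshold/polarity pair with the lowest weighted error."""
    best = None
    for thr in sorted(set(xs)):
        for pol in (1, -1):
            preds = [pol if x >= thr else -pol for x in xs]
            err = sum(w for p, y, w in zip(preds, ys, ws) if p != y)
            if best is None or err < best[0]:
                best = (err, thr, pol)
    return best

def adaboost(xs, ys, rounds):
    n = len(xs)
    ws = [1.0 / n] * n
    stumps = []
    for _ in range(rounds):
        err, thr, pol = train_stump(xs, ys, ws)
        err = max(err, 1e-10)  # avoid log(0) on a perfect stump
        alpha = 0.5 * math.log((1 - err) / err)
        stumps.append((alpha, thr, pol))
        # Re-weight: misclassified samples get heavier, as in the text.
        ws = [w * math.exp(-alpha * y * (pol if x >= thr else -pol))
              for x, y, w in zip(xs, ys, ws)]
        z = sum(ws)
        ws = [w / z for w in ws]
    return stumps

def strong_classify(stumps, x):
    score = sum(a * (pol if x >= thr else -pol) for a, thr, pol in stumps)
    return 1 if score >= 0 else -1

# Toy data: feature value above 0.5 stands for "face region" (+1).
xs = [0.1, 0.2, 0.4, 0.6, 0.8, 0.9]
ys = [-1, -1, -1, 1, 1, 1]
model = adaboost(xs, ys, rounds=3)
print([strong_classify(model, x) for x in xs])
```

In a real Viola-Jones-style detector the per-round features are Haar-like rectangle sums over the integral image rather than scalars, but the weighting and combination logic is the same.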
As a kind of artificial neural network, the CNN has become a research hotspot in the field of image recognition. A CNN is a multilayer perceptron specially designed for recognizing two-dimensional shapes; its network structure is highly invariant to translation, tilt, and other forms of deformation. The weight-sharing structure of a CNN makes it closer to a biological neural network: by learning the mapping relationship between a large number of inputs and outputs, it can produce results without an exact mathematical expression relating input and output, which greatly reduces the complexity of the network model and the number of weights. Especially when the network input is a multi-dimensional image, the advantages of the CNN become more apparent, avoiding the complicated feature extraction and data reconstruction processes of traditional image recognition algorithms.
Fig. 2B is a CNN network design diagram according to an exemplary embodiment. As shown in Fig. 2B, the CNN network is composed of 1 input layer and 7 training layers: a C1 layer, an S2 layer, a C3 layer, an S4 layer, a C5 layer, an F6 layer, and an output layer. C1, C3, and C5 are convolutional layers, which enhance the features of the original image through convolution operations and reduce noise; S2 and S4 are down-sampling layers, which exploit the local correlation of the image to sub-sample it, reducing the amount of data to process while retaining effective features. The image size of the input layer is 32*32. Each training layer has multiple feature images; each feature image is one kind of feature extracted from the input by one convolution filter, and each feature image has multiple neurons. In addition, each training layer contains multiple parameters to be trained.
The C1 layer is the first-level convolutional layer and is composed of 6 feature images of size 28*28. Each neuron in each feature image is connected to a 5*5 neighborhood of the input image. In the C1 layer, each filter has 5*5=25 filter parameters and 1 bias parameter, so the 6 filters have (5*5+1)*6=156 parameters to be trained in total. For these 156 trainable parameters, there are 156*(28*28)=122304 connections in total.
The S2 layer is the first-level down-sampling layer and is composed of 6 feature images of size 14*14. Each unit in each feature image is connected to a 2*2 neighborhood of the corresponding feature image in the C1 layer. In addition, the S2 layer has 12 trainable parameters and 5880 connections.
The C3 layer is the second-level convolutional layer and is composed of 16 feature images of size 10*10. Each 10*10 feature image is obtained by performing a convolution on the down-sampling layer S2 with a 5*5 convolution kernel. Each feature image in the C3 layer is connected to all or some of the feature images in the S2 layer; that is, each feature image in the C3 layer is a combination of feature images extracted in the S2 layer.
The S4 layer is the second-level down-sampling layer and is composed of 16 feature images of size 5*5. Each unit in each feature image is connected to a 2*2 neighborhood of the corresponding feature image in the C3 layer. In addition, the S4 layer has 32 trainable parameters and 2000 connections.
The C5 layer is the third-level convolutional layer and is composed of 120 feature images. Each unit in each feature image is connected to a 5*5 neighborhood of each of the 16 feature images in the S4 layer. Since the feature images of the S4 layer are also of size 5*5, the feature images of the C5 layer are of size 1*1; that is, the S4 and C5 layers are fully connected. In addition, the C5 layer has 48120 trainable connections.
The F6 layer is composed of 84 feature images and is fully connected to the C5 layer. In addition, the F6 layer has 10164 trainable parameters.
The output layer is composed of radial basis function (RBF) units. Each RBF unit computes the Euclidean distance between the input vector and an output vector: the larger the distance, the larger the output of the RBF unit. The output of an RBF unit can be regarded as a penalty term measuring how well the input vector matches a model of the class associated with that unit. In probabilistic terms, the output of an RBF unit can be regarded as the negative log-likelihood of a Gaussian distribution over the configuration space of the F6 layer. For any given input vector, the loss function should make the configuration of the F6 layer sufficiently close to the RBF output vector.
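As a quick consistency check on the layer sizes quoted above, the parameter and connection counts for C1, S2, S4, C5, and F6 can be reproduced with a few lines of arithmetic. This follows the standard LeNet-style accounting and is not code from the patent:

```python
def conv_params(n_filters, k, inputs_per_filter=1):
    # Each filter: k*k weights per connected input map, plus 1 bias.
    return n_filters * (k * k * inputs_per_filter + 1)

c1_params = conv_params(6, 5)               # 6 filters of 5x5 on the input
c1_connections = c1_params * 28 * 28        # shared weights reused per output pixel
s2_params = 6 * 2                           # one coefficient + one bias per map
s2_connections = 6 * 14 * 14 * (2 * 2 + 1)  # 2x2 inputs + bias per unit
s4_params = 16 * 2
s4_connections = 16 * 5 * 5 * (2 * 2 + 1)
c5_params = conv_params(120, 5, inputs_per_filter=16)  # fully connected to S4
f6_params = 84 * (120 + 1)                  # 120 inputs + bias per F6 unit

print(c1_params, c1_connections, s2_params, c5_params, f6_params)
```

Running the arithmetic reproduces the figures in the text: 156 and 122304 for C1, 12 and 5880 for S2, 32 and 2000 for S4, 48120 for C5, and 10164 for F6.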
In another embodiment of the present disclosure, the multiple loss functions include at least a first loss function for detecting the eye state, a second loss function for detecting the mouth state, and a third loss function for detecting the head state. Each of the first, second, and third loss functions may be a Softmax function, or may be another function with multi-class classification capability; this is not specifically limited in the embodiments of the present disclosure.
In the disclosed embodiments, the first and second loss functions are two-class classification functions, and the third loss function is a three-class classification function. Specifically, the output of the first loss function for detecting the eye state includes the probability of the eyes being open and the probability of the eyes being closed; the output of the second loss function for detecting the mouth state includes the probability of the mouth being open and the probability of the mouth being closed; and the output of the third loss function for detecting the head state includes the probability of the face facing the camera, the probability of the head being turned left, and the probability of the head being turned right.
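A minimal sketch of this multi-head output, assuming plain softmax heads over per-head logits; the logit values below are invented for demonstration and are not from the patent:

```python
import math

def softmax(logits):
    m = max(logits)  # subtract the max for numerical stability
    exps = [math.exp(v - m) for v in logits]
    s = sum(exps)
    return [e / s for e in exps]

# Suppose the shared CNN feature produced these per-head logits:
eye_probs = softmax([2.0, 0.5])        # [open, closed]  (two-class head)
mouth_probs = softmax([0.1, 1.4])      # [open, closed]  (two-class head)
head_probs = softmax([0.2, 0.1, 2.5])  # [facing camera, left, right] (three-class)

print(eye_probs, mouth_probs, head_probs)
```

Each head's probabilities sum to 1 independently, which is what lets one shared feature feed several classification "loss functions" at once.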
Of course, the first, second, and third loss functions may also output two, three, or more classification results; for example, the output of the first loss function for detecting the eye state may include the probability of the eyes being open, the probability of the eyes being half open, and the probability of the eyes being closed. In addition, the first, second, and third loss functions may also each be used to extract other facial elements; neither the feature extraction targets of the above three loss functions nor the detection results they obtain are specifically limited in the embodiments of the present disclosure.
It should be noted that the multiple loss functions also include a fourth loss function for regressing the coordinates of the 95 feature points. The fourth loss function minimizes the error between the finally obtained coordinates of the 95 feature points and the calibrated coordinates of the 95 feature points, and may be a Euclidean distance function.
Of course, the multiple loss functions may also include loss functions for detecting other facial elements, for example a fifth loss function for detecting the nose state and a sixth loss function for detecting the nasolabial folds. The fifth loss function may be a two-class classification function whose output includes the probability of the nose being flat and the probability of the nose being wrinkled. The sixth loss function may be a multi-class classification function with three or more classes whose output includes the probabilities of the nasolabial folds being of different depths: when the user laughs, the nasolabial folds are deeper, and when the user wears a normal, expressionless face, they are shallower. Neither the specific detection regions nor the output types of the multiple loss functions are limited in the embodiments of the present disclosure.
In yet another embodiment of the present disclosure, the multiple facial elements and facial features are represented by a 197-dimensional vector, which includes the two-dimensional coordinates of the 95 facial feature points, two-dimensional eye state information, two-dimensional mouth state information, and three-dimensional head state information. Specifically, dimensions 1 to 190 of the 197-dimensional vector represent the two-dimensional coordinates of the 95 facial feature points, dimensions 191 to 192 represent the two-dimensional eye state information, dimensions 193 to 194 represent the two-dimensional mouth state information, and dimensions 195 to 197 represent the three-dimensional head state information. Of course, the specific content of the 197-dimensional vector may be configured by developers, and is not specifically limited in the embodiments of the present disclosure.
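Under the layout just described, the 197-dimensional output can be split back into its parts as follows. Indices here are 0-based, whereas the text counts dimensions from 1; the sample vector is arbitrary filler, not real model output:

```python
def split_output(vec):
    """Slice the 197-dim vector into landmarks, eye, mouth, and head parts."""
    assert len(vec) == 197
    landmarks = [(vec[2 * i], vec[2 * i + 1]) for i in range(95)]  # dims 0..189
    eye = vec[190:192]    # two eye-state probabilities
    mouth = vec[192:194]  # two mouth-state probabilities
    head = vec[194:197]   # three head-state probabilities
    return landmarks, eye, mouth, head

# 190 dummy coordinates + example state probabilities:
vec = list(range(190)) + [0.7, 0.3] + [0.9, 0.1] + [0.2, 0.3, 0.5]
landmarks, eye, mouth, head = split_output(vec)
print(len(landmarks), eye, mouth, head)
```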
It should be noted that the vector dimensions corresponding to the obtained facial elements and facial features differ according to the classification results produced by each of the multiple loss functions.
In another embodiment of the present disclosure, the above process uses a verification model to perform image processing on a facial image so as to obtain its multiple facial elements and facial features. Before this process can be performed, an initial model composed of the CNN and the multiple loss functions needs to be trained to obtain the verification model. It should be noted that training the initial model is the process of determining the model parameters of the CNN in the initial model. A specific training method may be: first initialize the CNN model to be trained with original model parameters; take a large number of facial images with different facial elements as training samples and divide them into groups; for each group of training samples, input the group into the initial model and compare the obtained result with the calibrated result to obtain an error value; if the error value is greater than a preset threshold, adjust the model parameters of the CNN by back-propagation until the obtained error value is less than or equal to the preset threshold; if the error value is less than or equal to the preset threshold, input the next group of training samples and continue training until all training samples are finished, thereby obtaining the verification model. Each group of training samples may contain a preset number of facial images; the specific value of the preset number may be configured by developers as needed, and neither the value nor the configuration method of the preset number is specifically limited in the embodiments of the present disclosure.
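The group-wise, threshold-driven training loop described above can be sketched with a one-parameter placeholder model standing in for the CNN and plain gradient descent standing in for back-propagation. The data, threshold, and learning rate below are illustrative only:

```python
def train(groups, threshold=1e-3, lr=0.1, max_steps=10_000):
    w = 0.0  # single placeholder parameter in place of the CNN weights
    for xs, ys in groups:
        for _ in range(max_steps):
            preds = [w * x for x in xs]
            # Error between model output and calibrated result for this group.
            err = sum((p - y) ** 2 for p, y in zip(preds, ys)) / len(xs)
            if err <= threshold:
                break  # error within the preset threshold: move to next group
            # "Back-propagation" step: descend the mean-squared-error gradient.
            grad = sum(2 * (p - y) * x for p, x, y in zip(preds, xs, ys)) / len(xs)
            w -= lr * grad
    return w

groups = [([1.0, 2.0], [2.0, 4.0]), ([3.0], [6.0])]  # true relation: y = 2x
w = train(groups)
print(round(w, 2))
```

The learned parameter converges near 2.0; in the patent's setting the same control flow would drive gradient updates through all convolution kernels, bias matrices, weight matrices, and bias vectors.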
The CNN model to be trained generally includes at least two levels of convolutional layers and at least one level of fully connected layers. Each level of convolutional layer includes multiple convolution kernels and multiple bias matrices, and each level of fully connected layer includes multiple weight matrices and multiple bias vectors. Therefore, the obtained model parameters include the initial convolution kernels and initial bias matrices of the convolutional layers at all levels, and the initial weight matrices and initial bias vectors of the fully connected layers. The numbers of convolutional layers and fully connected layers included in the CNN model to be trained are not specifically limited in the embodiments of the present disclosure and may be set as needed in specific implementations. For example, Fig. 2C shows a schematic diagram of a CNN model to be trained; the model shown in Fig. 2C includes five levels of convolutional layers and two levels of fully connected layers.
Further, the numbers of convolution kernels and bias matrices included in each level of convolutional layer, and the numbers of weight matrices and bias vectors included in each level of fully connected layer, are not specifically limited in the embodiments of the present disclosure. Likewise, the embodiments of the present disclosure do not limit the dimensions of each convolution kernel, each bias matrix, each weight matrix, or each bias vector. In specific implementations, the numbers and dimensions of the convolution kernels and bias matrices in each convolutional layer, and of the weight matrices and bias vectors in each fully connected layer, may all take empirical values.
In view of the foregoing, when obtaining the original model parameters of the CNN model to be trained, a value may be randomly selected within a specified numerical range as the value of each element of the original model parameters. For example, for each element of each initial convolution kernel, initial weight matrix, initial bias matrix, and initial bias vector, a number may be taken at random from the interval [-r, r]. Here, r is the threshold for initializing the model parameters and may be an empirical value; for example, r may take the value 0.001.
In another embodiment of the present disclosure, the image size parameter of the first-layer input of the CNN in the verification model may be set to 64*64; when a training image or an image to be verified is to be input into the initial model or the verification model, the image size is adjusted to 64*64.
By setting the image size parameter of the first-layer input of the CNN in the verification model to 64*64, it is possible to avoid the situation in which, with this parameter set to 224*224, the verification model could verify only some images, or could guarantee verification accuracy only for some images; the scope of application of the verification model can thereby be improved.
By using the verification model containing multiple loss functions to process the at least two facial images obtained in real time in step 201, not only can the position information of the facial feature points be output, but multiple facial elements can also be extracted at the same time, avoiding the large amount of computation caused by using different classifiers to extract different facial elements. In addition, because the initial model corresponding to the verification model contains multiple loss functions, the trained verification model has higher precision and accuracy, which in turn ensures the accuracy of the extracted multiple facial elements and facial features.
In step 203, according to the multiple facial elements of each facial image, it is determined whether a specified image exists among the at least two facial images; the specified image is a facial image in which a specified facial element among the multiple facial elements meets a preset standard. When the specified image exists among the at least two facial images, step 204 is performed; when the specified image does not exist among the at least two facial images, step 205 is performed.
The specified image may be determined according to the reminder information given to the user while the camera collects facial images in real time in step 201. For example, when the reminder information prompts the user to blink, the specified image may be a facial image in a closed-eye state; that is, the specified facial element is the eye state information, and correspondingly the preset standard is that the eyes are closed. When the reminder information prompts the user to open the mouth, the specified image may be a facial image in an open-mouth state; that is, the specified facial element is the mouth state information, and correspondingly the preset standard is that the mouth is open. When the reminder information prompts the user to shake the head, the specified image may be a facial image in which the head is turned left or right; that is, the specified facial element is the head state information, and correspondingly the preset standard is that the head is turned left or right.
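The correspondence between reminder prompts and expected states described above amounts to a lookup table. The prompt strings and state labels below are illustrative stand-ins, not identifiers from the patent:

```python
# Each liveness prompt determines which facial element must reach which state.
PROMPT_EXPECTATION = {
    "please blink": ("eye", "closed"),
    "please open your mouth": ("mouth", "open"),
    "please shake your head": ("head", "turned"),  # left or right both count
}

element, state = PROMPT_EXPECTATION["please blink"]
print(element, state)
```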
If the terminal does not play or display reminder information, the specified image may be a facial image in which any facial element among the multiple facial elements is in its corresponding state. For example, when the obtained facial elements include eye state information, mouth state information, head state information, and nose state information, the specified image may be any image in which the eyes are closed, the mouth is open, the head is turned left or right, or the nose is wrinkled; that is, the specified facial element may be any one of the eye state information, mouth state information, head state information, or nose state information, and correspondingly the preset standard is the state corresponding to the specified facial element.
The preset standard may be determined according to the reminder information, or according to the extracted multiple facial elements; this is not specifically limited in the embodiments of the present disclosure.
It should be noted that, when the multiple facial elements and facial features are represented by the 197-dimensional vector, the two dimensions representing the eye state information in the vector sum to 1, and the state corresponding to the larger probability is the eye state in the facial image. For example, if in the extraction result of the facial elements of a facial image the two values corresponding to the eye state are (0.7, 0.3), where the position of 0.7 represents the probability of the eyes being closed and the position of 0.3 represents the probability of the eyes being open, then the eyes in the facial image are closed. The extraction results of the other facial elements are interpreted in the same way.
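The larger-probability rule described above can be written as a small helper; the label order (closed-eye probability first) follows the (0.7, 0.3) example in the text, and the helper generalizes to the other two-class elements:

```python
def eye_state(pair, labels=("closed", "open")):
    """Return the state whose probability is larger; the pair must sum to 1."""
    assert abs(sum(pair) - 1.0) < 1e-6
    return labels[pair.index(max(pair))]

print(eye_state((0.7, 0.3)))  # first slot = closed-eye probability, as in the text
```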
In step 204, when the specified image exists among the at least two facial images, the verification passes.
When the specified image exists among the at least two facial images, the face is a live face and the verification passes; the terminal continues to perform the corresponding operation or displays the corresponding operation interface.
Specifically, in the disclosed embodiments, the verification passing when the specified image exists among the at least two facial images includes: when a facial image in a closed-eye state exists among the at least two facial images, the verification passes; or, when a facial image in an open-mouth state exists among the at least two facial images, the verification passes; or, when a facial image in which the head is turned left or right exists among the at least two facial images, the verification passes.
In another embodiment of the present disclosure, when the specified image exists among the at least two facial images, the number of specified images among the at least two facial images is obtained, and it is detected whether this number of images is greater than a preset number. When the number of images is greater than the preset number, the verification passes; when the number of images is not greater than the preset number, the verification fails. The preset number may be set to any value less than the number of the at least two facial images; neither the configuration method nor the specific value of the preset number is limited in the embodiments of the present disclosure.
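A sketch of this count-based decision, with hypothetical state labels standing in for the per-frame extraction results:

```python
def verify(frame_states, required_state, preset_count):
    """Pass only if more than preset_count frames match the requested state."""
    matches = sum(1 for s in frame_states if s == required_state)
    return matches > preset_count

# e.g. the prompt asked the user to blink; five frames were captured:
frames = ["open", "closed", "closed", "open", "closed"]
print(verify(frames, "closed", preset_count=2))  # 3 matching frames > 2
```

Requiring more than one matching frame is what tolerates an occasional per-frame extraction error, as the following paragraph explains.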
By passing the current user's verification only when the number of specified images is greater than the preset number, verification failures caused by occasional errors in the extraction of facial elements can be avoided, which in turn improves the security of the verification method provided by the embodiments of the present disclosure.
In step 205, when the specified image does not exist among the at least two facial images, the verification fails.
When the specified image does not exist among the at least two facial images, the face is not a live face and the verification fails; the terminal terminates the corresponding operation and displays verification failure information, or re-executes step 201 and its subsequent steps, so as to avoid the situation in which errors in the image collection process cause the facial elements and facial features to be extracted inaccurately.
In another embodiment of the present disclosure, after the number of times that step 201 and its subsequent steps are repeatedly executed exceeds a preset number of times, the terminal or the current application is locked for a preset duration, so as to further protect user privacy and property safety. The preset number of times and the preset duration may be set by developers, or may be set by the user; this is not limited in the embodiments of the present disclosure, nor are the specific values of the preset number of times and the preset duration.
In the method provided by the embodiments of the present disclosure, by using a verification model containing multiple loss functions to process the at least two facial images collected in real time, not only can the position information of the facial feature points be output, but multiple facial elements can also be extracted at the same time, avoiding the large amount of computation caused by using different classifiers to extract different facial elements. In addition, because the model to be trained corresponding to the verification model contains multiple loss functions, the trained verification model has higher precision and accuracy, which ensures the accuracy of the extracted multiple facial elements and facial features. Further, by passing the current user's verification only when the number of specified images is greater than the preset number, verification failures caused by occasional errors in the extraction of facial elements can be avoided, which improves the security of the verification method.
Fig. 3 is a block diagram of an identity verification apparatus according to an exemplary embodiment. Referring to Fig. 3, the apparatus includes an image acquisition module 301, an image processing module 302, a determination module 303, and a verification module 304.
The image acquisition module 301 is configured to obtain at least two facial images collected in real time by the camera;

the image processing module 302 is configured to perform image processing on each collected facial image to obtain the multiple facial elements and facial features of each facial image;

the determination module 303 is configured to determine, according to the multiple facial elements of each facial image, whether a specified image exists among the at least two facial images, the specified image being a facial image in which a specified facial element among the multiple facial elements meets a preset standard;

the verification module 304 is configured to pass the verification when at least a preset number of the specified images exist among the at least two facial images.
In a first possible implementation provided by the present disclosure, the image processing module 302 is configured to:

input each facial image into a verification model to obtain the multiple facial elements and facial features of each facial image, the verification model being configured to extract the facial elements in a facial image and analyze the facial features, and being composed of a convolutional neural network and multiple loss functions.
In a second possible implementation provided by the present disclosure, the apparatus further includes:

an image quantity acquisition module, configured to obtain the number of specified images among the at least two facial images;

the determination module 303 is further configured to determine whether the number of images is greater than a preset number;

the verification module 304 is further configured to pass the verification when the number of images is greater than the preset number, and to fail the verification when the number of images is not greater than the preset number.
In a third possible implementation provided by the present disclosure, the multiple facial elements include at least eye state information, mouth state information, head state information, and nose state information.
In a fourth possible implementation provided by the present disclosure, the multiple loss functions include at least a first loss function for detecting the eye state, a second loss function for detecting the mouth state, and a third loss function for detecting the head state.
In a fifth possible implementation provided by the present disclosure, the first loss function and the second loss function are two-class classification functions, and the third loss function is a three-class classification function.
In a sixth possible implementation provided by the present disclosure, the verification module 304 is configured to:

pass the verification when at least a preset number of facial images in a closed-eye state exist among the at least two facial images; or,

pass the verification when at least a preset number of facial images in an open-mouth state exist among the at least two facial images; or,

pass the verification when at least a preset number of facial images in which the head is turned left or right exist among the at least two facial images.
In a seventh possible implementation provided by the present disclosure, the multiple facial elements and facial features are represented by a 197-dimensional vector, which includes the two-dimensional coordinates of 95 facial feature points, two-dimensional eye state information, two-dimensional mouth state information, and three-dimensional head state information.
With regard to the apparatus in the above embodiments, the specific manner in which each module performs its operations has been described in detail in the embodiments concerning the related method, and will not be elaborated here.
Fig. 4 is a block diagram of an identity verification apparatus 400 according to an exemplary embodiment. For example, the apparatus 400 may be a mobile phone, a computer, a digital broadcast terminal, a messaging device, a game console, a tablet device, a medical device, a fitness device, a personal digital assistant, or the like.
Referring to Fig. 4, the apparatus 400 may include one or more of the following components: a processing component 402, a memory 404, a power component 406, a multimedia component 408, an audio component 410, an input/output (I/O) interface 412, a sensor component 414, and a communication component 416.
The processing component 402 generally controls the overall operations of the apparatus 400, such as operations associated with display, telephone calls, data communication, camera operations, and recording operations. The processing component 402 may include one or more processors 420 to execute instructions so as to complete all or part of the steps of the above method. In addition, the processing component 402 may include one or more modules to facilitate interaction between the processing component 402 and other components; for example, the processing component 402 may include a multimedia module to facilitate interaction between the multimedia component 408 and the processing component 402.
The memory 404 is configured to store various types of data to support the operation of the apparatus 400. Examples of such data include instructions of any application or method operated on the apparatus 400, contact data, phonebook data, messages, pictures, video, and so on. The memory 404 may be implemented by any type of volatile or non-volatile storage device, or a combination thereof, such as static random access memory (SRAM), electrically erasable programmable read-only memory (EEPROM), erasable programmable read-only memory (EPROM), programmable read-only memory (PROM), read-only memory (ROM), magnetic memory, flash memory, magnetic disk, or optical disk.
The power component 406 provides power to the various components of the device 400. The power component 406 may include a power management system, one or more power supplies, and other components associated with generating, managing, and distributing power for the device 400.
The multimedia component 408 includes a screen providing an output interface between the device 400 and the user. In some embodiments, the screen may include a liquid crystal display (LCD) and a touch panel (TP). If the screen includes a touch panel, the screen may be implemented as a touch screen to receive input signals from the user. The touch panel includes one or more touch sensors to sense touches, swipes, and gestures on the touch panel. The touch sensors may not only sense the boundary of a touch or swipe action, but also detect the duration and pressure associated with the touch or swipe action. In some embodiments, the multimedia component 408 includes a front camera and/or a rear camera. When the device 400 is in an operating mode, such as a photographing mode or a video mode, the front camera and/or the rear camera can receive external multimedia data. Each front camera and rear camera may be a fixed optical lens system or have focus and optical zoom capability.
The audio component 410 is configured to output and/or input audio signals. For example, the audio component 410 includes a microphone (MIC). When the device 400 is in an operating mode, such as a call mode, a recording mode, or a speech recognition mode, the microphone is configured to receive an external audio signal. The received audio signal may be further stored in the memory 404 or sent via the communication component 416. In some embodiments, the audio component 410 further includes a speaker for outputting audio signals.
The I/O interface 412 provides an interface between the processing component 402 and peripheral interface modules, which may be a keyboard, a click wheel, buttons, and the like. These buttons may include, but are not limited to: a home button, a volume button, a start button, and a lock button.
The sensor component 414 includes one or more sensors for providing status assessments of various aspects of the device 400. For example, the sensor component 414 can detect the open/closed state of the device 400 and the relative positioning of components, such as the display and the keypad of the device 400; the sensor component 414 can also detect a change in position of the device 400 or a component of the device 400, the presence or absence of user contact with the device 400, the orientation or acceleration/deceleration of the device 400, and a change in temperature of the device 400. The sensor component 414 may include a proximity sensor configured to detect the presence of nearby objects without any physical contact. The sensor component 414 may also include a light sensor, such as a CMOS or CCD image sensor, for use in imaging applications. In some embodiments, the sensor component 414 may also include an acceleration sensor, a gyroscope sensor, a magnetic sensor, a pressure sensor, or a temperature sensor.
The communication component 416 is configured to facilitate wired or wireless communication between the device 400 and other devices. The device 400 can access a wireless network based on a communication standard, such as WiFi, 2G, or 3G, or a combination thereof. In an exemplary embodiment, the communication component 416 receives a broadcast signal or broadcast-related information from an external broadcast management system via a broadcast channel. In an exemplary embodiment, the communication component 416 further includes a near-field communication (NFC) module to facilitate short-range communication. For example, the NFC module may be implemented based on radio-frequency identification (RFID) technology, Infrared Data Association (IrDA) technology, ultra-wideband (UWB) technology, Bluetooth (BT) technology, and other technologies.
In an exemplary embodiment, the device 400 may be implemented by one or more application-specific integrated circuits (ASICs), digital signal processors (DSPs), digital signal processing devices (DSPDs), programmable logic devices (PLDs), field-programmable gate arrays (FPGAs), controllers, microcontrollers, microprocessors, or other electronic components, for performing the above identity verification method.
In an exemplary embodiment, there is also provided a non-transitory computer-readable storage medium including instructions, such as the memory 404 including instructions, the above instructions being executable by the processor 420 of the device 400 to perform the above method. For example, the non-transitory computer-readable storage medium may be a ROM, a random access memory (RAM), a CD-ROM, a magnetic tape, a floppy disk, an optical data storage device, or the like.
In an exemplary embodiment, there is also provided a non-transitory computer-readable storage medium, wherein when the instructions in the storage medium are executed by a processor of a mobile terminal, the mobile terminal is enabled to perform the above identity verification method.
Other embodiments of the disclosure will readily occur to those skilled in the art after considering the specification and practicing the invention disclosed herein. This application is intended to cover any variations, uses, or adaptations of the disclosure that follow the general principles of the disclosure and include common knowledge or customary technical means in the art not disclosed by the disclosure. The specification and embodiments are to be considered exemplary only, with the true scope and spirit of the disclosure being indicated by the following claims.
It should be understood that the disclosure is not limited to the precise construction described above and illustrated in the accompanying drawings, and that various modifications and changes may be made without departing from its scope. The scope of the disclosure is limited only by the appended claims.

Claims (15)

1. An identity verification method, characterized in that the method includes:
acquiring at least two face images collected in real time by a camera;
performing image processing on each collected face image to obtain a plurality of facial elements and facial features in each face image;
determining, according to the plurality of facial elements in each face image, whether a specified image exists in the at least two face images, the specified image being a face image in which a specified facial element among the plurality of facial elements meets a preset standard; and
when at least a preset number of the specified images exist in the at least two face images, passing the verification.
2. The method according to claim 1, characterized in that performing image processing on each collected face image to obtain the plurality of facial elements and facial feature information in each face image includes:
inputting each face image into a checking model to obtain the plurality of facial elements and facial features of each face image, the checking model being used for extracting the facial elements in a face image and analyzing the facial features, and the checking model being composed of a convolutional neural network and a plurality of loss functions.
3. The method according to claim 1, characterized in that the plurality of facial elements at least include eye state information, mouth state information, head state information, and nose state information.
4. The method according to claim 2, characterized in that the plurality of loss functions at least include a first loss function for detecting an eye state, a second loss function for detecting a mouth state, and a third loss function for detecting a head state.
5. The method according to claim 4, characterized in that the first loss function and the second loss function are two-class classification functions, and the third loss function is a three-class classification function.
6. The method according to claim 1, characterized in that passing the verification when at least a preset number of the specified images exist in the at least two face images includes:
when at least a preset number of face images in an eyes-open state exist in the at least two face images, passing the verification; or
when at least a preset number of face images in a mouth-open state exist in the at least two face images, passing the verification; or
when at least a preset number of face images in which the head is in a left-turn or right-turn state exist in the at least two face images, passing the verification.
7. The method according to claim 1, characterized in that the plurality of facial elements and the facial features are represented by a 197-dimensional vector, the 197-dimensional vector including two-dimensional coordinates of 95 facial feature points, two-dimensional eye state information, two-dimensional mouth state information, and three-dimensional head state information.
8. An identity verification device, characterized in that the device includes:
an image acquisition module, configured to acquire at least two face images collected in real time by a camera;
an image processing module, configured to perform image processing on each collected face image to obtain a plurality of facial elements and facial features in each face image;
a determining module, configured to determine, according to the plurality of facial elements in each face image, whether a specified image exists in the at least two face images, the specified image being a face image in which a specified facial element among the plurality of facial elements meets a preset standard; and
a verification module, configured to pass the verification when at least a preset number of the specified images exist in the at least two face images.
9. The device according to claim 8, characterized in that the image processing module is configured to:
input each face image into a checking model to obtain the plurality of facial elements and facial features of each face image, the checking model being used for extracting the facial elements in a face image and analyzing the facial features, and the checking model being composed of a convolutional neural network and a plurality of loss functions.
10. The device according to claim 8, characterized in that the plurality of facial elements at least include eye state information, mouth state information, head state information, and nose state information.
11. The device according to claim 9, characterized in that the plurality of loss functions at least include a first loss function for detecting an eye state, a second loss function for detecting a mouth state, and a third loss function for detecting a head state.
12. The device according to claim 11, characterized in that the first loss function and the second loss function are two-class classification functions, and the third loss function is a three-class classification function.
13. The device according to claim 8, characterized in that the verification module is configured to:
when at least a preset number of face images in an eyes-open state exist in the at least two face images, pass the verification; or
when at least a preset number of face images in a mouth-open state exist in the at least two face images, pass the verification; or
when at least a preset number of face images in which the head is in a left-turn or right-turn state exist in the at least two face images, pass the verification.
14. The device according to claim 8, characterized in that the plurality of facial elements and the facial features are represented by a 197-dimensional vector, the 197-dimensional vector including two-dimensional coordinates of 95 facial feature points, two-dimensional eye state information, two-dimensional mouth state information, and three-dimensional head state information.
15. An identity verification device, characterized by including:
a processor; and
a memory for storing instructions executable by the processor;
wherein the processor is configured to:
acquire at least two face images collected in real time by a camera;
perform image processing on each collected face image to obtain a plurality of facial elements and facial features in each face image;
determine, according to the plurality of facial elements in each face image, whether a specified image exists in the at least two face images, the specified image being a face image in which a specified facial element among the plurality of facial elements meets a preset standard; and
when at least a preset number of the specified images exist in the at least two face images, pass the verification.
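The claimed flow can be sketched in code as follows. This is an illustrative sketch only, not part of the patent text: the checking model (a convolutional neural network with multiple loss functions, per claims 2 and 4) is stubbed out, its output is assumed to be the 197-dimensional vector of claims 7 and 14 (95 two-dimensional feature points, plus two-class eye and mouth states and a three-class head state), and all function and class names here are hypothetical.

```python
from dataclasses import dataclass
from typing import Callable, List, Sequence

# Per claims 7/14: 197 dims = 95 (x, y) landmark pairs (190 values)
# + 2-dim eye state + 2-dim mouth state + 3-dim head state.
LANDMARK_DIMS = 95 * 2   # 190
EYE_DIMS = 2             # hypothetical order: [closed, open]
MOUTH_DIMS = 2           # hypothetical order: [closed, open]
HEAD_DIMS = 3            # hypothetical order: [left, front, right]
assert LANDMARK_DIMS + EYE_DIMS + MOUTH_DIMS + HEAD_DIMS == 197

@dataclass
class FaceElements:
    landmarks: Sequence[float]  # 190 values: 95 two-dimensional feature points
    eye: Sequence[float]        # two-class eye state scores
    mouth: Sequence[float]      # two-class mouth state scores
    head: Sequence[float]       # three-class head state scores

def split_output(vec: Sequence[float]) -> FaceElements:
    """Decompose the checking model's 197-dim output into facial elements."""
    assert len(vec) == 197
    return FaceElements(
        landmarks=vec[:190],
        eye=vec[190:192],
        mouth=vec[192:194],
        head=vec[194:197],
    )

def is_eyes_open(e: FaceElements) -> bool:
    # Hypothetical convention: index 1 is the "open" class.
    return e.eye[1] > e.eye[0]

def verify(frames: List[FaceElements],
           meets_standard: Callable[[FaceElements], bool],
           preset_number: int) -> bool:
    """Claim 1: pass verification when at least `preset_number` of the
    captured face images contain the specified facial element in a state
    that meets the preset standard (e.g. eyes open, claim 6)."""
    matching = sum(1 for e in frames if meets_standard(e))
    return matching >= preset_number
```

A caller would run each real-time camera frame through the model, split the 197-dim output with `split_output`, and pass the results to `verify` with the desired standard, e.g. `verify(frames, is_eyes_open, 2)`.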
CN201610543529.7A 2016-07-11 2016-07-11 Auth method and device Pending CN106169075A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201610543529.7A CN106169075A (en) 2016-07-11 2016-07-11 Auth method and device


Publications (1)

Publication Number Publication Date
CN106169075A true CN106169075A (en) 2016-11-30

Family

ID=58065929

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201610543529.7A Pending CN106169075A (en) 2016-07-11 2016-07-11 Auth method and device

Country Status (1)

Country Link
CN (1) CN106169075A (en)



Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105243386A (en) * 2014-07-10 2016-01-13 汉王科技股份有限公司 Face living judgment method and system
CN105518713A (en) * 2015-02-15 2016-04-20 北京旷视科技有限公司 Living human face verification method and system, computer program product
CN105184277A (en) * 2015-09-29 2015-12-23 杨晴虹 Living body human face recognition method and device
CN105260726A (en) * 2015-11-11 2016-01-20 杭州海量信息技术有限公司 Interactive video in vivo detection method based on face attitude control and system thereof
CN105678249A (en) * 2015-12-31 2016-06-15 上海科技大学 Face identification method aiming at registered face and to-be-identified face image quality difference
CN105701468A (en) * 2016-01-12 2016-06-22 华南理工大学 Face attractiveness evaluation method based on deep learning

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
ZHANPENG ZHANG et al.: "Facial Landmark Detection by Deep Multi-task Learning", ECCV 2014 *

Cited By (14)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108664879B (en) * 2017-03-28 2023-09-05 三星电子株式会社 Face verification method and device
CN108664879A (en) * 2017-03-28 2018-10-16 三星电子株式会社 Face authentication method and apparatus
CN107085733A (en) * 2017-05-15 2017-08-22 山东工商学院 Offshore infrared ship recognition methods based on CNN deep learnings
CN107491759A (en) * 2017-08-21 2017-12-19 厦门中控智慧信息技术有限公司 A kind of mixed mode register method and device
US10783331B2 (en) 2017-09-26 2020-09-22 Baidu Online Network Technology (Beijing) Co., Ltd. Method and apparatus for building text classification model, and text classification method and apparatus
CN107908635A (en) * 2017-09-26 2018-04-13 百度在线网络技术(北京)有限公司 Establish textual classification model and the method, apparatus of text classification
CN107908635B (en) * 2017-09-26 2021-04-16 百度在线网络技术(北京)有限公司 Method and device for establishing text classification model and text classification
CN108549849A (en) * 2018-03-27 2018-09-18 康体佳智能科技(深圳)有限公司 Pattern recognition system based on neural network and recognition methods
CN108596037A (en) * 2018-03-27 2018-09-28 康体佳智能科技(深圳)有限公司 Face identification system based on neural network and recognition methods
CN109165627A (en) * 2018-09-11 2019-01-08 广东惠禾科技发展有限公司 A kind of model building method, device and testimony of a witness checking method
CN109995761A (en) * 2019-03-06 2019-07-09 百度在线网络技术(北京)有限公司 Service processing method, device, electronic equipment and storage medium
CN109995761B (en) * 2019-03-06 2021-10-19 百度在线网络技术(北京)有限公司 Service processing method and device, electronic equipment and storage medium
CN111860057A (en) * 2019-04-29 2020-10-30 北京眼神智能科技有限公司 Face image blurring and living body detection method and device, storage medium and equipment
CN110634219A (en) * 2019-10-22 2019-12-31 软通动力信息技术有限公司 Access control identification system, method, equipment and storage medium

Similar Documents

Publication Publication Date Title
CN106169075A (en) Auth method and device
KR102142232B1 (en) Face liveness detection method and apparatus, and electronic device
CN105426867B (en) Recognition of face verification method and device
CN106295566B (en) Facial expression recognizing method and device
CN106897658B (en) Method and device for identifying human face living body
CN105631403B (en) Face identification method and device
TWI753271B (en) Resource transfer method, device and system
CN105608425B (en) The method and device of classification storage is carried out to photo
CN106548145A (en) Image-recognizing method and device
CN106295511B (en) Face tracking method and device
CN105654033B (en) Face image verification method and device
CN106204435A (en) Image processing method and device
CN105224924A (en) Living body faces recognition methods and device
CN109241835A (en) Image processing method and device, electronic equipment and storage medium
CN106295515B (en) Determine the method and device of the human face region in image
CN112036331B (en) Living body detection model training method, device, equipment and storage medium
CN107909113A (en) Traffic-accident image processing method, device and storage medium
CN109376631A (en) A kind of winding detection method and device neural network based
CN105426730A (en) Login authentication processing method and device as well as terminal equipment
CN109886080A (en) Human face in-vivo detection method, device, electronic equipment and readable storage medium storing program for executing
CN107886070A (en) Verification method, device and the equipment of facial image
CN107545248A (en) Biological characteristic biopsy method, device, equipment and storage medium
CN105528078B (en) The method and device of controlling electronic devices
CN106295530A (en) Face identification method and device
CN107463903A (en) Face key independent positioning method and device

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication
RJ01 Rejection of invention patent application after publication

Application publication date: 20161130