CN107871107A - Face authentication method and device - Google Patents

Face authentication method and device

Info

Publication number
CN107871107A
CN107871107A (application CN201610852196.6A)
Authority
CN
China
Prior art keywords
face
sample
loss
training
certification
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201610852196.6A
Other languages
Chinese (zh)
Inventor
Wang Yang (王洋)
Zhang Weilin (张伟琳)
Lu Xiaojun (陆小军)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Eyecool Technology Co Ltd
Original Assignee
Beijing Eyecool Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Eyecool Technology Co Ltd filed Critical Beijing Eyecool Technology Co Ltd
Priority to CN201610852196.6A
Publication of CN107871107A
Legal status: Pending

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 - Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10 - Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16 - Human faces, e.g. facial parts, sketches or expressions
    • G06V40/161 - Detection; Localisation; Normalisation
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 - Pattern recognition
    • G06F18/20 - Analysing
    • G06F18/24 - Classification techniques
    • G06F18/241 - Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
    • G06F18/2413 - Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches based on distances to training or reference patterns
    • G06F18/24133 - Distances to prototypes
    • G06F18/24137 - Distances to cluster centroïds
    • G06F18/2414 - Smoothing the distance, e.g. radial basis function networks [RBFN]
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 - Pattern recognition
    • G06F18/20 - Analysing
    • G06F18/24 - Classification techniques
    • G06F18/241 - Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
    • G06F18/2415 - Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches based on parametric or probabilistic models, e.g. based on likelihood ratio or false acceptance rate versus a false rejection rate
    • G06F18/24155 - Bayesian classification
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 - Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10 - Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16 - Human faces, e.g. facial parts, sketches or expressions
    • G06V40/172 - Classification, e.g. identification

Abstract

Embodiments of the invention provide a face authentication method and device. The method includes: extracting features from a plurality of face images to be verified using a convolutional neural network model trained in advance under the joint supervision of a classification loss and an authentication loss, to obtain a plurality of face feature vectors of the face images to be verified; reducing the dimensionality of the face feature vectors by principal component analysis, to obtain a plurality of dimension-reduced face feature vectors; and performing classification calculation on the dimension-reduced face feature vectors with a joint Bayesian classifier, to determine whether the face images to be verified belong to the same person. The deep convolutional network model supervised by both a classification loss and an authentication loss can, during face authentication, maintain good classification ability while also ensuring high similarity between faces of the same user, and it reduces the amount of computation.

Description

Face authentication method and device
Technical field
The present invention relates to the technical field of face recognition, and in particular to a face authentication method and device.
Background art
Face authentication is the task of judging, from efficient representations of two images, whether they belong to the same person. As a biometric recognition method, face authentication has advantages over methods such as fingerprint recognition and iris recognition, including being contactless and non-compulsory, and is therefore widely used in fields such as finance and information security. However, because the capture of face images is highly susceptible to changes in the external environment (such as illumination), and because the face is a non-rigid object, face authentication in real-world scenarios remains a great challenge.
Existing face authentication methods differ in how they characterize the face. One class is based on geometric features: the geometric relations and texture features of facial components (such as the eyes, mouth, and nose) are extracted, the face image is associated with a graph model, and face matching is converted into a form of graph-model matching. However, such methods based on geometric and/or texture features involve considerable subjectivity in the selection of geometric feature points, and their precision depends heavily on the accuracy of feature-point localization, leading to high face authentication error rates.
To address the high error rate caused by the subjectivity of feature-point selection, the prior art also teaches subspace-based face authentication methods. In this scheme, the face image (or an encoding of it) is projected into a low-dimensional subspace through a linear or nonlinear mapping, the vector in the new space is used as the face feature, and the features are finally classified by similarity or distance metrics to judge whether two face images belong to the same class, i.e. the same person. However, subspace methods require a good face representation operator, and a single operator usually cannot express the many variations of the face. Their handling of changes in illumination, pose, and expression is therefore unsatisfactory, and the same person may be authenticated incorrectly under different expressions or environments.
To ensure both the accuracy of face authentication and robustness to changes in expression or environment for the same person, learning-based face authentication methods have also been proposed in the prior art. These methods follow the idea of minimizing a loss function so that the algorithm automatically extracts features that characterize the face. Artificial neural networks are one such learning-based method: they learn face features by minimizing a loss function equivalent to the target objective, and then use those features for the face recognition task. Deep learning methods are similar to classical artificial neural networks but model the brain more faithfully, and the features they learn are higher-level and more abstract. Although prior-art methods based on deep convolutional networks can achieve high accuracy in face recognition, and the network can adaptively extract features from face images that are easy to classify, the amount of computation grows rapidly as the network deepens. The increase in parameters also requires large amounts of labeled data for training, to prevent problems such as over-fitting. Moreover, although an existing network supervised by a single identification signal can achieve good classification ability, it cannot guarantee high similarity between samples of the same class. Furthermore, when the triplet-loss method used in the prior art trains a convolutional network, its convergence and effectiveness depend heavily on the selection of negative sample pairs (pairs of face samples from different people); a poor choice of negative pairs easily prevents the algorithm from converging.
It can be seen that prior-art face authentication schemes based on deep convolutional networks generally suffer from a large amount of computation, and from the inability to achieve both good classification ability and high similarity between samples of the same class.
Summary of the invention
The technical problem to be solved by the embodiments of the present invention is to provide a face authentication method and device, so as to solve the problems in prior-art deep convolutional network face authentication schemes of a large amount of computation and the inability to achieve both good classification ability and high same-class similarity.
To solve the above problems, according to one aspect of the present invention, a face authentication method is disclosed, including:
extracting features from a plurality of face images to be verified using a convolutional neural network model trained in advance under the supervision of a classification loss and an authentication loss, to obtain a plurality of face feature vectors of the plurality of face images to be verified;
reducing the dimensionality of the plurality of face feature vectors by principal component analysis, to obtain a plurality of dimension-reduced face feature vectors;
performing classification calculation on the plurality of dimension-reduced face feature vectors using a joint Bayesian classifier, to determine whether the plurality of face images to be verified are face images of the same person.
According to another aspect of the present invention, a face authentication device is also disclosed, including:
an extraction module, configured to extract features from a plurality of face images to be verified using a convolutional neural network model trained in advance under the supervision of a classification loss and an authentication loss, to obtain a plurality of face feature vectors of the plurality of face images to be verified;
a dimension-reduction module, configured to reduce the dimensionality of the plurality of face feature vectors by principal component analysis, to obtain a plurality of dimension-reduced face feature vectors;
a classification module, configured to perform classification calculation on the plurality of dimension-reduced face feature vectors using a joint Bayesian classifier, to determine whether the plurality of face images to be verified are face images of the same person.
Compared with the prior art, the embodiments of the present invention have the following advantages:
the deep convolutional network model supervised by both a classification loss and an authentication loss can, during face authentication, maintain good classification ability while also ensuring high similarity between faces of the same user, and reduces the amount of computation; dimensionality reduction by principal component analysis eliminates redundancy between features and improves the authentication speed of the images; and the joint Bayesian classifier strengthens the discriminability of the face features when classifying the dimension-reduced images, ensuring the accuracy of face authentication.
Brief description of the drawings
Fig. 1 is a flowchart of the steps of an embodiment of the face authentication method of the present invention;
Fig. 2 is a flowchart of the steps of an embodiment of the method of the present invention for supervised training of the convolutional neural network model using a classification loss and an authentication loss;
Fig. 3 is a schematic diagram of a positive sample pair constructed using a homing method according to an embodiment of the present invention;
Fig. 4 is a structural block diagram of an embodiment of the face authentication device of the present invention;
Fig. 5 is a structural block diagram of another embodiment of the face authentication device of the present invention, based on Fig. 4.
Detailed description of the embodiments
To make the above objects, features, and advantages of the present invention easier to understand, the present invention is further described in detail below with reference to the accompanying drawings and specific embodiments.
First embodiment
Referring to Fig. 1, a flowchart of the steps of an embodiment of the face authentication method of the present invention is shown. The method may specifically include the following steps:
Step 101: extract features from a plurality of face images to be verified using a convolutional neural network model trained in advance under the supervision of a classification loss and an authentication loss, to obtain a plurality of face feature vectors of the plurality of face images to be verified.
In the embodiment of the present invention, features are extracted from the face images to be verified using a convolutional neural network model supervised jointly by a classification loss and an authentication loss, which yields features robust to changes in illumination, expression, and pose. Moreover, a single-block scheme is adopted: one feature is extracted directly from each face image, rather than dividing each face image into multiple blocks, extracting a feature from each block, and recombining them into one feature. This reduces the amount of computation.
Furthermore, a convolutional neural network model supervised by both a classification loss and an authentication loss can, while maintaining good classification ability, also ensure that the feature distance between samples of the same class is sufficiently small, i.e. that the face similarity of the same user is high.
Because an image input to the convolutional neural network model may contain not only the face but also parts such as hair and background, face detection also needs to be performed on the input image. Optionally, in one embodiment of the application, before step 101 is performed, the method according to the embodiment of the present invention may further include:
first, performing face detection on each of the images to be verified, to determine their face regions;
Specifically, a cascade AdaBoost algorithm (an iterative algorithm) may be used to perform face detection on each image to be verified, and the determined face region may be marked on each image in the form of a rectangular box.
Next, feature-point localization is performed on each face region, to determine a plurality of localization points for each facial feature.
Specifically, the Supervised Descent Method (SDM) may be used to perform feature-point localization on the face region of each image to be verified. For example, to locate features such as the eyebrows, eyes, nose, and mouth (i.e. the facial features) in the face region, this method can determine a plurality of anchor points for each feature, so that each feature is localized.
Then, the center point of each eye is determined from the plurality of anchor points of each facial feature.
Specifically, for a face region, each facial feature contains a plurality of anchor points, but only the center point of each eye needs to be determined here. Because the eyes are arranged symmetrically, and their centers define this symmetric relation most clearly, aligning a face region to a template region only requires determining the two center points of the two eyes.
Then, the center point of each eye in the face region is aligned to a preset eye coordinate position.
Specifically, because the sizes and proportions of the faces to be verified differ, authentication with this model requires face images whose size and proportions match the convolutional neural network model. The eye center points determined in the preceding steps are therefore used here: the two eye center points of the face region in each image to be verified are aligned to preset eye coordinate positions, for example the coordinates (30, 30) and (30, 70). The whole face region can thus be aligned into a preset template coordinate system that contains the above two eye coordinates.
Completing face alignment using the eye center points in this way simplifies the alignment step and improves the preprocessing speed of the face images.
Then, a similarity transformation is applied to the aligned face region, and the transformed face region is normalized.
The size of the normalized face region is kept consistent with the template standard, here 100x100 pixels, so that all images to be verified are processed into images of the same size, which benefits the subsequent authentication processing.
Finally, the normalized face region is converted into a gray-level image to form the face image to be verified.
Converting the color face region into a gray-level image avoids the interference that image color introduces into image authentication, and improves the accuracy of image authentication.
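As a concrete illustration of the eye-center alignment described above, the following sketch solves the similarity transform (rotation, scale, and translation) that maps the two detected eye centers onto the template coordinates (30, 30) and (30, 70). The helper names are hypothetical, and detection plus SDM localization are assumed to have already produced the two eye centers as (x, y) pairs:

```python
import numpy as np

def similarity_from_eyes(left_eye, right_eye,
                         tmpl_left=(30.0, 30.0), tmpl_right=(30.0, 70.0)):
    """Similarity transform mapping the detected eye centers onto the
    template eye coordinates of the 100x100 template."""
    # Represent 2-D points as complex numbers: a similarity transform is
    # z -> a*z + b, with complex a (rotation + scale) and b (translation).
    s1, s2 = complex(*left_eye), complex(*right_eye)
    d1, d2 = complex(*tmpl_left), complex(*tmpl_right)
    a = (d2 - d1) / (s2 - s1)
    b = d1 - a * s1
    # Equivalent 2x3 matrix acting on column vectors [x, y, 1]^T.
    return np.array([[a.real, -a.imag, b.real],
                     [a.imag,  a.real, b.imag]])

def warp_point(M, p):
    """Apply the 2x3 transform to a single (x, y) point."""
    x, y = p
    return M @ np.array([x, y, 1.0])
```

In practice the resulting matrix would be used to warp the whole face region (e.g. with an image-warping routine) before the 100x100 normalization and gray-level conversion.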
Step 102: reduce the dimensionality of the plurality of face feature vectors by principal component analysis, to obtain a plurality of dimension-reduced face feature vectors.
Because the face feature vectors obtained through the deep convolutional network have some correlation and redundancy between them, dimensionality reduction by principal component analysis is needed before classification. For example, if the dimension of the feature vector obtained from the deep convolutional network is 2048, and the leading principal components explaining 95% of the variance are kept during principal component analysis, a new face feature vector of dimension 404 can be obtained.
That is, by applying principal component analysis to each face feature vector, the collinearity and redundancy of the features can be eliminated, which in turn improves the authentication speed of the images.
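The 95%-variance reduction described above can be sketched in plain numpy (a minimal PCA; the resulting dimension, 404 in the text's example, depends on the data):

```python
import numpy as np

def pca_fit(X, var_keep=0.95):
    """Fit PCA on row-vector data X and keep the leading components
    that explain `var_keep` of the variance (95% as in the text)."""
    mean = X.mean(axis=0)
    Xc = X - mean
    # SVD of centered data; squared singular values are component variances.
    U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
    ratio = np.cumsum(S**2) / np.sum(S**2)
    k = int(np.searchsorted(ratio, var_keep)) + 1
    return mean, Vt[:k]            # mean and (k x d) projection basis

def pca_transform(x, mean, basis):
    """Project one feature vector into the reduced space."""
    return basis @ (x - mean)
```

This is a sketch of the standard method, not the patent's exact implementation; a production system would fit the basis once on a training set and reuse it at authentication time.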
Step 103: perform classification calculation on the plurality of dimension-reduced face feature vectors using a joint Bayesian classifier, to determine whether the plurality of face images to be verified are face images of the same person.
To strengthen the discriminability of the features, the face feature vectors obtained after principal component analysis can be classified here with a joint Bayesian classifier.
The joint Bayesian algorithm is as follows.
Input the feature training set {(f_ij, l_ij)} (i = 1, 2, ..., N; j = 1, 2, ..., m_i), where f_ij denotes the dimension-reduced face feature vector of the j-th face image of the i-th person, l_ij its label, and m_i the number of images of the i-th person.
Initialize the covariance matrices S_μ (identity, i.e. between-person variation) and S_ε (intra-personal variation):
Repeat the following steps a–c in a loop until the algorithm converges.
Step a: compute the matrices F and G:
F = S_ε^(-1)   (3)
G = -(m_i·S_μ + S_ε)^(-1)·S_μ·S_ε^(-1)   (4)
Step b: compute μ_i and ε_ij (in the standard joint Bayesian formulation, the expected identity and intra-personal components):
μ_i = S_μ·(F + m_i·G)·Σ_j f_ij   (5)
ε_ij = f_ij + S_ε·G·Σ_k f_ik   (6)
Step c: update the covariance matrices S_μ and S_ε according to the computed μ_i and ε_ij:
S_μ = cov(μ)   (7)
S_ε = cov(ε)   (8)
The algorithm is deemed to have converged when, within an iteration, the absolute value of the difference between the S_μ input to step a and the S_μ output by step c is less than a first preset threshold, and the absolute value of the difference between the S_ε input to step a and the S_ε output by step c is less than a second preset threshold. The embodiment of the present invention then computes the matrices F, G, and A from the converged S_μ and S_ε according to the following formulas:
F = S_ε^(-1)   (9)
G = -(2S_μ + S_ε)^(-1)·S_μ·S_ε^(-1)   (10)
A = (S_μ + S_ε)^(-1) - (F + G)   (11)
The similarity of two face images is measured by the log-likelihood ratio r(x_1, x_2) of their features, and finally the score r(x_1, x_2) of the two face images is output:
r(x_1, x_2) = f_1^T·A·f_1 + f_2^T·A·f_2 - 2·f_1^T·G·f_2   (12)
where f_1 and f_2 are the features of the face images x_1 and x_2 after PCA dimensionality reduction.
That is, the score r(x_1, x_2) of a pair of face images x_1 and x_2 can be obtained through the above Bayesian algorithm.
Face authentication is then performed according to the score. Specifically, the face authentication process judges whether the score r(x_1, x_2) of a pair of face images x_1, x_2 is greater than a given threshold T. If r is greater than T, x_1 and x_2 are two face images of the same person; otherwise, x_1 and x_2 are face images of different people.
It should be noted that although authentication is explained here with a pair of face images as an example, the embodiment of the present invention does not limit the number of face images to be authenticated; since authentication of more than two face images is similar to the previous example, it is not repeated here.
That is, performing classification calculation on the dimension-reduced face feature vectors with a joint Bayesian classifier strengthens the discriminability of the face features and thus ensures the accuracy of face authentication.
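Under the assumption of zero-mean, PCA-reduced features, the training loop of steps a–c and the scoring of formulas (9)–(12) can be sketched as follows. This is a simplified illustration, not the patent's implementation: a fixed iteration count stands in for the thresholded convergence test, and the standard joint Bayesian E-step expressions are assumed for the elided step-b formulas:

```python
import numpy as np

def train_joint_bayes(groups, iters=20):
    """EM loop of steps a-c. `groups` is a list of (m_i x d) arrays, one
    per person; returns S_mu (identity) and S_eps (intra-personal)."""
    d = groups[0].shape[1]
    S_mu, S_eps = np.eye(d), np.eye(d)
    for _ in range(iters):
        mus, eps = [], []
        for X in groups:
            m = len(X)
            F = np.linalg.inv(S_eps)                          # step a
            G = -np.linalg.inv(m * S_mu + S_eps) @ S_mu @ F
            s = X.sum(axis=0)
            mus.append(S_mu @ (F + m * G) @ s)                # step b
            eps.append(X + S_eps @ G @ s)
        Mu, E = np.array(mus), np.vstack(eps)
        S_mu = Mu.T @ Mu / len(Mu)                            # step c
        S_eps = E.T @ E / len(E)
    return S_mu, S_eps

def score(f1, f2, S_mu, S_eps):
    """Log-likelihood ratio r(x1, x2) of a pair, with m = 2 in G."""
    F = np.linalg.inv(S_eps)                                  # eq. (9)
    G = -np.linalg.inv(2 * S_mu + S_eps) @ S_mu @ F           # eq. (10)
    A = np.linalg.inv(S_mu + S_eps) - (F + G)                 # eq. (11)
    return f1 @ A @ f1 + f2 @ A @ f2 - 2 * f1 @ G @ f2        # eq. (12)
```

A pair is accepted as the same person when the score exceeds the threshold T, exactly as in the decision rule above.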
Thus, the deep convolutional network model supervised-trained by the classification loss and the authentication loss can, during face authentication, maintain good classification ability while also ensuring high similarity between faces of the same user, and reduces the amount of computation; dimensionality reduction by principal component analysis improves the authentication speed of the images; and the joint Bayesian classifier strengthens the discriminability of the face features when classifying the dimension-reduced images, ensuring the accuracy of face authentication.
Second embodiment
Referring to Fig. 2, on the basis of the above embodiment, this embodiment further discusses the face authentication method of the present invention.
Before the above convolutional neural network model is used to extract face features, it must be trained under the supervision of the classification loss and the authentication loss. Fig. 2 is a flowchart of this supervised training of the convolutional neural network model using the classification loss and the authentication loss according to one embodiment of the invention. The supervised training flow may specifically include the following steps:
Step 201: perform computation on a training sample pair input to the convolutional neural network model, to obtain the face feature vectors of the training sample pair, wherein the training sample pair is a positive sample pair selected from a face data set containing face labels.
A positive sample pair is a pair of face samples of the same person, and a face label identifies a user; for example, face A represents user A and carries the label A.
According to one embodiment of the present invention, performing computation on the training sample pair input to the convolutional neural network model to obtain its face feature vectors can be realized through the following sub-steps:
Sub-step S11: perform convolution calculation separately on the positive sample pair input to the convolutional neural network model, to obtain convolution results.
Specifically, assume that the input of layer (l+1) of the current network is x^l, and that the weights connecting layer l to layer (l+1) and the bias of layer (l+1) are W^(l+1) and b^(l+1), respectively; then the convolution result z^(l+1) of layer (l+1) is as shown in formula (13):
z^(l+1) = W^(l+1) * x^l + b^(l+1)   (13)
Sub-step S12: activate the convolution results of the positive sample pair separately, to obtain activated convolution results.
Specifically, applying the ReLU activation function to the above convolution result yields the output x^(l+1) of this layer:
x^(l+1) = max(0, z^(l+1))   (14)
Sub-step S13: down-sample the activated convolution results of the positive sample pair, to obtain the face feature vectors of the positive sample pair.
Specifically, to make the convolved features more abstract and sparse, max-pooling (Max-Pooling) down-sampling is applied to the activated convolution results of this layer. The Max-Pooling operator is defined as follows:
y_i = max over the s×s non-overlapping local region containing x_i   (15)
where y_i denotes the result of down-sampling the non-overlapping local region of size s×s around neuron x_i.
Optionally, to prevent problems such as vanishing gradients during network training, the embodiment of the present invention may also apply batch normalization (Batch Normalization) before the activation of the convolution results.
For example, if the convolutional neural network model contains q layers, the above computations S11–S13 are carried out in each layer, and the output of one layer is passed to the next layer as its input, where the above calculation continues; however, each layer's convolution parameters (including the above weights W and biases b) are different. The final operation result is the face feature vector of the positive sample pair.
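A minimal single-channel numpy sketch of one layer's forward pass described in sub-steps S11–S13 (convolution per formula (13), ReLU activation, and non-overlapping max-pooling). Real models use multi-channel tensors and learned parameters; the convolution here is cross-correlation, as in most deep-learning frameworks:

```python
import numpy as np

def conv2d_valid(x, w, b):
    """z = W * x + b, formula (13): single channel, 'valid' padding."""
    kh, kw = w.shape
    H, W_ = x.shape
    out = np.empty((H - kh + 1, W_ - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(x[i:i+kh, j:j+kw] * w) + b
    return out

def relu(z):
    """Activation of sub-step S12: x = max(0, z)."""
    return np.maximum(0.0, z)

def max_pool(x, s=2):
    """Max-Pooling over non-overlapping s x s regions (sub-step S13)."""
    H, W_ = x.shape
    H2, W2 = H // s, W_ // s
    return x[:H2*s, :W2*s].reshape(H2, s, W2, s).max(axis=(1, 3))
```

Stacking q such layers, each with its own W and b, yields the final feature vector described in the text.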
Step 202: calculate the face feature vectors using a dual-signal loss function, to obtain a training loss, wherein the dual-signal loss function is the weighted sum of a classification loss function and an authentication loss function.
The dual-signal loss function is defined as the weighted sum of the losses output by the authentication network and the classification network, i.e.
loss = l_1 + λ·l_2   (16)
Classification loss function l_1: supervised training with this loss gives the network model good classification performance and generalization.
Authentication loss function l_2: supervised training with this loss makes the distance between face samples of the same person smaller, thereby strengthening the face authentication ability.
To balance the above two losses (i.e. the classification loss and the authentication loss), the preferred weight of the present invention is λ = 0.005, which effectively avoids the problem of the algorithm failing to converge, or converging slowly, because of the loss.
Here, x_i and x_j are the feature vectors of two face images of the same person.
Thus, by adjusting the weight λ = 0.005 that balances the two losses, the problem of the algorithm failing to converge or converging slowly because of the loss can be effectively solved.
According to one embodiment of the present invention, calculating the face feature vectors with the dual-signal loss function to obtain the training loss can be realized through the following sub-steps.
When calculating the training loss, in order for the trained model to have both good identification/classification ability (e.g. the model recognizes that an image shows the face of user A) and authentication ability (i.e. the model can verify whether image 1 and image 2 show the face of the same person), sub-steps S21, S22 and S23, S24 below are used to train the classification ability and the authentication ability, respectively.
Sub-step S21: calculate the multiple-regression probability values of the face feature vectors of the training sample pair.
Since the training sample is the above positive sample pair, when training the recognition ability, a multiple-regression probability value can be calculated, according to the multiple-regression definition (formula 19), for the face feature vector of either face sample of the positive sample pair obtained in step 201.
Sub-step S22: calculate the multiple-regression probability values using the classification loss function, to obtain the classification loss.
The classification loss can be calculated using formula 18.
It should be noted that the present invention does not limit the execution order between sub-step S21 and sub-step S23.
Sub-step S23: calculate the Euclidean distance of the face feature vectors of the training sample pair.
The Euclidean distance of the positive sample pair can be calculated using formula 20, which gives the distance between the two samples computed by the model:
Euclid(x_i, x_j) = ||x_i - x_j||_2   (20)
where x_i and x_j are the features of two face images of the same person, and θ is the parameter of the multiple regression.
Sub-step S24: calculate the Euclidean distance using the authentication loss function, to obtain the authentication loss.
The authentication loss can be obtained using formula 18.
Sub-step S25: according to the weight between the classification loss function and the authentication loss function in the dual-signal loss function, calculate the weighted sum of the classification loss and the authentication loss, to obtain the training loss.
The training loss can be calculated according to formulas 16–18 and the weight λ = 0.005.
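Formulas (17)–(19) are not fully reproduced in this text, so the following sketch assumes the common choices: softmax (multiple-regression) cross-entropy for the classification loss l_1, and the squared Euclidean distance of the positive pair for the authentication loss l_2, combined with the stated weight λ = 0.005. It is an illustration of the weighting scheme, not the patent's exact loss:

```python
import numpy as np

def softmax_cross_entropy(logits, label):
    """Classification signal: multiple-regression (softmax) probability
    followed by negative log-likelihood of the true identity label."""
    z = logits - logits.max()          # shift for numerical stability
    p = np.exp(z) / np.exp(z).sum()
    return -np.log(p[label])

def dual_signal_loss(logits_i, logits_j, label, f_i, f_j, lam=0.005):
    """loss = l1 + lam * l2 (formula 16): classification loss on both
    samples of the positive pair, plus the weighted pair distance."""
    l1 = (softmax_cross_entropy(logits_i, label)
          + softmax_cross_entropy(logits_j, label))
    l2 = np.sum((f_i - f_j) ** 2)      # squared Euclidean distance
    return l1 + lam * l2
```

With λ = 0.005, the classification signal dominates early training while the authentication term steadily pulls same-person features together, matching the convergence rationale given above.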
Step 203: adjust the convolution parameters of the convolutional neural network model according to the training loss until the training loss converges, to obtain the convolutional neural network model supervised-trained by the classification loss and the authentication loss.
The convolution parameters of each layer of the convolutional neural network model (including each layer's weights W and biases b) can be adjusted according to the training loss.
Table 1 shows the parameters of the convolutional neural network model supervised-trained by the above classification loss and authentication loss:
Table 1
Alternatively, on the basis of the above embodiments, before the above convolutional neural network model is supervised-trained by using the classification loss and the authentication loss, a face data set needs to be constructed, and the above positive sample pairs are constructed from the face data set.
According to one embodiment of the present invention, constructing the face data set may include two steps: data cleaning and data expansion.
The data cleaning step may include the following sub-steps:
Sub-step S31, judging whether a face sample in the face database has a face labelling error; sub-step S32, modifying the face label of any face sample in the face database that has a face labelling error;
Wherein, because the constructed face database is formed by merging multiple face databases, the same person may carry different labels in different databases. After merging, multiple face samples that should share the same label would then carry different labels, as if they represented two different users. Therefore, this embodiment of the present invention needs a step of judging whether the face data of the same user has a labelling error; if face samples with face labelling errors are found in the merged face database, they can be modified so that the face samples of the same user use a unified label.
Sub-step S33, calculating the similarity between face samples belonging to different people in the face database; sub-step S34, if the similarity is greater than a preset threshold, performing manual verification, and upon receiving the user's confirmation of the similarity, merging the face samples belonging to different people in the face database into face samples belonging to the same person;
Likewise, differences in expression, illumination, posture and the like may cause two classes of face samples in the constructed face database to be classified as different users. Therefore, this embodiment of the present invention needs a step of judging whether face data belonging to different users are highly similar; highly similar face data found in this way need to be merged into face samples belonging to the same person.
Wherein, a classifier can be used to measure the similarity between different people; for data of different people whose average similarity exceeds 80%, the two can be judged to be the same user, and the two pieces of data are then merged.
Wherein, the average similarity S_ij between the i-th class and the j-th class of face samples is defined as shown in formula (21):
S_ij = (1 / (N_Ωi · N_Ωj)) · Σ_{x∈Ω_i} Σ_{y∈Ω_j} cosine(x, y) (21)
Wherein, cosine(x, y) is the cosine distance between x and y, Ω_i and Ω_j are the face sample sets of the i-th class and the j-th class, and N_Ωi and N_Ωj are respectively the numbers of face samples the two classes contain.
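The average inter-class similarity described here — the mean pairwise cosine similarity between two sample sets — can be sketched as follows. Computing raw cosine similarity directly (rather than through a trained classifier, as the text mentions) is a simplification, and the toy vectors are illustrative.

```python
import numpy as np

def cosine(x, y):
    # Cosine similarity between two feature vectors.
    return float(np.dot(x, y) / (np.linalg.norm(x) * np.linalg.norm(y)))

def average_similarity(class_i, class_j):
    """Mean pairwise cosine similarity between two face-sample sets
    (the S_ij quantity of formula 21)."""
    total = sum(cosine(x, y) for x in class_i for y in class_j)
    return total / (len(class_i) * len(class_j))

# Two nearly identical "classes" score close to 1, flagging them for merging.
a = [np.array([1.0, 0.0]), np.array([0.8, 0.2])]
b = [np.array([1.0, 0.1])]
s = average_similarity(a, b)
```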
Data expansion can be performed on the face sample data after the above data cleaning, so that when the positive sample pairs in the face data set are used for model training, the model can identify and authenticate more samples belonging to the same user.
Data extending step can include following sub-step:
Sub-step S35, performing angle rotation on the face samples belonging to the same person in the face database;
Wherein, the faces in the cleaned face data set can be rotated by ±5, ±10, ±15 and ±20 degrees.
Sub-step S36, obtaining multiple zoomed-in or zoomed-out face samples by performing similarity transformation on the angle-rotated face samples;
Wherein, the rotated facial images can be zoomed in or out by, for example, 2 times and 4 times respectively through similarity transformation, obtaining multiple face samples belonging to the same user.
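The rotation-plus-scaling expansion of sub-steps S35 and S36 can be sketched as a set of 2×2 similarity matrices. The patent applies these operations to images; treating them here as coordinate transforms, and the exact enumeration of angle/scale combinations, are our assumptions for illustration.

```python
import numpy as np

ANGLES = [-20, -15, -10, -5, 5, 10, 15, 20]   # degrees, as quoted in the text
SCALES = [2.0, 4.0]                            # zoom factors quoted in the text

def similarity_matrix(angle_deg, scale=1.0):
    """2x2 similarity transform (rotation plus uniform scale).

    A full implementation would apply this to pixel coordinates about the
    image centre and resample the image."""
    t = np.deg2rad(angle_deg)
    c, s = np.cos(t), np.sin(t)
    return scale * np.array([[c, -s], [s, c]])

def augmentation_transforms():
    # Pure rotations, plus every (angle, scale) combination.
    mats = [similarity_matrix(a) for a in ANGLES]
    mats += [similarity_matrix(a, s) for a in ANGLES for s in SCALES]
    return mats

mats = augmentation_transforms()   # 8 + 16 = 24 variants per face sample
```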
Sub-step S37, supplementing the multiple face samples into the face data of the same person's face samples, to obtain the constructed face data set.
Alternatively, on the basis of the above embodiments, each user corresponds to a sample set; if such a set includes, for example, 10 samples, then the positive-pair construction method of the prior art would need to construct all C(10, 2) = 45 pairwise combinations as positive sample pairs, which evidently increases the computation of model training and reduces the training speed. Therefore, according to one embodiment of the present invention, the positive sample pairs can be constructed on the face data set obtained after the above cleaning and expansion steps, for the training of the convolutional neural network model.
In one embodiment of the present application, when constructing the positive sample pairs on the face data set, the multiple face samples belonging to the same person in the cleaned and expanded face data set can first be determined; the multiple face samples are arranged into a ring structure; and every two adjacent face samples in the ring structure are successively constructed into a positive sample pair.
Specifically, suppose a certain class's sample set in the cleaned and expanded face data set includes 5 face samples, respectively f1, f2, …, f5. The positive sample pairs constructed for this class based on the above ring method are then as shown in Fig. 3: every two adjacent face samples in the ring structure are connected in turn to form a positive sample pair, yielding the positive sample pairs (f1, f2), (f2, f3), (f3, f4), (f4, f5) and (f5, f1). In this way, direct or indirect connections are established between the samples of the same person, which ensures the intra-class correlation between samples, and also reduces the computation of model training and improves the model training speed.
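The ring construction described above can be sketched directly; each of the n samples is paired with its successor, with the last wrapping around to the first:

```python
def ring_positive_pairs(samples):
    """Arrange one person's samples in a ring and pair each neighbour:
    n samples yield n positive pairs instead of n*(n-1)/2 combinations."""
    n = len(samples)
    return [(samples[i], samples[(i + 1) % n]) for i in range(n)]

pairs = ring_positive_pairs(["f1", "f2", "f3", "f4", "f5"])
# -> [("f1","f2"), ("f2","f3"), ("f3","f4"), ("f4","f5"), ("f5","f1")]
```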
By means of the technical solutions of the above embodiments of the present invention, the embodiments extract facial image features with a deep convolutional network supervised by both the authentication loss and the classification loss. Not only can features robust to variations such as illumination, expression and posture be obtained, but supervised training with the classification loss function also gives the network good classification ability; supervised training with the Euclidean loss function on positive sample pairs (pairs of face samples of the same person) brings the feature distances of the same person's face samples sufficiently close. Then, the collinearity and redundancy of the features are eliminated by the principal component analysis method, and the joint Bayesian method further enhances the discrimination of the features to complete classification. This avoids the non-convergence of the algorithm caused by poorly selected negative samples, and improves computation speed while guaranteeing the accuracy of face authentication.
It should be noted that the method embodiments are expressed as a series of action combinations for the sake of brief description; however, those skilled in the art should know that the embodiments of the present invention are not limited by the described action sequence, because according to the embodiments of the present invention, some steps may be performed in other orders or simultaneously. Secondly, those skilled in the art should also know that the embodiments described in this description are preferred embodiments, and the actions involved are not necessarily required by the embodiments of the present invention.
Corresponding to the methods provided by the above embodiments of the present invention, referring to Fig. 4, a structural block diagram of an embodiment of a face authentication device 400 of the present invention is shown, which may specifically include the following modules:
an extraction module 401, configured to extract features from multiple facial images to be measured by using a convolutional neural network model supervised-trained in advance by a classification loss and an authentication loss, to obtain multiple face feature vectors of the multiple facial images to be measured;
a dimensionality reduction module 402, configured to perform dimensionality reduction on the multiple face feature vectors by using a principal component analysis method, to obtain multiple dimension-reduced face feature vectors;
a classification module 403, configured to perform classified calculation on the multiple dimension-reduced face feature vectors by using a joint Bayesian classifier, to determine whether the multiple facial images to be measured are facial images of the same person.
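The PCA step performed by the dimensionality reduction module can be sketched with NumPy as follows; the feature dimension and component count are illustrative, not values from the patent:

```python
import numpy as np

def pca_fit(features, n_components):
    """Fit a PCA projection on a (num_samples, dim) feature matrix.

    Keeping only the top principal directions removes the collinearity
    and redundancy among features that the text mentions."""
    mean = features.mean(axis=0)
    centred = features - mean
    # Principal directions via SVD of the centred data matrix.
    _, _, vt = np.linalg.svd(centred, full_matrices=False)
    return mean, vt[:n_components]

def pca_transform(x, mean, components):
    # Project centred features onto the retained principal directions.
    return (x - mean) @ components.T

rng = np.random.default_rng(1)
feats = rng.normal(size=(100, 16))            # stand-in for CNN face features
mean, comps = pca_fit(feats, n_components=8)
reduced = pca_transform(feats, mean, comps)   # shape (100, 8)
```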
Referring to Fig. 5, in an optional implementation of the present invention, on the basis of Fig. 4, the device 400 further includes:
an operation module 404, configured to perform an operation on a training sample pair input to the convolutional neural network model, to obtain face feature vectors of the training sample pair, wherein the training sample pair is a positive sample pair and is selected from a face data set including face labels;
a calculation module 405, configured to calculate on the face feature vectors by using a dual-signal loss function, to obtain a training loss, wherein the dual-signal loss function is a weighted sum of a classification loss function and an authentication loss function;
an adjusting module 406, configured to adjust the convolution parameters of the convolutional neural network model according to the training loss to make the training loss converge, obtaining the convolutional neural network model supervised-trained by the classification loss and the authentication loss.
Alternatively, the calculation module 405 includes the following submodules (not shown):
a multinomial regression calculation submodule, configured to calculate multinomial regression probability values of the face feature vectors of the training sample pair;
a classification loss calculation submodule, configured to calculate on the multinomial regression probability values by using the classification loss function, to obtain a classification loss;
a Euclidean distance calculation submodule, configured to calculate the Euclidean distance between the face feature vectors of the training sample pair;
an authentication loss calculation submodule, configured to calculate on the Euclidean distance by using the authentication loss function, to obtain an authentication loss;
a training loss calculation submodule, configured to calculate the weighted sum of the classification loss and the authentication loss according to the weight between the classification loss function and the authentication loss function in the dual-signal loss function, to obtain the training loss.
Alternatively, the operation module 404 includes the following submodules (not shown):
a convolution calculation submodule, configured to perform convolution calculation respectively on the positive sample pair input to the convolutional neural network model, to obtain convolution results;
an activation submodule, configured to activate the convolution results of the positive sample pair respectively, to obtain activated convolution results;
a down-sampling submodule, configured to perform down-sampling on the activated convolution results of the positive sample pair, to obtain the face feature vectors of the positive sample pair.
Alternatively, the device 400 further includes the following modules (not shown):
a face sample determining module, configured to determine multiple face samples in the face data set that belong to the same person;
an arranging module, configured to arrange the multiple face samples into a ring structure;
a constructing module, configured to successively construct every two adjacent face samples in the ring structure into a positive sample pair, to obtain the positive sample pairs of the face data set.
Alternatively, the device 400 further includes the following modules (not shown):
a similarity calculation module, configured to calculate the similarity between face samples belonging to different people in the face database;
a merging module, configured to merge the face samples belonging to different people in the face database into face samples belonging to the same person if the similarity is greater than a preset threshold and the user's confirmation of the similarity is received;
an angle rotation module, configured to perform angle rotation on the face samples belonging to the same person in the face database;
a scaling module, configured to obtain multiple zoomed-in or zoomed-out face samples by performing similarity transformation on the angle-rotated face samples;
a supplementing module, configured to supplement the multiple face samples into the face data of the same person's face samples, to obtain the constructed face data set.
Alternatively, the device 400 further includes the following modules (not shown):
a face detection module, configured to perform face detection on multiple images to be tested respectively, to determine face regions of the multiple images to be tested;
a feature point positioning module, configured to perform feature point positioning on the multiple face regions respectively, to determine multiple anchor points of each face feature;
a centre anchor point determining module, configured to determine the centre anchor point of each eye among the multiple anchor points of each face feature;
an alignment module, configured to align the centre anchor points of the eyes in the face region to preset eye coordinate positions respectively;
a similarity transformation module, configured to perform similarity transformation on the aligned face region;
a normalization module, configured to normalize the face region after the similarity transformation;
a conversion module, configured to convert the normalized face region into a grayscale image to form the facial image to be measured.
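The eye-alignment step handled by the alignment and similarity transformation modules can be sketched as solving for the similarity transform (scale, rotation, translation) that maps the detected eye centres onto preset eye coordinates. The target positions below are illustrative, not values from the patent.

```python
import numpy as np

def eye_alignment_transform(left_eye, right_eye,
                            target_left=(30.0, 30.0), target_right=(70.0, 30.0)):
    """Similarity transform p -> m @ p + t mapping detected eye centres
    onto preset eye coordinates (target positions are assumptions)."""
    src = np.asarray(right_eye, float) - np.asarray(left_eye, float)
    dst = np.asarray(target_right, float) - np.asarray(target_left, float)
    scale = np.linalg.norm(dst) / np.linalg.norm(src)
    angle = np.arctan2(dst[1], dst[0]) - np.arctan2(src[1], src[0])
    c, s = scale * np.cos(angle), scale * np.sin(angle)
    m = np.array([[c, -s], [s, c]])          # rotation + uniform scale
    t = np.asarray(target_left, float) - m @ np.asarray(left_eye, float)
    return m, t

# Detected eyes slightly tilted; the transform straightens and rescales them.
m, t = eye_alignment_transform((40.0, 48.0), (80.0, 52.0))
aligned_left = m @ np.array([40.0, 48.0]) + t   # lands on the preset left-eye position
```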
By means of the technical solutions of the above embodiments of the present invention, the embodiments extract facial image features with a deep convolutional network supervised by both the authentication loss and the classification loss. Not only can features robust to variations such as illumination, expression and posture be obtained, but supervised training with the classification loss function also gives the network good classification ability; supervised training with the Euclidean loss function on positive sample pairs (pairs of face samples of the same person) brings the feature distances of the same person's face samples sufficiently close. Then, the collinearity and redundancy of the features are eliminated by the principal component analysis method, and the joint Bayesian method further enhances the discrimination of the features to complete classification. This avoids the non-convergence of the algorithm caused by poorly selected negative samples, and improves computation speed while guaranteeing the accuracy of face authentication.
For the device embodiments, since they are basically similar to the method embodiments, the description is relatively simple; for related parts, refer to the partial description of the method embodiments.
Each embodiment in this specification is described in a progressive manner. Each embodiment focuses on its differences from the other embodiments, and for the identical or similar parts between the embodiments, the embodiments may refer to one another.
Those skilled in the art should understand that the embodiments of the present invention may be provided as a method, a device or a computer program product. Therefore, the embodiments of the present invention may take the form of a complete hardware embodiment, a complete software embodiment, or an embodiment combining software and hardware aspects. Moreover, the embodiments of the present invention may take the form of a computer program product implemented on one or more computer-usable storage media (including but not limited to disk memory, CD-ROM, optical memory and the like) containing computer-usable program code.
The embodiments of the present invention are described with reference to flowcharts and/or block diagrams of the method, terminal device (system) and computer program product according to the embodiments of the present invention. It should be understood that each flow and/or block in the flowcharts and/or block diagrams, and combinations of flows and/or blocks in the flowcharts and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general-purpose computer, a special-purpose computer, an embedded processor or another programmable data processing terminal device to produce a machine, so that the instructions executed by the processor of the computer or the other programmable data processing terminal device produce an apparatus for realizing the functions specified in one or more flows of a flowchart and/or one or more blocks of a block diagram.
These computer program instructions may also be stored in a computer-readable memory capable of guiding a computer or another programmable data processing terminal device to work in a specific way, so that the instructions stored in the computer-readable memory produce a manufactured article including an instruction apparatus which realizes the functions specified in one or more flows of a flowchart and/or one or more blocks of a block diagram.
These computer program instructions may also be loaded onto a computer or another programmable data processing terminal device, so that a series of operation steps are executed on the computer or the other programmable terminal device to produce computer-implemented processing, and the instructions executed on the computer or the other programmable terminal device thereby provide steps for realizing the functions specified in one or more flows of a flowchart and/or one or more blocks of a block diagram.
Although the preferred embodiments of the embodiments of the present invention have been described, those skilled in the art, once knowing the basic inventive concept, can make other changes and modifications to these embodiments. Therefore, the appended claims are intended to be construed as including the preferred embodiments and all changes and modifications falling within the scope of the embodiments of the present invention.
Finally, it should be noted that, herein, relational terms such as first and second are used merely to distinguish one entity or operation from another entity or operation, and do not necessarily require or imply any such actual relation or order between these entities or operations. Moreover, the terms "comprise", "include" or any other variant thereof are intended to cover non-exclusive inclusion, so that a process, method, article or terminal device including a series of elements includes not only those elements but also other elements not explicitly listed, or further includes elements inherent to such a process, method, article or terminal device. Without further limitation, an element defined by the sentence "including a ..." does not exclude the existence of other identical elements in the process, method, article or terminal device including the element.
The face authentication method and the face authentication device provided by the present invention have been introduced in detail above. Specific examples are used herein to set forth the principles and implementations of the present invention, and the description of the above embodiments is only intended to help understand the method of the present invention and its core idea. Meanwhile, for those of ordinary skill in the art, there will be changes in the specific implementations and the application scope according to the idea of the present invention. In summary, the contents of this specification should not be construed as limiting the present invention.

Claims (12)

1. A face authentication method, characterized in that the method includes:
extracting features from multiple facial images to be measured by using a convolutional neural network model supervised-trained in advance by a classification loss and an authentication loss, to obtain multiple face feature vectors of the multiple facial images to be measured;
performing dimensionality reduction on the multiple face feature vectors by using a principal component analysis method, to obtain multiple dimension-reduced face feature vectors;
performing classified calculation on the multiple dimension-reduced face feature vectors by using a joint Bayesian classifier, to determine whether the multiple facial images to be measured are facial images of the same person.
2. The method according to claim 1, characterized in that before the step of extracting features from multiple facial images to be measured by using the convolutional neural network model supervised-trained in advance by the classification loss and the authentication loss to obtain the multiple face feature vectors of the multiple facial images to be measured, the method further includes:
performing an operation on a training sample pair input to the convolutional neural network model, to obtain face feature vectors of the training sample pair, wherein the training sample pair is a positive sample pair and is selected from a face data set including face labels;
calculating on the face feature vectors by using a dual-signal loss function, to obtain a training loss, wherein the dual-signal loss function is a weighted sum of a classification loss function and an authentication loss function;
adjusting convolution parameters of the convolutional neural network model according to the training loss to make the training loss converge, obtaining the convolutional neural network model supervised-trained by the classification loss and the authentication loss.
3. The method according to claim 2, characterized in that the step of calculating on the face feature vectors by using the dual-signal loss function to obtain the training loss includes:
calculating multinomial regression probability values of the face feature vectors of the training sample pair;
calculating on the multinomial regression probability values by using the classification loss function, to obtain a classification loss;
calculating a Euclidean distance between the face feature vectors of the training sample pair;
calculating on the Euclidean distance by using the authentication loss function, to obtain an authentication loss;
calculating a weighted sum of the classification loss and the authentication loss according to a weight between the classification loss function and the authentication loss function in the dual-signal loss function, to obtain the training loss.
4. The method according to claim 2, characterized in that before the step of performing an operation on the training sample pair input to the convolutional neural network model to obtain the face feature vectors of the training sample pair, the method further includes:
determining multiple face samples in the face data set that belong to the same person;
arranging the multiple face samples into a ring structure;
successively constructing every two adjacent face samples in the ring structure into a positive sample pair, to obtain the positive sample pairs of the face data set.
5. The method according to claim 2, characterized in that before the step of performing an operation on the training sample pair input to the convolutional neural network model to obtain the face feature vectors of the training sample pair, the method further includes:
calculating the similarity between face samples belonging to different people in a face database;
if the similarity is greater than a preset threshold and the user's confirmation of the similarity is received, merging the face samples belonging to different people in the face database into face samples belonging to the same person;
performing angle rotation on the face samples belonging to the same person in the face database;
obtaining multiple zoomed-in or zoomed-out face samples by performing similarity transformation on the angle-rotated face samples;
supplementing the multiple face samples into the face data of the same person's face samples, to obtain the constructed face data set.
6. The method according to claim 1, characterized in that before the step of extracting features from multiple facial images to be measured by using the convolutional neural network model supervised-trained in advance by the classification loss and the authentication loss to obtain the multiple face feature vectors of the multiple facial images to be measured, the method further includes:
performing face detection on multiple images to be tested respectively, to determine face regions of the multiple images to be tested;
performing feature point positioning on the multiple face regions respectively, to determine multiple anchor points of each face feature;
determining the centre anchor point of each eye among the multiple anchor points of each face feature;
aligning the centre anchor points of the eyes in the face region to preset eye coordinate positions respectively;
performing similarity transformation on the aligned face region;
normalizing the face region after the similarity transformation;
converting the normalized face region into a grayscale image to form the facial image to be measured.
7. A face authentication device, characterized in that the device includes:
an extraction module, configured to extract features from multiple facial images to be measured by using a convolutional neural network model supervised-trained in advance by a classification loss and an authentication loss, to obtain multiple face feature vectors of the multiple facial images to be measured;
a dimensionality reduction module, configured to perform dimensionality reduction on the multiple face feature vectors by using a principal component analysis method, to obtain multiple dimension-reduced face feature vectors;
a classification module, configured to perform classified calculation on the multiple dimension-reduced face feature vectors by using a joint Bayesian classifier, to determine whether the multiple facial images to be measured are facial images of the same person.
8. The device according to claim 7, characterized in that the device further includes:
an operation module, configured to perform an operation on a training sample pair input to the convolutional neural network model, to obtain face feature vectors of the training sample pair, wherein the training sample pair is a positive sample pair and is selected from a face data set including face labels;
a calculation module, configured to calculate on the face feature vectors by using a dual-signal loss function, to obtain a training loss, wherein the dual-signal loss function is a weighted sum of a classification loss function and an authentication loss function;
an adjusting module, configured to adjust the convolution parameters of the convolutional neural network model according to the training loss to make the training loss converge, obtaining the convolutional neural network model supervised-trained by the classification loss and the authentication loss.
9. The device according to claim 8, characterized in that the calculation module includes:
a multinomial regression calculation submodule, configured to calculate multinomial regression probability values of the face feature vectors of the training sample pair;
a classification loss calculation submodule, configured to calculate on the multinomial regression probability values by using the classification loss function, to obtain a classification loss;
a Euclidean distance calculation submodule, configured to calculate the Euclidean distance between the face feature vectors of the training sample pair;
an authentication loss calculation submodule, configured to calculate on the Euclidean distance by using the authentication loss function, to obtain an authentication loss;
a training loss calculation submodule, configured to calculate the weighted sum of the classification loss and the authentication loss according to the weight between the classification loss function and the authentication loss function in the dual-signal loss function, to obtain the training loss.
10. The device according to claim 8, characterized in that the device further includes:
a face sample determining module, configured to determine multiple face samples in the face data set that belong to the same person;
an arranging module, configured to arrange the multiple face samples into a ring structure;
a constructing module, configured to successively construct every two adjacent face samples in the ring structure into a positive sample pair, to obtain the positive sample pairs of the face data set.
11. The device according to claim 8, characterized in that the device further includes:
a similarity calculation module, configured to calculate the similarity between face samples belonging to different people in the face database;
a merging module, configured to merge the face samples belonging to different people in the face database into face samples belonging to the same person if the similarity is greater than a preset threshold and the user's confirmation of the similarity is received;
an angle rotation module, configured to perform angle rotation on the face samples belonging to the same person in the face database;
a scaling module, configured to obtain multiple zoomed-in or zoomed-out face samples by performing similarity transformation on the angle-rotated face samples;
a supplementing module, configured to supplement the multiple face samples into the face data of the same person's face samples, to obtain the constructed face data set.
12. The device according to claim 7, characterized in that the device further includes:
a face detection module, configured to perform face detection on multiple images to be tested respectively, to determine face regions of the multiple images to be tested;
a feature point positioning module, configured to perform feature point positioning on the multiple face regions respectively, to determine multiple anchor points of each face feature;
a centre anchor point determining module, configured to determine the centre anchor point of each eye among the multiple anchor points of each face feature;
an alignment module, configured to align the centre anchor points of the eyes in the face region to preset eye coordinate positions respectively;
a similarity transformation module, configured to perform similarity transformation on the aligned face region;
a normalization module, configured to normalize the face region after the similarity transformation;
a conversion module, configured to convert the normalized face region into a grayscale image to form the facial image to be measured.
CN201610852196.6A 2016-09-26 2016-09-26 Face authentication method and device Pending CN107871107A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201610852196.6A CN107871107A (en) 2016-09-26 2016-09-26 Face authentication method and device


Publications (1)

Publication Number Publication Date
CN107871107A true CN107871107A (en) 2018-04-03

Family

ID=61751891

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201610852196.6A Pending CN107871107A (en) 2016-09-26 2016-09-26 Face authentication method and device

Country Status (1)

Country Link
CN (1) CN107871107A (en)

Patent Citations (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105550658A (en) * 2015-12-24 2016-05-04 蔡叶荷 Face comparison method based on high-dimensional LBP (Local Binary Patterns) and convolutional neural network feature fusion

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Yi Sun et al., "Deep Learning Face Representation by Joint Identification-Verification", arXiv:1406.4773v1 *

Cited By (14)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US11232286B2 (en) 2018-06-01 2022-01-25 Huawei Technologies Co., Ltd. Method and apparatus for generating face rotation image
WO2019227479A1 (en) * 2018-06-01 2019-12-05 华为技术有限公司 Method and apparatus for generating face rotation image
CN110942081A (en) * 2018-09-25 2020-03-31 北京嘀嘀无限科技发展有限公司 Image processing method and device, electronic equipment and readable storage medium
CN110942081B (en) * 2018-09-25 2023-08-18 北京嘀嘀无限科技发展有限公司 Image processing method, device, electronic equipment and readable storage medium
CN111615704A (en) * 2018-10-16 2020-09-01 华为技术有限公司 Object identification method and terminal equipment
CN109872342A (en) * 2019-02-01 2019-06-11 北京清帆科技有限公司 A kind of method for tracking target under special scenes
CN110414347A (en) * 2019-06-26 2019-11-05 北京迈格威科技有限公司 Face verification method, apparatus, equipment and storage medium
CN110889435A (en) * 2019-11-04 2020-03-17 国网河北省电力有限公司检修分公司 Insulator evaluation classification method and device based on infrared image
CN111177469A (en) * 2019-12-20 2020-05-19 国久大数据有限公司 Face retrieval method and face retrieval device
CN111814697A (en) * 2020-07-13 2020-10-23 伊沃人工智能技术(江苏)有限公司 Real-time face recognition method and system and electronic equipment
CN111814697B (en) * 2020-07-13 2024-02-13 伊沃人工智能技术(江苏)有限公司 Real-time face recognition method and system and electronic equipment
CN112597872A (en) * 2020-12-18 2021-04-02 深圳地平线机器人科技有限公司 Gaze angle estimation method and device, storage medium, and electronic device
CN112613488A (en) * 2021-01-07 2021-04-06 上海明略人工智能(集团)有限公司 Face recognition method and device, storage medium and electronic equipment
CN112613488B (en) * 2021-01-07 2024-04-05 上海明略人工智能(集团)有限公司 Face recognition method and device, storage medium and electronic equipment

Similar Documents

Publication Publication Date Title
CN107871107A (en) Face authentication method and device
Kim et al. Efficient facial expression recognition algorithm based on hierarchical deep neural network structure
CN107194341B (en) Face recognition method and system based on fusion of Maxout multi-convolution neural network
Boughrara et al. Facial expression recognition based on a mlp neural network using constructive training algorithm
CN109815801A (en) Face identification method and device based on deep learning
Ali et al. Boosted NNE collections for multicultural facial expression recognition
CN110532920A (en) Smallest number data set face identification method based on FaceNet method
CN105868716B (en) A kind of face identification method based on facial geometric feature
CN107871100A (en) The training method and device of faceform, face authentication method and device
CN107463920A (en) A kind of face identification method for eliminating partial occlusion thing and influenceing
CN105550657B (en) Improvement SIFT face feature extraction method based on key point
CN109033938A (en) A kind of face identification method based on ga s safety degree Fusion Features
CN109426781A (en) Construction method, face identification method, device and the equipment of face recognition database
CN109086660A (en) Training method, equipment and the storage medium of multi-task learning depth network
CN106295694A (en) A kind of face identification method of iteration weight set of constraints rarefaction representation classification
CN109033953A (en) Training method, equipment and the storage medium of multi-task learning depth network
CN112464865A (en) Facial expression recognition method based on pixel and geometric mixed features
CN107871105A (en) Face authentication method and device
CN111126482A (en) Remote sensing image automatic classification method based on multi-classifier cascade model
Jang et al. Face detection using quantum-inspired evolutionary algorithm
CN109886153A (en) A kind of real-time face detection method based on depth convolutional neural networks
CN109101869A (en) Test method, equipment and the storage medium of multi-task learning depth network
CN107818299A (en) Face recognition algorithms based on fusion HOG features and depth belief network
CN113011243A (en) Facial expression analysis method based on capsule network
CN105608443B (en) A kind of face identification method of multiple features description and local decision weighting

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication

Application publication date: 20180403