CN105138973B - Method and apparatus for face authentication

Method and apparatus for face authentication

Info

Publication number: CN105138973B (application CN201510490244.7A)
Authority: CN (China)
Prior art keywords: feature vector, facial image, vector, mapping matrix, linear
Legal status: Active
Application number: CN201510490244.7A
Other languages: Chinese (zh)
Other versions: CN105138973A
Inventors: 郇淑雯, 毛秀萍, 张伟琳, 朱和贵
Current assignee: Beijing Eyes Intelligent Technology Co ltd; Beijing Eyecool Technology Co Ltd
Original assignee: Beijing Techshino Technology Co Ltd
Application filed by Beijing Techshino Technology Co Ltd
Priority: CN201510490244.7A
Earlier publication: CN105138973A; granted publication: CN105138973B

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00: Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10: Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16: Human faces, e.g. facial parts, sketches or expressions

Abstract

The invention discloses a method and apparatus for face authentication, belonging to the field of biometric recognition. The method includes: extracting feature vectors at multiple levels from a face image to be authenticated and a face image template using a multi-level deep convolutional network jointly trained in advance with a multi-level classification network; mapping the feature vectors of the multiple levels to unified-dimension feature vectors through unified-dimension linear mapping matrices; concatenating the unified-dimension feature vectors into a joint feature vector; reducing the dimension of the joint feature vector through a linear dimension-reduction mapping matrix to obtain a comprehensive feature vector; and comparing, by linear discriminant analysis using an absolute-value-normalized cosine value, the comprehensive feature vector of the face image to be authenticated with that of the face image template. Compared with the prior art, the face authentication method of the invention has strong anti-interference capability, good scalability and high authentication accuracy.

Description

Method and apparatus for face authentication
Technical field
The present invention relates to the field of biometric recognition, and in particular to a method and apparatus for face authentication.
Background technology
Face authentication is a kind of form of bio-identification, by effectively characterizing face, obtains the spy of two width face pictures Sign, using sorting algorithm come whether judge this two photos be same person.Generally it is previously stored in face identification device One width facial image, as facial image template;In certification, one width facial image of shooting is carried as facial image to be certified The feature for taking two images, using sorting algorithm come whether judge this two photos be same person.
Extraction feature method be:Engineer goes out a feature vector, by various algorithms take out as defined in feature to Amount, the face authentication method such as based on geometric properties, the face authentication method based on subspace, the face based on signal processing are recognized Card method etc., but this method is easy to influence result by factors such as illumination, expressions, poor anti jamming capability, and The feature vector that engineer goes out mostly is the poor expandability based in the case of specific.
Recognition of face based on depth network and authentication techniques can learn and extract feature automatically, but general depth Network has a gradient disperse, and to the processing of each hierarchy characteristic and understands insufficient, is not enough to merely with high-level characteristic Fully describe image.
Summary of the invention
The present invention provides a method and apparatus for face authentication; the method has strong anti-interference capability, good scalability and high authentication accuracy.
In order to solve the above technical problems, the present invention provides the following technical solutions:
A method of face authentication, including:
extracting, from a face image to be authenticated and a face image template, feature vectors of multiple levels using a multi-level deep convolutional network jointly trained in advance with a multi-level classification network;
mapping the feature vectors of the multiple levels, through unified-dimension linear mapping matrices, into unified-dimension feature vectors;
concatenating the unified-dimension feature vectors into a joint feature vector;
reducing the dimension of the joint feature vector through a linear dimension-reduction mapping matrix to obtain a comprehensive feature vector;
comparing, by linear discriminant analysis using an absolute-value-normalized cosine value, the comprehensive feature vector obtained for the face image to be authenticated with the comprehensive feature vector of the face image template.
A device for face authentication, including:
a first extraction module, configured to extract feature vectors of multiple levels from a face image to be authenticated and a face image template using a multi-level deep convolutional network jointly trained in advance with a multi-level classification network;
a first mapping module, configured to map the feature vectors of the multiple levels, through unified-dimension linear mapping matrices, into unified-dimension feature vectors;
a first concatenation module, configured to concatenate the unified-dimension feature vectors into a joint feature vector;
a second mapping module, configured to reduce the dimension of the joint feature vector through a linear dimension-reduction mapping matrix to obtain a comprehensive feature vector;
a first comparison module, configured to compare, by linear discriminant analysis using an absolute-value-normalized cosine value, the comprehensive feature vector obtained for the face image to be authenticated with the comprehensive feature vector of the face image template.
The invention has the following advantages:
In the face authentication method of the present invention, a multi-level deep convolutional network jointly trained in advance with a multi-level classification network first extracts feature vectors of multiple levels from the face image to be authenticated and the face image template; the feature vectors of the multiple levels are then mapped into unified-dimension feature vectors through unified-dimension linear mapping matrices; the unified-dimension feature vectors are concatenated into a joint feature vector, which is reduced in dimension by a linear dimension-reduction mapping matrix to obtain a comprehensive feature vector; finally, linear discriminant analysis using an absolute-value-normalized cosine value compares the comprehensive feature vector of the face image to be authenticated with that of the face image template.
Compared with the prior art, the present invention learns and extracts features automatically through the multi-level deep convolutional network. Compared with the hand-designed feature vectors of the prior art, its anti-interference capability is strong, its scalability is good, and its authentication accuracy is high.
The multi-level deep convolutional network of the present invention is obtained by joint training with a multi-level classification network, which avoids the gradient dispersion problem and yields high authentication accuracy.
Moreover, fusing the feature vectors of multiple levels increases the richness of the image features and compensates for the defects of ordinary deep networks, which process the features of each level insufficiently and cannot adequately describe an image with high-level features alone; this further improves authentication accuracy.
The inventors also found that traditional comparison methods, especially the cosine similarity method, ignore differences in vector norm, so the resulting difference description is incomplete and the comparison accuracy is reduced. The present invention uses linear discriminant analysis to compare multiple difference features, including the absolute-value-normalized cosine value, further improving authentication accuracy.
Therefore, the face authentication method of the present invention has strong anti-interference capability, good scalability and high authentication accuracy, avoids the gradient dispersion problem, and makes up for the inadequacy of describing an image with high-level features alone.
Description of the drawings
Fig. 1 is a flow chart of the face authentication method of the present invention;
Fig. 2 is a schematic diagram of the face authentication device of the present invention;
Fig. 3 is a schematic diagram of image preprocessing in the present invention;
Fig. 4 is a schematic diagram of training the multi-level deep convolutional network and the classification network in the present invention;
Fig. 5 is a structural schematic diagram of the basic convolutional network in the present invention;
Fig. 6 is a schematic diagram of the multi-level deep convolutional network in the present invention;
Fig. 7 is a schematic diagram of the classification network in the present invention;
Fig. 8 is a schematic diagram of the down-sampling operation in the present invention.
Detailed description of the embodiments
To make the technical problems to be solved, the technical solutions and the advantages of the present invention clearer, the invention is described in detail below with reference to the accompanying drawings and specific embodiments.
In one aspect, the present invention provides a method of face authentication which, as shown in Fig. 1, includes:
Step S101: extracting feature vectors of multiple levels from a face image to be authenticated and a face image template using a multi-level deep convolutional network jointly trained in advance with a multi-level classification network.
The multi-level deep convolutional network contains two or more convolutional networks, each including convolution, activation and down-sampling operations; the order and number of these operations are not fixed and are determined according to the actual situation. Each convolutional network of the present invention extracts one feature vector, which may be denoted fea1, fea2, fea3, ... (only one group of multi-level feature vectors is listed here, i.e. those of the face image to be authenticated or of the face image template; the formulas below are likewise written for a single image). The input of the first convolutional network is the face image to be authenticated or the face image template; the input of each subsequent convolutional network is the feature map produced by the previous one.
Ordinary deep networks suffer from gradient dispersion; the multi-level deep convolutional network of the present invention is obtained by joint training with a multi-level classification network, which avoids this problem.
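A minimal PyTorch sketch of such a multi-level deep convolutional network is given below. The number of levels, channel counts and kernel sizes are illustrative assumptions, not the patent's trained configuration; what it shows is the structure described above: each level is a basic convolutional network (same-padding convolution, ReLU activation, maximum-value down-sampling), the input of each level is the feature map of the previous level, and every level's output is vectorized into a feature vector fea_i.

```python
# A minimal sketch (not the patent's exact architecture): a multi-level deep
# convolutional network whose every level is a basic convolutional network
# (same-padding convolution -> ReLU -> maximum-value down-sampling) and whose
# per-level feature maps are vectorized into fea_1, fea_2, ... .
import torch
import torch.nn as nn

class BasicConvBlock(nn.Module):
    def __init__(self, in_ch, out_ch):
        super().__init__()
        self.conv = nn.Conv2d(in_ch, out_ch, kernel_size=3, padding=1)  # "same" convolution
        self.act = nn.ReLU()
        self.pool = nn.MaxPool2d(2)          # maximum-value down-sampling

    def forward(self, x):
        return self.pool(self.act(self.conv(x)))

class MultiLevelDeepConvNet(nn.Module):
    """Returns the vectorized feature of every level: [fea_1, fea_2, fea_3, fea_4]."""
    def __init__(self, channels=(1, 16, 32, 64, 128)):
        super().__init__()
        self.levels = nn.ModuleList(
            BasicConvBlock(channels[i], channels[i + 1]) for i in range(len(channels) - 1)
        )

    def forward(self, x):
        feas = []
        for level in self.levels:
            x = level(x)                      # input of level i+1 is the feature map of level i
            feas.append(torch.flatten(x, start_dim=1))
        return feas

# Example: a 100x100 gray-level face image (batch of 1).
feas = MultiLevelDeepConvNet()(torch.randn(1, 1, 100, 100))
print([f.shape for f in feas])
```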
Step S102: mapping the feature vectors of the multiple levels into unified-dimension feature vectors through unified-dimension linear mapping matrices. The unified-dimension linear mapping matrices are obtained by training in advance and may be denoted W_1, W_2, W_3, ...; the unified-dimension feature vectors may be denoted f_1, f_2, f_3, ....
Step S103: concatenating the unified-dimension feature vectors into a joint feature vector, which may be denoted feature_merge.
Step S104: reducing the dimension of the joint feature vector through a linear dimension-reduction mapping matrix to obtain a comprehensive feature vector. The linear dimension-reduction mapping matrix is obtained by training in advance and may be denoted W_T; the comprehensive feature vector may be denoted f_T.
Step S105: comparing, by linear discriminant analysis using an absolute-value-normalized cosine value, the comprehensive feature vector of the face image to be authenticated with that of the face image template.
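The following numpy sketch walks through steps S102 to S104 for one probe/template pair, assuming the per-level feature vectors have already been extracted in step S101. All dimensions and matrices here are random stand-ins for the trained parameters W_1..W_L and W_T, and the step S105 comparison is only stubbed with a plain cosine similarity; the patent's full LDA-based comparison is sketched further below.

```python
# A minimal numpy sketch of steps S102-S104, with random stand-ins for the
# trained unified-dimension maps W_1..W_L and the dimension-reduction map W_T.
import numpy as np

rng = np.random.default_rng(0)
n_f, n_T = 256, 128                              # unified / comprehensive dimensions (assumed)

def comprehensive_feature(feas, W_unify, W_T):
    f_levels = [W @ fea for W, fea in zip(W_unify, feas)]   # step S102: map every level to dimension n_f
    feature_merge = np.concatenate(f_levels)                # step S103: concatenation -> joint feature
    return W_T @ feature_merge                              # step S104: linear dimension reduction -> f_T

# Stand-in per-level features for the probe image and the template image.
dims = [4000, 2000, 900, 400]
feas_probe    = [rng.standard_normal(d) for d in dims]
feas_template = [rng.standard_normal(d) for d in dims]

W_unify = [rng.standard_normal((n_f, d)) * 0.01 for d in dims]
W_T     = rng.standard_normal((n_T, n_f * len(dims))) * 0.01

fT1 = comprehensive_feature(feas_probe, W_unify, W_T)
fT2 = comprehensive_feature(feas_template, W_unify, W_T)

# Step S105 (stub): the patent fuses several difference measures with LDA;
# here only the plain cosine similarity of the comprehensive features is shown.
cosine = fT1 @ fT2 / (np.linalg.norm(fT1) * np.linalg.norm(fT2))
print("cosine similarity of comprehensive features:", cosine)
```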
In the method for the face authentication of the present invention, first using the multi-layer for first passing through the training of multistratum classification network association in advance Depth convolutional network extracts the feature vector of multiple levels of facial image and facial image template to be certified, then by multiple layers The feature vector of grade passes sequentially through unified dimensional Linear Mapping matrix and is mapped as unified dimensional feature vector, then unified dimensional is special Sign vector is connected into union feature vector, and union feature vector is carried out dimensionality reduction by linear dimensionality reduction mapping matrix and maps to obtain Multi-feature vector normalizes cosine value, to obtained face figure to be certified finally by linear discriminant analysis using absolute value Certification is compared in the multi-feature vector of picture and the multi-feature vector of facial image template.
Compared with prior art, the present invention learns and extracts feature automatically by multi-layer depth convolutional network, and existing Engineer goes out a feature vector and compares in technology, and anti-interference energy is strong, and scalability is good, and certification accuracy rate is high.
The multi-layer depth convolutional network of the present invention carries out joint training by multistratum classification network and obtains, and avoids gradient Disperse problem, certification accuracy rate are high.
And the feature vector of multiple levels is merged, increases characteristics of image richness, compensates for general depth network Defect insufficient, that description image is not sufficient enough to merely with high-level characteristic is handled to each hierarchy characteristic;It further improves and recognizes Demonstrate,prove accuracy rate.
Inventor also found that it is long to have ignored vector field homoemorphism for traditional comparison authentication method, especially cosine similarity method Difference, to make a difference, description is not comprehensive, reduces the accuracy rate for comparing certification;The present invention uses linear discriminant analysis, right Multiple difference characteristics including absolute value normalization cosine value are compared, and it is accurate further to improve certification Rate.
Therefore the method strong antijamming capability of the face authentication of the present invention, scalability is good, and certification accuracy rate is high, and avoids Gradient disperse problem makes up the defect that description image is not sufficient enough to using high-level characteristic.
As an improvement of the face authentication method of the present invention, before step S101 the method further includes:
Step S100: preprocessing the face image to be authenticated and the face image template, the preprocessing including feature point location, image correction and normalization. In practice, the face image template may already have been preprocessed, in which case this step can be omitted for it.
The present invention uses a face detection algorithm based on cascaded Adaboost to detect faces in the image, then uses an SDM-based facial feature point location algorithm to locate the feature points of the detected face, and corrects and normalizes the face by scaling, rotating and translating the image for alignment, as shown in Fig. 3.
The present invention uses a simple gray-scale normalization as preprocessing; its main purpose is to let the network process continuous data and avoid handling large discrete gray values, thereby avoiding abnormal situations.
Preprocessing the face images facilitates the subsequent authentication process and avoids the influence of abnormal pixels on the authentication result.
As another improvement of the face authentication method of the present invention, each convolutional network includes a convolution operation, an activation operation and a down-sampling operation, and the feature vector of each level is computed as follows:
Step S1011: performing a convolution operation on the face image to be authenticated and the face image template using convolution kernels to obtain convolution feature maps; the convolution operation is a 'same' convolution;
The present invention uses the 'same'-form convolution, zero-padding the input image during the operation. The feature map obtained by a 'same' convolution has the same size as the input image.
Step S1012: activating the convolution feature maps with an activation function to obtain activation feature maps; the activation function is the ReLU activation function.
Step S1013: performing a down-sampling operation on the activation feature maps using a sampling function to obtain sampling feature maps; the down-sampling operation is maximum-value sampling;
The present invention uses maximum-value sampling, which takes the maximum of the element values within a sampling block as the feature of that block. In image processing, maximum-value sampling extracts the texture information of the image and, to a certain extent, maintains certain invariances of the image such as rotation, translation and scaling; in addition, statistical experiments show that, compared with average sampling, maximum-value sampling is insensitive to changes in the data distribution, so its feature extraction is relatively stable.
Step S1014: repeating the above steps on the obtained sampling feature maps to obtain new sampling feature maps, and repeating this several times;
Step S1015: vectorizing all of the obtained sampling feature maps to obtain the feature vector of each level, i.e. all sampling feature maps obtained at each step are formed into one vector.
The present invention can thus extract rich and stable feature vectors that adequately describe the face image, increasing authentication accuracy.
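A small numpy/scipy sketch of one level's feature computation (steps S1011 to S1015) follows: a 'same'-form convolution with zero padding, ReLU activation, 2x2 maximum-value down-sampling, and vectorization. The kernel values, kernel size and sampling block size are illustrative, not the trained ones.

```python
# One level of the per-level feature computation, with illustrative parameters.
import numpy as np
from scipy.signal import convolve2d

def relu(x):
    return np.maximum(0.0, x)

def max_pool(x, s=2):
    h, w = x.shape[0] // s * s, x.shape[1] // s * s
    blocks = x[:h, :w].reshape(h // s, s, w // s, s)
    return blocks.max(axis=(1, 3))                  # maximum of every s x s sampling block

image  = np.random.rand(100, 100)                   # preprocessed face image (stand-in)
kernel = np.random.randn(3, 3) * 0.1                # one convolution kernel c_k (stand-in)

conv_map   = convolve2d(image, kernel, mode="same") # 'same' convolution: output size == input size
act_map    = relu(conv_map)                         # ReLU activation feature map
sample_map = max_pool(act_map, s=2)                 # 50 x 50 sampling feature map

fea = sample_map.reshape(-1)                        # vectorization of this level's sampling map
print(conv_map.shape, sample_map.shape, fea.shape)
```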
As another improvement of the face authentication method of the present invention, the multi-level deep convolutional network is obtained by joint training with softmax classification networks, which includes:
during training, a face image sample database is required first; feature vectors of multiple levels are then extracted from each face image sample using the initialized multi-level deep convolutional network. This is the same operation as step S101 above, except that it is now a training process rather than an authentication process, and the parameters of the multi-level deep convolutional network take their initial values;
mapping the feature vectors of the multiple levels, through unified-dimension linear mapping matrices, into unified-dimension feature vectors of the same dimension;
mapping each unified-dimension feature vector with a linear mapping matrix in a softmax classification network to obtain a mapped vector; the linear mapping matrices take their initial values at this point;
activating the mapped vectors with the softmax function to obtain network output value vectors;
taking the network output value vectors and the label data of the face image samples as inputs and computing the network errors with the cross-entropy loss function;
concatenating the unified-dimension feature vectors into a joint feature vector;
reducing the dimension of the joint feature vector through the linear dimension-reduction mapping matrix to obtain a comprehensive feature vector;
assigning weights to the network errors and computing the update gradients of the linear mapping matrices, the unified-dimension linear mapping matrices, the linear dimension-reduction mapping matrix and the convolution kernels;
iteratively updating the linear mapping matrices, the unified-dimension linear mapping matrices, the linear dimension-reduction mapping matrix and the convolution kernels using their update gradients;
judging whether the network errors and the number of iterations meet the requirements; if so, ending; otherwise, returning to the step of extracting feature vectors of multiple levels from the face image samples using the initialized multi-level deep convolutional network.
The network error meeting the requirement means that the network error value is minimal (or sufficiently small); the parameters of the multi-level deep convolutional network and the softmax classification networks at that point (the linear mapping matrices, unified-dimension linear mapping matrices, linear dimension-reduction mapping matrix and convolution kernels) constitute the trained multi-level deep convolutional network and softmax classification networks. The number of iterations meeting the requirement means that it reaches a set value.
By joint training with softmax classification networks, the present invention further avoids the gradient dispersion problem, and the flexibility of network learning can be further increased by weighting the classification network errors.
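A hedged PyTorch sketch of this joint training idea follows: every level's unified-dimension feature f_i feeds its own softmax classification head, the comprehensive feature f_T feeds one more head, all cross-entropy errors are weighted and summed, and back-propagation updates the convolution kernels, the unified-dimension maps W_i, the dimension-reduction map W_T and the classifier maps W_id together. It reuses the MultiLevelDeepConvNet class from the earlier sketch; the layer sizes, error weights, optimizer and weight-decay (2-norm regularization) settings are assumptions, not the patent's values.

```python
# Joint training sketch with weighted per-level softmax classification errors.
import torch
import torch.nn as nn
import torch.nn.functional as F

n_f, n_T, n_classes = 256, 128, 1000            # assumed dimensions / identity count
net = MultiLevelDeepConvNet()                   # from the earlier sketch
fea_dims = [40000, 20000, 9216, 4608]           # per-level vectorized dimensions for 100x100 input

W_unify = nn.ModuleList(nn.Linear(d, n_f) for d in fea_dims)          # unified-dimension linear maps
W_T     = nn.Linear(n_f * len(fea_dims), n_T)                         # linear dimension-reduction map
heads   = nn.ModuleList(nn.Linear(n_f, n_classes) for _ in fea_dims)  # per-level classifier maps W_id
head_T  = nn.Linear(n_T, n_classes)                                   # head on the comprehensive feature

params = list(net.parameters()) + list(W_unify.parameters()) + \
         list(W_T.parameters()) + list(heads.parameters()) + list(head_T.parameters())
optim  = torch.optim.SGD(params, lr=0.01, weight_decay=1e-4)          # weight_decay ~ 2-norm regularization

weights = [0.2, 0.2, 0.2, 0.2, 1.0]             # assumed weights for the per-level and f_T errors

def train_step(images, labels):
    feas = net(images)                                      # per-level feature vectors
    fs   = [W(fea) for W, fea in zip(W_unify, feas)]        # unified-dimension features f_i
    f_T  = W_T(torch.cat(fs, dim=1))                        # comprehensive feature
    losses = [F.cross_entropy(h(f), labels) for h, f in zip(heads, fs)]
    losses.append(F.cross_entropy(head_T(f_T), labels))
    loss = sum(w * l for w, l in zip(weights, losses))      # weighted sum of the network errors
    optim.zero_grad(); loss.backward(); optim.step()        # iterative update of all parameters
    return loss.item()

print(train_step(torch.randn(4, 1, 100, 100), torch.randint(0, n_classes, (4,))))
```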
As another improvement of the face authentication method of the present invention, step S105 includes:
Step S1051: performing a cosine similarity operation with the comprehensive feature vector of the face image to be authenticated and the comprehensive feature vector of the face image template as inputs, obtaining a cosine similarity;
Step S1052: performing an absolute-value-normalized cosine operation with the comprehensive feature vector of the face image to be authenticated and the comprehensive feature vector of the face image template as inputs, obtaining an absolute-value-normalized cosine value;
Step S1053: taking the norms of the comprehensive feature vector of the face image to be authenticated and of the comprehensive feature vector of the face image template, obtaining a first norm and a second norm;
Step S1054: composing the cosine similarity, the absolute-value-normalized cosine value, the first norm and the second norm into a four-dimensional difference vector;
Step S1055: mapping the difference vector with a difference vector mapping matrix to obtain a one-dimensional value, which serves as the comparison score;
Step S1056: comparing the comparison score with a comparison threshold; if the comparison score is greater than the comparison threshold, the face authentication passes.
The inventors found that traditional comparison methods, especially the cosine similarity method, ignore differences in vector norm, so the resulting difference description is incomplete and the comparison accuracy is reduced; the absolute-value-normalized cosine value is sensitive to differences in vector norm and can compensate for the incomplete difference description caused by the cosine similarity ignoring the norm difference.
Therefore, the present invention combines the cosine similarity, the absolute-value-normalized cosine value and the two feature norms into a four-dimensional difference vector and performs linear discriminant analysis, further improving authentication accuracy.
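A minimal numpy sketch of the comparison stage (steps S1051 to S1056) follows. The exact definition of the absolute-value-normalized cosine cosAN is given by the patent's formula (39), which is not reproduced in this text, so it is passed in here as a user-supplied function; the LDA weights and the threshold are the trained values, shown as random placeholders.

```python
# Comparison of two comprehensive feature vectors via the four-dimensional
# difference vector and an LDA mapping to a one-dimensional comparison score.
import numpy as np

def compare(fT1, fT2, cos_an, w_lda, threshold):
    cos_sim = fT1 @ fT2 / (np.linalg.norm(fT1) * np.linalg.norm(fT2))  # cosine similarity
    f_diff = np.array([cos_sim,
                       cos_an(fT1, fT2),            # absolute-value-normalized cosine (formula (39))
                       np.linalg.norm(fT1),         # first norm
                       np.linalg.norm(fT2)])        # second norm
    score = float(w_lda @ f_diff)                   # LDA fusion to a one-dimensional comparison score
    return score, score > threshold                 # authentication passes if score exceeds threshold

# Placeholder usage with random stand-ins.
rng = np.random.default_rng(1)
fT1, fT2 = rng.standard_normal(128), rng.standard_normal(128)
score, accepted = compare(
    fT1, fT2,
    cos_an=lambda a, b: a @ b / (np.abs(a).sum() * np.abs(b).sum()),  # assumed stand-in, NOT formula (39)
    w_lda=rng.standard_normal(4),
    threshold=0.0)
print(score, accepted)
```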
In another aspect, the present invention provides a device for face authentication which, as shown in Fig. 2, includes:
a first extraction module 11, configured to extract feature vectors of multiple levels from a face image to be authenticated and a face image template using a multi-level deep convolutional network jointly trained in advance with a multi-level classification network;
a first mapping module 12, configured to map the feature vectors of the multiple levels, through unified-dimension linear mapping matrices, into unified-dimension feature vectors;
a first concatenation module 13, configured to concatenate the unified-dimension feature vectors into a joint feature vector;
a second mapping module 14, configured to reduce the dimension of the joint feature vector through a linear dimension-reduction mapping matrix to obtain a comprehensive feature vector;
a first comparison module 15, configured to compare, by linear discriminant analysis using an absolute-value-normalized cosine value, the comprehensive feature vector of the face image to be authenticated with the comprehensive feature vector of the face image template.
The face authentication device of the present invention has strong anti-interference capability, good scalability and high authentication accuracy, avoids the gradient dispersion problem, and makes up for the inadequacy of describing an image with high-level features alone.
As an improvement of the face authentication device of the present invention, the device further includes, before the first extraction module:
a preprocessing module, configured to preprocess the face image to be authenticated and the face image template, the preprocessing including feature point location, image correction and normalization.
Preprocessing the face images facilitates the subsequent authentication process and avoids the influence of abnormal pixels on the authentication result.
As another improvement of the face authentication device of the present invention, the feature vector of each level is computed by the following units:
a convolution unit, configured to perform a convolution operation on the face image to be authenticated and the face image template using convolution kernels to obtain convolution feature maps, the convolution operation being a 'same' convolution;
an activation unit, configured to activate the convolution feature maps with an activation function to obtain activation feature maps, the activation function being the ReLU activation function;
a sampling unit, configured to perform a down-sampling operation on the activation feature maps using a sampling function to obtain sampling feature maps, the down-sampling operation being maximum-value sampling;
a cycling unit, configured to repeat the above steps on the obtained sampling feature maps to obtain new sampling feature maps, repeating this several times;
a first vector unit, configured to vectorize all of the obtained sampling feature maps to obtain the feature vector of each level.
The present invention can thus extract rich and stable feature vectors that adequately describe the face image, increasing authentication accuracy.
As another improvement of the face authentication device of the present invention, the multi-level deep convolutional network is obtained by joint training with softmax classification networks, which includes:
a second extraction module, configured to extract feature vectors of multiple levels from face image samples using the initialized multi-level deep convolutional network;
a third mapping module, configured to map the feature vectors of the multiple levels, through unified-dimension linear mapping matrices, into unified-dimension feature vectors of the same dimension;
a fourth mapping module, configured to map each unified-dimension feature vector with a linear mapping matrix in a softmax classification network to obtain a mapped vector;
an activation module, configured to activate the mapped vectors with the softmax function to obtain network output value vectors;
a first computing module, configured to take the network output value vectors and the label data of the face image samples as inputs and compute the network errors with the cross-entropy loss function;
a second concatenation module, configured to concatenate the unified-dimension feature vectors into a joint feature vector;
a fifth mapping module, configured to reduce the dimension of the joint feature vector through the linear dimension-reduction mapping matrix to obtain a comprehensive feature vector;
a second computing module, configured to assign weights to the network errors and compute the update gradients of the linear mapping matrices, unified-dimension linear mapping matrices, linear dimension-reduction mapping matrix and convolution kernels;
an update module, configured to iteratively update the linear mapping matrices, unified-dimension linear mapping matrices, linear dimension-reduction mapping matrix and convolution kernels using their update gradients;
a judgment module, configured to judge whether the network errors and the number of iterations meet the requirements; if so, ending; otherwise, returning to the second extraction module.
By joint training with softmax classification networks, the present invention further avoids the gradient dispersion problem, and the flexibility of network learning can be further increased by weighting the classification network errors.
As another improvement of the face authentication device of the present invention, the first comparison module includes:
a first computing unit, configured to perform a cosine similarity operation with the comprehensive feature vector of the face image to be authenticated and the comprehensive feature vector of the face image template as inputs, obtaining a cosine similarity;
a second computing unit, configured to perform an absolute-value-normalized cosine operation with the comprehensive feature vector of the face image to be authenticated and the comprehensive feature vector of the face image template as inputs, obtaining an absolute-value-normalized cosine value;
a third computing unit, configured to take the norms of the comprehensive feature vector of the face image to be authenticated and of the comprehensive feature vector of the face image template, obtaining a first norm and a second norm;
a second vector unit, configured to compose the cosine similarity, the absolute-value-normalized cosine value, the first norm and the second norm into a four-dimensional difference vector;
a mapping unit, configured to map the difference vector with the difference vector mapping matrix to obtain a one-dimensional value, which serves as the comparison score;
a comparison unit, configured to compare the comparison score with a comparison threshold; if the comparison score is greater than the comparison threshold, the face authentication passes.
The present invention combines the cosine similarity, the absolute-value-normalized cosine value and the two feature norms into a four-dimensional difference vector and performs linear discriminant analysis, further improving authentication accuracy.
The present invention is described below with a specific embodiment.
The present invention requires training before authentication; the specific flow is shown in Fig. 4, and the training process is as follows.
The present invention first proposes a new convolutional network for extracting image feature vectors, the multi-level-feature-fusion weighted-accumulation deep convolutional network (the multi-level deep convolutional network), and then performs feature learning on images using softmax networks and the learning process shown in Fig. 4.
The network learning procedure mainly comprises the forward computation of the network and the back-propagation of the network error.
(A) Convolutional network forward computation
The basic convolutional network is shown in Fig. 5 (note that Fig. 5 is an example of a convolutional network, not the convolutional network used by the present invention, which is: convolution, activation, down-sampling, ...). It comprises a convolution operation, an activation operation and a down-sampling operation; a vectorization operation is usually also needed for convenience of subsequent computation. In Fig. 6, every layer of the convolutional network represents one basic convolutional network, and the order and number of the operations it contains can be set according to the particular problem.
There are different forms of convolution; the present invention uses the 'same'-form convolution and zero-pads the input image during the operation. The feature map obtained by a 'same' convolution has the same size as the input image.
According to the convolution formula, when the input data is a two-dimensional image the elements of the convolution feature map are computed as in formula (2):
where c_k denotes the k-th convolution kernel of the convolution operation, c_k(i, j) denotes the element in row i, column j of c_k, s_c denotes the side length of the convolution kernel, M_Ck denotes the convolution feature map obtained by convolving the input image I with c_k, M_Ck(m, n) denotes the element in row m, column n of M_Ck, neighborhood(m, n, s_c) denotes the neighborhood of side length s_c centered at (m, n), and the operator in formula (2) denotes the 'same'-form convolution.
When the input data is a feature map obtained by preceding operations, the elements of the convolution feature map are computed as in formula (3):
Activating the convolution feature map M_Ck obtained by the convolution operation means inputting every element of M_Ck into the activation function f for mapping, as in formula (4):
M_Ak(m, n) = f(M_Ck(m, n))    (4)
where M_Ak denotes the activation feature map obtained from M_Ck through the activation function, and f denotes the activation function.
The present invention uses the ReLU activation function:
f(x) = ReLU(x) = max(0, x)    (5)
The activation feature map M_Ak obtained by the activation operation is then down-sampled; down-sampling mainly reduces the feature dimension by sampling, further compressing and abstracting the image features.
The down-sampling operation first divides the input data into non-overlapping s_s × s_s blocks, where s_s denotes the side length of the sampling core; the data of each sub-block are then input into the sampling function for mapping, and the mapped output is the sampled value corresponding to that sub-block, as in formula (6):
M_Sk(m, n) = s(M_Ak(s_s·(m-1)+1 : s_s·m, s_s·(n-1)+1 : s_s·n))    (6)
where M_Sk denotes the sampling feature map obtained from M_Ak through the sampling function, M_Sk(m, n) denotes the element in row m, column n of M_Sk, and s denotes the sampling function. Fig. 8 illustrates the down-sampling of a 4 × 4 input with s_s = 2.
The present invention uses maximum-value sampling.
Maximum-value sampling takes the maximum of the element values within a sampling block as the feature of that block, as in formula (7):
s(I) = max(I)    (7)
In image processing, maximum-value sampling extracts the texture information of the image and, to a certain extent, maintains certain invariances of the image such as rotation, translation and scaling; in addition, statistical experiments show that, compared with average sampling, maximum-value sampling is insensitive to changes in the data distribution, so its feature extraction is relatively stable.
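A miniature numpy version of the down-sampling of Fig. 8 is shown below: a 4 × 4 activation map, a sampling core of side length s_s = 2, and maximum-value sampling per non-overlapping block as in formulas (6) and (7). The numbers are illustrative.

```python
# Maximum-value down-sampling of a 4x4 map with s_s = 2.
import numpy as np

M_A = np.array([[1, 3, 2, 0],
                [5, 6, 1, 2],
                [0, 2, 9, 4],
                [3, 1, 7, 8]], dtype=float)
s_s = 2
M_S = M_A.reshape(4 // s_s, s_s, 4 // s_s, s_s).max(axis=(1, 3))
print(M_S)      # [[6. 2.]
                #  [3. 9.]]
```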
After feature extraction, the obtained feature maps need to be vectorized into a feature vector fea, so that the feature can be input into the classification network and the network parameters can then be learned.
The vectorization operation is as in formula (8):
fea = concat(v(M_S1), v(M_S2), ..., v(M_SK))    (8)
where v denotes stretching the data into a column vector, concat denotes connecting the indicated vectors in series into one high-dimensional vector, and K denotes the total number of feature maps.
(B) Unified-dimension linear mapping
After the several convolution, activation and down-sampling operations of the convolutional network, the image yields a series of feature maps. The present invention uses linear mappings to map the feature of every level to a feature of the same dimension, as in formula (9), where n_f denotes the dimension of the unified-dimension feature vector and n_i denotes the dimension of fea_i:
f_i = W_i · fea_i,  W_i ∈ R^(n_f × n_i)    (9)
(C) The softmax classification network
Fig. 7 illustrates the basic structure of the softmax network; in the figure, f_i denotes the i-th component of the input feature vector f, N_C denotes the number of categories, and W_id denotes the linear mapping matrix.
It should be noted that when a linear mapping is realized in network form, a linear mapping with a bias is generally used. Since adding a bias vector can be realized equivalently by rewriting the mapping matrix and the mapped vector, for convenience of writing all linear mapping expressions in the present invention adopt the rewritten form, and the original variable names directly denote the rewritten mapping matrices and mapped vectors, without showing the bias in the expressions. In the formulas, o denotes the output after the linear mapping, and o_i in the figure denotes the i-th component of o.
o = W_id · f    (10)
h_i denotes the i-th component of the network output value h obtained from o after activation by the softmax function:
h = softmax(o)    (11)
where the softmax function is the nonlinear activation function used by softmax networks, with the expression:
softmax(o)_i = exp(o_i) / Σ_j exp(o_j)    (12)
From formula (12), the softmax function is non-negative and normalized to sum to one; its output value can therefore be taken as the probability that the input data belongs to the corresponding class, i.e.
h_i = P(label_i = 1) = P(input ∈ CLASS_i)    (13)
where label is the binary (one-hot) vector of the data's original tag LABEL (indicating which person in the data set the sample belongs to), as in formula (14): label_i = 1 if i = LABEL and label_i = 0 otherwise; CLASS_i denotes the data set of the i-th class, which in face recognition is all images of the i-th person.
class denotes the categorical decision the network gives according to its output h:
class = argmax_i h_i    (15)
Identifying the face identity in an image is an image classification problem. The classification algorithm used by the present invention is the softmax classification network, and the loss function used is the cross-entropy loss, as in formula (16):
loss(h, label) = -Σ_i label_i · log(h_i)    (16)
where h is the network output value vector produced by the softmax function in the classification network, and label is the binary vector of the data's original tag LABEL.
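A small numpy check of this classifier head follows: the linear mapping o = W_id·f of formula (10), the softmax activation of formula (12), and the cross-entropy against a one-hot label as in formulas (14) and (16). The dimensions and values are illustrative.

```python
# Linear mapping, softmax activation and cross-entropy loss in miniature.
import numpy as np

rng = np.random.default_rng(2)
n_f, n_classes = 256, 10
f     = rng.standard_normal(n_f)            # unified-dimension input feature
W_id  = rng.standard_normal((n_classes, n_f)) * 0.01

o = W_id @ f                                # linear mapping, formula (10)
h = np.exp(o - o.max()); h /= h.sum()       # softmax, numerically stabilized
label = np.zeros(n_classes); label[3] = 1.0 # binary (one-hot) vector of the original tag LABEL
loss = -np.sum(label * np.log(h))           # cross-entropy loss, formula (16)
print(h.sum(), loss)                        # h is non-negative and sums to one
```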
Since the network has many parameters, over-fitting occurs easily; regularization is therefore applied to constrain the network parameters and alleviate over-fitting to a certain extent. The present invention uses two-norm regularization. From the above, the network error can be expressed as formula (17):
J(θ) = loss(h, label) + λ Σ ||θ||²    (17)
where J(θ) denotes the network error, λ is the regularization coefficient, and θ is the set of all learnable parameters in the feature learning network, including the convolution kernels of the convolutional networks and the linear mapping matrices of the classification networks, as in formula (18):
θ = {θ_c, θ_id},  θ_c = {c_1, c_2, ..., c_K},  θ_id = W_id    (18)
The learning objective of the network is to solve for the parameter set θ_opt that minimizes the network error of formula (17), as in formula (19):
In Fig. 6, J(Θ_i) denotes the network error computed by the i-th layer convolutional network, where Θ_i denotes the set of all network parameters from layer 1 to layer i of the convolutional network together with the unified-dimension linear mapping matrix W_i of the current layer, as in formula (20):
where θ_i denotes the set of learnable parameters of the i-th layer convolutional network, including all learnable parameters involved in its convolution, activation and down-sampling operations.
(D) Multi-level feature fusion and dimension reduction
As shown in Fig. 6, feature_merge denotes the joint feature vector formed by connecting the unified-dimension feature vectors f_i of every level in series, as in formula (21):
W_T denotes the mapping matrix of the linear dimension-reduction mapping applied to the joint feature vector feature_merge, and f_T denotes the comprehensive feature vector obtained from feature_merge through the linear dimension-reduction mapping; it contains the feature information of every level of the network, as in formula (22), where n_T denotes the chosen dimension of f_T:
J(Θ_T) denotes the network error of the classification network assigned to the comprehensive feature vector f_T, where Θ_T denotes the set of all convolutional network parameters, all unified-dimension linear mapping matrices and the linear dimension-reduction mapping matrix, as in formula (23):
(E) Back-propagation of the network error
The present invention updates the network parameters using the BP (back-propagation) algorithm.
According to the chain rule, the network error propagates from back to front.
Derivatives of the classification network linear mappings:
The learnable parameter in the classification network of the i-th layer (i = 1, ..., 4, T) is W_i,id; according to the definition of J(Θ_i) and the chain rule, one obtains:
At the same time, the derivative of J(Θ_i) with respect to f can be obtained:
Derivatives of the unified-dimension linear mappings:
Each unified-dimension linear mapping matrix W_i affects both network errors J(Θ_i) and J(Θ_T). Therefore, when W_i is updated with the BP algorithm, its update gradient is formed jointly from the derivative of J(Θ_i) with respect to W_i and the derivative of J(Θ_T) with respect to W_i; meanwhile, during training each network error can be assigned a weight, and summing them gives the update gradient of W_i, as in formula (26):
According to the chain rule, one obtains:
Therefore:
Derivative of the linear dimension-reduction mapping of the comprehensive feature layer:
The linear dimension-reduction mapping matrix W_T of the comprehensive feature layer affects only J(Θ_T); according to the chain rule one easily obtains:
Meanwhile, the derivative with respect to the input feature vector of each level's unified-dimension linear mapping can be computed as:
Derivatives of the convolutional network parameters:
The only learnable parameters in a convolutional network are the convolution kernels of the convolution operations; therefore the update gradient of J(Θ_i) with respect to the convolution kernel c of each level needs to be computed. According to the chain rule, one obtains:
where
location denotes the binarized matrix of the positions in M_A of the values retained in M_S, i.e.:
The above introduces the principle of feature learning with the multi-level-feature-fusion weighted-accumulation deep convolutional network; the specific algorithm is given below, as shown in Table 1.
Table 1: the procedure of feature learning using the multi-level-feature-fusion weighted-accumulation deep convolutional network.
The authentication process of the present invention can then be carried out as follows.
(1) Image preprocessing
The present invention uses a face detection algorithm based on cascaded Adaboost to detect faces in the image, then uses an SDM-based facial feature point location algorithm to locate the feature points of the detected face, and corrects and normalizes the face by scaling, rotating and translating the image for alignment, finally obtaining a face image of size 100*100 in which the image coordinate of the left eye is (30, 30) and that of the right eye is (30, 70), as shown in Fig. 3.
The present invention uses a simple gray-scale normalization as preprocessing, as in formula (1), where I(i, j) denotes the gray value of the image at (i, j). The main purpose of the gray-scale normalization is to let the network process continuous data and to avoid handling large discrete gray values, thereby avoiding abnormal situations.
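A hedged sketch of this preprocessing step is given below: given the two eye positions found by the detection/landmark stage, the face is scaled, rotated and translated so that in the 100*100 output the left eye sits at image coordinate (30, 30) and the right eye at (30, 70), and a simple gray-level normalization is then applied. The eye positions used, the zero-mean/unit-variance normalization (the patent's formula (1) is not reproduced in this text) and the conversion of (row, col) coordinates to OpenCV's (x, y) order are illustrative assumptions.

```python
# Alignment to a 100x100 face with eyes at fixed positions, plus a simple
# gray normalization (an assumed stand-in for formula (1)).
import cv2
import numpy as np

def align_face(gray, left_eye_xy, right_eye_xy, size=100):
    dst_l, dst_r = np.array([30.0, 30.0]), np.array([70.0, 30.0])    # (x, y) targets
    src_l, src_r = np.asarray(left_eye_xy, float), np.asarray(right_eye_xy, float)
    d, D = src_r - src_l, dst_r - dst_l
    s = np.linalg.norm(D) / np.linalg.norm(d)                        # scale
    a = np.arctan2(D[1], D[0]) - np.arctan2(d[1], d[0])              # rotation angle
    A = s * np.array([[np.cos(a), -np.sin(a)], [np.sin(a), np.cos(a)]])
    t = dst_l - A @ src_l                                            # translation
    M = np.hstack([A, t.reshape(2, 1)])
    return cv2.warpAffine(gray, M, (size, size))

def normalize_gray(face):
    # One common choice of gray normalization (an assumption, not formula (1)):
    # zero mean, unit standard deviation.
    face = face.astype(np.float32)
    return (face - face.mean()) / (face.std() + 1e-6)

gray = (np.random.rand(240, 320) * 255).astype(np.uint8)             # stand-in for a detected face image
face = normalize_gray(align_face(gray, left_eye_xy=(120, 100), right_eye_xy=(180, 100)))
print(face.shape, float(face.mean()), float(face.std()))
```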
(2) Feature extraction
The image features are extracted with the trained network.
After the training of the multi-level-feature-fusion weighted-accumulation deep convolutional network is completed, the trained network can be used to extract the features of the input image, as shown in Table 2.
(3) Feature comparison
(I) Absolute-value-normalized cosine value
The absolute-value-normalized cosine value proposed by the present invention (cosine normalized by absolute value, cosAN) is defined as in formula (39):
where:
Experiments show that the absolute-value-normalized cosine value is sensitive to differences in vector norm and can compensate for the incomplete difference description caused by cosine similarity ignoring the norm difference.
(II) LDA-based multi-difference fusion comparison algorithm
The present invention combines the cosine similarity, the absolute-value-normalized cosine value and the two feature norms into a four-dimensional difference vector, i.e.
f_diff(f_T1, f_T2) = [cos(f_T1, f_T2), cosAN(f_T1, f_T2), |f_T1|, |f_T2|]^T    (41)
The four-dimensional difference vector is then fused into a one-dimensional value using LDA (linear discriminant analysis); that is, the difference vector mapping matrix W_LDA maps the four-dimensional difference vector to a one-dimensional value:
sim(f_T1, f_T2) = W_LDA · f_diff    (42)
where W_LDA denotes the mapping vector obtained with LDA.
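A sketch of how such a mapping W_LDA could be fitted follows: four-dimensional difference vectors computed from genuine pairs (label 1) and impostor pairs (label 0) are fed to linear discriminant analysis, whose one-dimensional projection is then used as the comparison score. The data here are random stand-ins; scikit-learn's LDA is one possible implementation, not necessarily the patent's.

```python
# Fitting a 4-to-1 LDA projection on difference vectors (random stand-in data).
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

rng = np.random.default_rng(3)
genuine  = rng.normal(loc=[0.8, 0.7, 10.0, 10.0], scale=0.1, size=(500, 4))
impostor = rng.normal(loc=[0.1, 0.1, 10.0, 10.0], scale=0.2, size=(500, 4))
X = np.vstack([genuine, impostor])
y = np.concatenate([np.ones(500), np.zeros(500)])

lda = LinearDiscriminantAnalysis()
lda.fit(X, y)
w_lda = lda.coef_.ravel()                    # the four-to-one mapping used as in formula (42)
scores = X @ w_lda
print(w_lda, scores[:3])
```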
Advantageous effects brought by the technical solution of this embodiment of the present invention:
This embodiment performs feature learning and feature extraction with the multi-level-feature-fusion weighted-accumulation deep convolutional network, and then compares the features of two face images with the LDA-based multi-difference fusion comparison algorithm, which has the following five advantages. First, the present invention learns and extracts features automatically with convolutional networks, avoiding the shortcomings of manual features. Second, joint training with multi-level classification networks avoids the gradient dispersion problem. Third, multi-level feature fusion increases the richness of image features and compensates for ordinary deep networks' insufficient processing of each level's features and their inability to describe an image adequately with high-level features alone. Fourth, weighting the multi-level classification network errors increases the flexibility of network learning. Fifth, the LDA-based multi-difference fusion comparison algorithm solves the problem that cosine similarity portrays feature vector differences incompletely. Tested on the FERET database, authentication rates of 99.9%, 100%, 98.8% and 99.6% were achieved on the four subsets Fb, Fc, DupI and DupII respectively (at a false accept rate of 0.1%).
The above are preferred embodiments of the present invention. It should be noted that those skilled in the art may make several improvements and modifications without departing from the principles of the present invention, and such improvements and modifications shall also be regarded as falling within the protection scope of the present invention.

Claims (8)

1. A method of face authentication, characterized by including:
extracting, from a face image to be authenticated and a face image template, feature vectors of multiple levels using a multi-level deep convolutional network jointly trained in advance with a multi-level classification network;
mapping the feature vectors of the multiple levels, through unified-dimension linear mapping matrices, into unified-dimension feature vectors;
concatenating the unified-dimension feature vectors into a joint feature vector;
reducing the dimension of the joint feature vector through a linear dimension-reduction mapping matrix to obtain a comprehensive feature vector;
comparing, by linear discriminant analysis using an absolute-value-normalized cosine value, the comprehensive feature vector obtained for the face image to be authenticated with the comprehensive feature vector of the face image template;
the absolute-value-normalized cosine value being defined as follows:
the comparing, by linear discriminant analysis using an absolute-value-normalized cosine value, of the comprehensive feature vector obtained for the face image to be authenticated with the comprehensive feature vector of the face image template including:
performing a cosine similarity operation with the comprehensive feature vector obtained for the face image to be authenticated and the comprehensive feature vector of the face image template as inputs, obtaining a cosine similarity;
performing an absolute-value-normalized cosine operation with the comprehensive feature vector obtained for the face image to be authenticated and the comprehensive feature vector of the face image template as inputs, obtaining an absolute-value-normalized cosine value;
taking the norms of the comprehensive feature vector obtained for the face image to be authenticated and of the comprehensive feature vector of the face image template, obtaining a first norm and a second norm;
composing the cosine similarity, the absolute-value-normalized cosine value, the first norm and the second norm into a four-dimensional difference vector;
mapping the difference vector with a difference vector mapping matrix to obtain a one-dimensional value, which serves as the comparison score;
comparing the comparison score with a comparison threshold; if the comparison score is greater than the comparison threshold, the face authentication passes.
2. The method of face authentication according to claim 1, characterized in that before extracting, from the face image to be authenticated and the face image template, feature vectors of multiple levels using the multi-level deep convolutional network jointly trained in advance with the multi-level classification network, the method further includes:
preprocessing the face image to be authenticated and the face image template, the preprocessing including feature point location, image correction and normalization.
3. The method of face authentication according to claim 1, characterized in that the feature vector of each level is computed by the following steps:
performing a convolution operation on the face image to be authenticated and the face image template using convolution kernels to obtain convolution feature maps, the convolution operation being a 'same' convolution;
activating the convolution feature maps with an activation function to obtain activation feature maps, the activation function being the ReLU activation function;
performing a down-sampling operation on the activation feature maps using a sampling function to obtain sampling feature maps, the down-sampling operation being maximum-value sampling;
repeating the above convolution, activation and down-sampling operations on the obtained sampling feature maps to obtain new sampling feature maps, and repeating this several times;
vectorizing all of the obtained sampling feature maps to obtain the feature vector of each level.
4. The method of face authentication according to any one of claims 1-3, characterized in that the multi-level deep convolutional network is obtained by joint training with softmax classification networks, the training steps including:
extracting feature vectors of multiple levels from face image samples using the initialized multi-level deep convolutional network;
mapping the feature vectors of the multiple levels, through unified-dimension linear mapping matrices, into unified-dimension feature vectors of the same dimension;
mapping each unified-dimension feature vector with a linear mapping matrix in a softmax classification network to obtain a mapped vector;
activating the mapped vectors with the softmax function to obtain network output value vectors;
taking the network output value vectors and the label data of the face image samples as inputs and computing the network errors with the cross-entropy loss function;
concatenating the unified-dimension feature vectors into a joint feature vector;
reducing the dimension of the joint feature vector through the linear dimension-reduction mapping matrix to obtain a comprehensive feature vector;
assigning weights to the network errors and computing the update gradients of the linear mapping matrices, the unified-dimension linear mapping matrices, the linear dimension-reduction mapping matrix and the convolution kernels;
iteratively updating the linear mapping matrices, the unified-dimension linear mapping matrices, the linear dimension-reduction mapping matrix and the convolution kernels using their update gradients;
judging whether the network errors and the number of iterations meet the requirements; if so, ending; otherwise, returning to the step of extracting feature vectors of multiple levels from the face image samples using the initialized multi-level deep convolutional network.
5. A device of face authentication, characterized by comprising:
a first extraction module, configured to extract the feature vectors of multiple levels successively from the facial image to be authenticated and the facial image template using a multi-layer depth convolutional network jointly trained in advance with a multi-layer classification network;
a first mapping module, configured to pass the feature vectors of the multiple levels sequentially through unified dimensional Linear Mapping matrices, mapping them into unified dimensional feature vectors;
a first concatenation module, configured to concatenate the unified dimensional feature vectors into a union feature vector;
a second mapping module, configured to perform dimensionality-reduction mapping on the union feature vector through a linear dimensionality reduction mapping matrix to obtain a multi-feature vector;
a first comparison module, configured to compare, through linear discriminant analysis and the absolute value normalization cosine value, the multi-feature vector obtained for the facial image to be authenticated with the multi-feature vector of the facial image template for authentication;
wherein the absolute value normalization cosine value is defined as follows:
the first comparison module comprising:
a first computing unit, configured to perform a cosine similarity operation taking the multi-feature vector obtained for the facial image to be authenticated and the multi-feature vector of the facial image template as inputs, to obtain a cosine similarity;
a second computing unit, configured to perform an absolute value normalization cosine operation taking the multi-feature vector obtained for the facial image to be authenticated and the multi-feature vector of the facial image template as inputs, to obtain an absolute value normalization cosine value;
a third computing unit, configured to perform modulus operations on the multi-feature vector obtained for the facial image to be authenticated and the multi-feature vector of the facial image template, to obtain a first modulus and a second modulus;
a second vector unit, configured to form a four-dimensional difference vector from the cosine similarity, the absolute value normalization cosine value, the first modulus and the second modulus;
a mapping unit, configured to map the difference vector using a difference vector mapping matrix to obtain a one-dimensional vector as a comparison score;
a comparison unit, configured to compare the comparison score with a comparison threshold; if the comparison score is greater than the comparison threshold, the face authentication passes (a sketch of this comparison follows this claim).
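As a non-authoritative illustration of the comparison recited in claim 5, the sketch below computes the cosine similarity, an absolute value normalization cosine value, the two moduli, the four-dimensional difference vector and the comparison score in NumPy. Because the formula referenced above is not reproduced in this text, abs_norm_cosine is only one plausible reading of the absolute value normalization cosine value; the difference vector mapping weights w_diff and the comparison threshold are likewise placeholders.

import numpy as np

def abs_norm_cosine(x, y):
    # Assumed form: cosine computed on element-wise absolute products.
    return np.sum(np.abs(x * y)) / (np.linalg.norm(x) * np.linalg.norm(y) + 1e-12)

def compare(feat_probe, feat_template, w_diff, threshold):
    # Cosine similarity between the two multi-feature vectors.
    cos_sim = feat_probe @ feat_template / (
        np.linalg.norm(feat_probe) * np.linalg.norm(feat_template) + 1e-12)

    # Absolute value normalization cosine value (assumed definition above).
    abs_cos = abs_norm_cosine(feat_probe, feat_template)

    # First and second modulus (vector lengths).
    norm_probe = np.linalg.norm(feat_probe)
    norm_template = np.linalg.norm(feat_template)

    # Four-dimensional difference vector, mapped to a scalar comparison score
    # by a length-4 difference vector mapping weight vector w_diff.
    diff_vec = np.array([cos_sim, abs_cos, norm_probe, norm_template])
    score = float(w_diff @ diff_vec)

    # Authentication passes if the score exceeds the comparison threshold.
    return score > threshold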
6. The device of face authentication according to claim 5, characterized by further comprising, before the first extraction module:
a preprocessing module, configured to preprocess the facial image to be authenticated and the facial image template, the preprocessing including feature point location, image rectification and normalization (a preprocessing sketch follows this claim).
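A minimal preprocessing sketch is given below, assuming that feature point location has already produced the two eye-center coordinates (the claim does not specify which feature points or which alignment method are used). It uses OpenCV for the rectification and resizing steps; the output size and the [0, 1] pixel scaling are placeholders.

import numpy as np
import cv2

def preprocess(image_bgr, left_eye, right_eye, output_size=(128, 128)):
    # Work on a grayscale copy of the input face image.
    gray = cv2.cvtColor(image_bgr, cv2.COLOR_BGR2GRAY)

    # Image rectification: rotate about the eye midpoint so the eye line is horizontal.
    dx = right_eye[0] - left_eye[0]
    dy = right_eye[1] - left_eye[1]
    angle = float(np.degrees(np.arctan2(dy, dx)))
    center = ((left_eye[0] + right_eye[0]) / 2.0, (left_eye[1] + right_eye[1]) / 2.0)
    M = cv2.getRotationMatrix2D(center, angle, 1.0)
    aligned = cv2.warpAffine(gray, M, (gray.shape[1], gray.shape[0]))

    # Normalization: resize to a fixed size and scale pixel values to [0, 1].
    face = cv2.resize(aligned, output_size).astype(np.float32) / 255.0
    return face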
7. The device of face authentication according to claim 5, characterized in that the feature vector of each level is calculated by the following units:
a convolution unit, configured to perform a convolution operation on the facial image to be authenticated and the facial image template using convolution kernels to obtain convolution feature maps, the convolution operation being a same-convolution operation;
an activation unit, configured to perform an activation operation on the convolution feature maps using an activation function to obtain activation feature maps, the activation function being the ReLU activation function;
a sampling unit, configured to perform a down-sampling operation on the activation feature maps using a sampling function to obtain sampling feature maps, the down-sampling operation being max-value sampling;
a cycling unit, configured to repeat the above convolution, activation and down-sampling operations on the obtained sampling feature maps to obtain new sampling feature maps, repeating this several times;
a first vector unit, configured to vectorize all the obtained sampling feature maps to obtain the feature vector of each level (a sketch of one such level follows this claim).
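The sketch below illustrates, in NumPy, one level built from the units above: same-convolution, ReLU activation and 2x2 max-value down-sampling repeated once per kernel, followed by vectorization of all sampling feature maps. The kernel list, the 2x2 pooling window and the single-channel input are assumptions of the sketch; an actual implementation would normally rely on a deep-learning framework for these primitives.

import numpy as np

def same_conv2d(x, kernel):
    # "Same" convolution (cross-correlation form, as is conventional for CNNs):
    # zero-pad so the output keeps the input's spatial size. Assumes an odd-sized kernel.
    kh, kw = kernel.shape
    ph, pw = kh // 2, kw // 2
    padded = np.pad(x, ((ph, ph), (pw, pw)))
    out = np.zeros(x.shape, dtype=np.float64)
    for i in range(x.shape[0]):
        for j in range(x.shape[1]):
            out[i, j] = np.sum(padded[i:i + kh, j:j + kw] * kernel)
    return out

def relu(x):
    # ReLU activation.
    return np.maximum(x, 0.0)

def max_pool2(x):
    # 2x2 max-value down-sampling (odd trailing rows/columns are truncated).
    h, w = x.shape[0] // 2 * 2, x.shape[1] // 2 * 2
    x = x[:h, :w]
    return x.reshape(h // 2, 2, w // 2, 2).max(axis=(1, 3))

def level_feature(image, kernels):
    # Repeat convolution -> activation -> down-sampling for each kernel,
    # keep every sampling feature map, then vectorize and concatenate them.
    fmap, maps = image, []
    for k in kernels:
        fmap = max_pool2(relu(same_conv2d(fmap, k)))
        maps.append(fmap)
    return np.concatenate([m.ravel() for m in maps])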
8. The device of face authentication according to any one of claims 5-7, characterized in that the multi-layer depth convolutional network is obtained by joint training with a softmax classification network, comprising:
a second extraction module, configured to extract the feature vectors of multiple levels successively from a facial image sample using the initialized multi-layer depth convolutional network;
a third mapping module, configured to pass the feature vectors of the multiple levels sequentially through unified dimensional Linear Mapping matrices, mapping them into unified dimensional feature vectors of the same dimension;
a fourth mapping module, configured to map each unified dimensional feature vector separately using a Linear Mapping matrix in the softmax classification network to obtain mapped vectors;
an activation module, configured to activate the mapped vectors using a softmax function to obtain network output value vectors;
a first computing module, configured to take the network output value vectors and the label data of the facial image sample as inputs and calculate the network error through a cross-entropy loss function;
a second concatenation module, configured to concatenate the unified dimensional feature vectors into a union feature vector;
a fifth mapping module, configured to perform dimensionality-reduction mapping on the union feature vector through a linear dimensionality reduction mapping matrix to obtain a multi-feature vector;
a second computing module, configured to distribute weights to the network error and calculate the update gradients of the Linear Mapping matrix, the unified dimensional Linear Mapping matrices, the linear dimensionality reduction mapping matrix and the convolution kernels;
an update module, configured to iteratively update the Linear Mapping matrix, the unified dimensional Linear Mapping matrices, the linear dimensionality reduction mapping matrix and the convolution kernels using their update gradients;
a judgment module, configured to judge whether the network error and the number of iterations meet the requirements; if so, ending the training; otherwise, returning to the second extraction module.
CN201510490244.7A 2015-08-11 2015-08-11 The method and apparatus of face authentication Active CN105138973B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201510490244.7A CN105138973B (en) 2015-08-11 2015-08-11 The method and apparatus of face authentication

Publications (2)

Publication Number Publication Date
CN105138973A CN105138973A (en) 2015-12-09
CN105138973B true CN105138973B (en) 2018-11-09

Family

ID=54724317

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201510490244.7A Active CN105138973B (en) 2015-08-11 2015-08-11 The method and apparatus of face authentication

Country Status (1)

Country Link
CN (1) CN105138973B (en)

Families Citing this family (24)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105740808B (en) * 2016-01-28 2019-08-09 北京旷视科技有限公司 Face identification method and device
WO2017139927A1 (en) * 2016-02-17 2017-08-24 Intel Corporation Region proposal for image regions that include objects of interest using feature maps from multiple layers of a convolutional neural network model
CN106022215B (en) * 2016-05-05 2019-05-03 北京海鑫科金高科技股份有限公司 Man face characteristic point positioning method and device
CN106067096B (en) * 2016-06-24 2019-09-17 北京邮电大学 A kind of data processing method, apparatus and system
CN106407982B (en) * 2016-09-23 2019-05-14 厦门中控智慧信息技术有限公司 A kind of data processing method and equipment
CN107871101A (en) * 2016-09-23 2018-04-03 北京眼神科技有限公司 A kind of method for detecting human face and device
CN106503669B (en) * 2016-11-02 2019-12-10 重庆中科云丛科技有限公司 Training and recognition method and system based on multitask deep learning network
CN107066934A (en) * 2017-01-23 2017-08-18 华东交通大学 Tumor stomach cell image recognition decision maker, method and tumor stomach section identification decision equipment
CN106960185B (en) * 2017-03-10 2019-10-25 陕西师范大学 The Pose-varied face recognition method of linear discriminant deepness belief network
CN106934373A (en) * 2017-03-14 2017-07-07 重庆文理学院 A kind of library book damages assessment method and system
CN108628868B (en) * 2017-03-16 2021-08-10 北京京东尚科信息技术有限公司 Text classification method and device
CN107133220B (en) * 2017-06-07 2020-11-24 东南大学 Geographic science field named entity identification method
CN107622282A (en) * 2017-09-21 2018-01-23 百度在线网络技术(北京)有限公司 Image verification method and apparatus
CN108764207B (en) * 2018-06-07 2021-10-19 厦门大学 Face expression recognition method based on multitask convolutional neural network
TWI689285B (en) * 2018-11-15 2020-04-01 國立雲林科技大學 Facial symmetry detection method and system thereof
US10846518B2 (en) 2018-11-28 2020-11-24 National Yunlin University Of Science And Technology Facial stroking detection method and system thereof
CN109886335B (en) * 2019-02-21 2021-11-26 厦门美图之家科技有限公司 Classification model training method and device
CN109885578B (en) * 2019-03-12 2021-08-13 西北工业大学 Data processing method, device, equipment and storage medium
CN109934198B (en) * 2019-03-22 2021-05-14 北京市商汤科技开发有限公司 Face recognition method and device
CN110793525A (en) * 2019-11-12 2020-02-14 深圳创维数字技术有限公司 Vehicle positioning method, apparatus and computer-readable storage medium
CN111626889A (en) * 2020-06-02 2020-09-04 小红书科技有限公司 Method and device for predicting categories corresponding to social content
CN113158908A (en) * 2021-04-25 2021-07-23 北京华捷艾米科技有限公司 Face recognition method and device, storage medium and electronic equipment
WO2022263452A1 (en) 2021-06-15 2022-12-22 Trinamix Gmbh Method for authenticating a user of a mobile device
CN114359034B (en) * 2021-12-24 2023-08-08 北京航空航天大学 Face picture generation method and system based on hand drawing

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2008152530A (en) * 2006-12-18 2008-07-03 Sony Corp Face recognition device, face recognition method, gabor filter applied device, and computer program

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101079103A (en) * 2007-06-14 2007-11-28 上海交通大学 Human face posture identification method based on sparse Bayesian regression
WO2015101080A1 (en) * 2013-12-31 2015-07-09 北京天诚盛业科技有限公司 Face authentication method and device
CN104200224A (en) * 2014-08-28 2014-12-10 西北工业大学 Valueless image removing method based on deep convolutional neural networks
CN104268524A (en) * 2014-09-24 2015-01-07 朱毅 Convolutional neural network image recognition method based on dynamic adjustment of training targets
CN104616032A (en) * 2015-01-30 2015-05-13 浙江工商大学 Multi-camera system target matching method based on deep-convolution neural network

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Research on the application of convolutional neural networks in image recognition (卷积神经网络在图像识别上的应用的研究); 许可 (Xu Ke); China Master's Theses Full-text Database, Information Science and Technology; 2013-07-15 (No. 07); full text *

Also Published As

Publication number Publication date
CN105138973A (en) 2015-12-09

Similar Documents

Publication Publication Date Title
CN105138973B (en) The method and apparatus of face authentication
Wang et al. Grid-based pavement crack analysis using deep learning
CN105447473B (en) A kind of any attitude facial expression recognizing method based on PCANet-CNN
CN110532920B (en) Face recognition method for small-quantity data set based on FaceNet method
EP3029606A2 (en) Method and apparatus for image classification with joint feature adaptation and classifier learning
CN108427921A (en) A kind of face identification method based on convolutional neural networks
CN110046671A (en) A kind of file classification method based on capsule network
CN106803069A (en) Crowd's level of happiness recognition methods based on deep learning
CN104346440A (en) Neural-network-based cross-media Hash indexing method
CN110321870B (en) Palm vein identification method based on LSTM
CN110349229A (en) A kind of Image Description Methods and device
CN110211127B (en) Image partition method based on bicoherence network
CN107871107A (en) Face authentication method and device
CN105205449A (en) Sign language recognition method based on deep learning
CN105574475A (en) Common vector dictionary based sparse representation classification method
CN108446676A (en) Facial image age method of discrimination based on orderly coding and multilayer accidental projection
CN108537257A (en) The zero sample classification method based on identification dictionary matrix pair
CN108416270A (en) A kind of traffic sign recognition method based on more attribute union features
CN107491729A (en) The Handwritten Digit Recognition method of convolutional neural networks based on cosine similarity activation
Zhai et al. Face verification across aging based on deep convolutional networks and local binary patterns
Li et al. Dating ancient paintings of Mogao Grottoes using deeply learnt visual codes
Wan et al. A novel face recognition method: Using random weight networks and quasi-singular value decomposition
CN110490028A (en) Recognition of face network training method, equipment and storage medium based on deep learning
Jain et al. Comparison among different cnn architectures for signature forgery detection using siamese neural network
CN108520201A (en) A kind of robust human face recognition methods returned based on weighted blend norm

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant
GR01 Patent grant
CP01 Change in the name or title of a patent holder

Address after: Floor 1, Building 8, Yard 1, 10th Street, Haidian District, Beijing, 100085.

Patentee after: Beijing Eyes Intelligent Technology Co.,Ltd.

Address before: Floor 1, Building 8, Yard 1, 10th Street, Haidian District, Beijing, 100085.

Patentee before: BEIJING TECHSHINO TECHNOLOGY Co.,Ltd.

CP01 Change in the name or title of a patent holder
TR01 Transfer of patent right

Effective date of registration: 20220401

Address after: 071800 Beijing Tianjin talent home (Xincheng community), West District, Xiongxian Economic Development Zone, Baoding City, Hebei Province

Patentee after: BEIJING EYECOOL TECHNOLOGY Co.,Ltd.

Patentee after: Beijing Eyes Intelligent Technology Co.,Ltd.

Address before: Floor 1, Building 8, Yard 1, 10th Street, Haidian District, Beijing, 100085.

Patentee before: Beijing Eyes Intelligent Technology Co.,Ltd.

TR01 Transfer of patent right
PE01 Entry into force of the registration of the contract for pledge of patent right

Denomination of invention: Method and device for face authentication

Effective date of registration: 20220614

Granted publication date: 20181109

Pledgee: China Construction Bank Corporation Xiongxian sub branch

Pledgor: BEIJING EYECOOL TECHNOLOGY Co.,Ltd.

Registration number: Y2022990000332

PE01 Entry into force of the registration of the contract for pledge of patent right