CN105138973A - Face authentication method and device - Google Patents

Face authentication method and device

Info

Publication number
CN105138973A
Authority
CN
China
Prior art keywords
vector
facial image
mapping matrix
feature vector
network
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201510490244.7A
Other languages
Chinese (zh)
Other versions
CN105138973B (en)
Inventor
郇淑雯
毛秀萍
张伟琳
朱和贵
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Eyes Intelligent Technology Co ltd
Beijing Eyecool Technology Co Ltd
Original Assignee
Beijing Techshino Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Techshino Technology Co Ltd filed Critical Beijing Techshino Technology Co Ltd
Priority to CN201510490244.7A priority Critical patent/CN105138973B/en
Publication of CN105138973A publication Critical patent/CN105138973A/en
Application granted granted Critical
Publication of CN105138973B publication Critical patent/CN105138973B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical


Classifications

    • G — PHYSICS
    • G06 — COMPUTING; CALCULATING OR COUNTING
    • G06V — IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16Human faces, e.g. facial parts, sketches or expressions

Abstract

The invention discloses a face authentication method and device, belonging to the field of biometric recognition. The method comprises the following steps: extracting feature vectors at multiple levels from a face image to be authenticated and a face image template using a multi-level deep convolutional network jointly trained in advance with a multi-level classification network; mapping the feature vectors of the multiple levels to unified-dimension feature vectors through unified-dimension linear mapping matrices; concatenating the unified-dimension feature vectors into a joint feature vector; reducing the dimension of the joint feature vector through a linear dimension-reduction mapping matrix; and comparing and authenticating the obtained comprehensive feature vector of the face image to be authenticated against the comprehensive feature vector of the face image template by linear discriminant analysis using the absolute-value-normalized cosine value. Compared with the prior art, the face authentication method disclosed by the invention has strong anti-interference capability, good extensibility and high authentication accuracy.

Description

Face authentication method and device
Technical field
The present invention relates to the field of biometric recognition, and in particular to a face authentication method and device.
Background technology
Face authentication is a form of biometric recognition: two face images are characterized effectively, their features are extracted, and a classification algorithm judges whether the two photos show the same person. Typically a face image is stored in advance in the face recognition device as the face image template; at authentication time a face image is captured as the face image to be authenticated, the features of the two images are extracted, and a classification algorithm judges whether the two photos belong to the same person.
One way to extract features is to design a feature vector by hand and extract it with various algorithms, e.g. face authentication methods based on geometric features, on subspaces, or on signal processing. However, such methods are very easily affected by factors such as illumination and expression, so their anti-interference capability is poor; moreover, hand-designed feature vectors are mostly tailored to specific situations, so their extensibility is poor.
Face recognition and authentication techniques based on deep networks can learn and extract features automatically, but ordinary deep networks suffer from the vanishing-gradient (gradient dispersion) problem, process and exploit the features of each level insufficiently, and high-level features alone are not enough to describe an image fully.
Summary of the invention
The present invention provides a face authentication method and device; the method has strong anti-interference capability, good extensibility and high authentication accuracy.
To solve the above technical problems, the present invention provides the following technical solutions:
A face authentication method, comprising:
extracting feature vectors at multiple levels from a face image to be authenticated and a face image template using a multi-level deep convolutional network jointly trained in advance with a multi-level classification network;
mapping the feature vectors of the multiple levels to unified-dimension feature vectors through unified-dimension linear mapping matrices;
concatenating the unified-dimension feature vectors into a joint feature vector;
reducing the dimension of the joint feature vector through a linear dimension-reduction mapping matrix to obtain a comprehensive feature vector;
comparing and authenticating the obtained comprehensive feature vector of the face image to be authenticated against the comprehensive feature vector of the face image template by linear discriminant analysis using the absolute-value-normalized cosine value.
A face authentication device, comprising:
a first extraction module, configured to extract feature vectors at multiple levels from a face image to be authenticated and a face image template using a multi-level deep convolutional network jointly trained in advance with a multi-level classification network;
a first mapping module, configured to map the feature vectors of the multiple levels to unified-dimension feature vectors through unified-dimension linear mapping matrices;
a first concatenation module, configured to concatenate the unified-dimension feature vectors into a joint feature vector;
a second mapping module, configured to reduce the dimension of the joint feature vector through a linear dimension-reduction mapping matrix to obtain a comprehensive feature vector;
a first comparison module, configured to compare and authenticate the obtained comprehensive feature vector of the face image to be authenticated against the comprehensive feature vector of the face image template by linear discriminant analysis using the absolute-value-normalized cosine value.
The present invention has the following beneficial effects:
In the face authentication method of the invention, the feature vectors of multiple levels of the face image to be authenticated and of the face image template are first extracted with a multi-level deep convolutional network jointly trained in advance with a multi-level classification network; the feature vectors of the multiple levels are then mapped to unified-dimension feature vectors through unified-dimension linear mapping matrices; the unified-dimension feature vectors are concatenated into a joint feature vector, whose dimension is reduced through a linear dimension-reduction mapping matrix to obtain a comprehensive feature vector; finally, the obtained comprehensive feature vector of the face image to be authenticated is compared and authenticated against the comprehensive feature vector of the face image template by linear discriminant analysis using the absolute-value-normalized cosine value.
Compared with the prior art, the invention learns and extracts features automatically with the multi-level deep convolutional network; compared with hand-designed feature vectors in the prior art, its anti-interference capability is strong, its extensibility is good, and its authentication accuracy is high.
The multi-level deep convolutional network of the invention is obtained by joint training with a multi-level classification network, which avoids the vanishing-gradient problem and yields high authentication accuracy.
Moreover, the feature vectors of the multiple levels are fused, which increases the richness of the image features and remedies the defects that ordinary deep networks process the features of each level insufficiently and that high-level features alone are not enough to describe an image fully, further improving authentication accuracy.
The inventors also found that traditional comparison methods, especially cosine similarity, ignore the difference in vector norms, so the difference description is incomplete and comparison accuracy is reduced; the invention uses linear discriminant analysis to compare multiple difference features, including the absolute-value-normalized cosine value, which further improves authentication accuracy.
Therefore, the face authentication method of the invention has strong anti-interference capability, good extensibility and high authentication accuracy, avoids the vanishing-gradient problem, and remedies the defect that high-level features alone are not enough to describe an image fully.
Brief description of the drawings
Fig. 1 is a flow chart of the face authentication method of the present invention;
Fig. 2 is a schematic diagram of the face authentication device of the present invention;
Fig. 3 is a schematic diagram of image preprocessing in the present invention;
Fig. 4 is a schematic diagram of the training of the multi-level deep convolutional network and the classifier networks in the present invention;
Fig. 5 is a schematic diagram of the structure of a basic convolutional network in the present invention;
Fig. 6 is a schematic diagram of the multi-level deep convolutional network in the present invention;
Fig. 7 is a schematic diagram of the classifier network in the present invention;
Fig. 8 is a schematic diagram of the down-sampling operation in the present invention.
Detailed description of the embodiments
To make the technical problems to be solved, the technical solutions and the advantages of the present invention clearer, the invention is described in detail below with reference to the accompanying drawings and specific embodiments.
In one aspect, the face authentication method of the present invention, as shown in Fig. 1, comprises:
Step S101: extracting feature vectors at multiple levels from the face image to be authenticated and the face image template using a multi-level deep convolutional network jointly trained in advance with a multi-level classification network.
The multi-level deep convolutional network comprises two or more convolutional networks, each of which comprises convolution, activation and down-sampling operations; the order and number of these operations are not fixed and are chosen according to the actual situation. Each convolutional network of the invention extracts one feature vector, which may be denoted fea_1, fea_2, fea_3, ... (only one group of multi-level feature vectors is listed here, i.e. the feature vectors of the multiple levels of either the face image to be authenticated or the face image template; the formulas below are likewise written for a single image). The input of the first convolutional network is the face image to be authenticated or the face image template; the input of each subsequent convolutional network is the feature map output by the preceding convolutional network.
Ordinary deep networks suffer from the vanishing-gradient problem; the multi-level deep convolutional network of the invention is obtained by joint training with a multi-level classification network, which avoids this problem.
Step S102: mapping the feature vectors of the multiple levels to unified-dimension feature vectors through unified-dimension linear mapping matrices. The unified-dimension linear mapping matrices are obtained by prior training and may be denoted W_1, W_2, W_3, ...; the unified-dimension feature vectors may be denoted f_1, f_2, f_3, ...
Step S103: concatenating the unified-dimension feature vectors into a joint feature vector, which may be denoted feature_merge.
Step S104: reducing the dimension of the joint feature vector through a linear dimension-reduction mapping matrix to obtain a comprehensive feature vector. The linear dimension-reduction mapping matrix is obtained by prior training and may be denoted W_T; the comprehensive feature vector may be denoted f_T.
Step S105: comparing and authenticating the obtained comprehensive feature vector of the face image to be authenticated against the comprehensive feature vector of the face image template by linear discriminant analysis using the absolute-value-normalized cosine value.
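Purely as an illustration of how steps S101 to S105 chain together, the following minimal NumPy sketch outlines the authentication pipeline; the helper names extract_levels and lda_compare, and the parameter names, are assumptions of this sketch rather than the patented implementation.

```python
import numpy as np

def comprehensive_feature(img, extract_levels, W_uni, W_T):
    """Steps S101-S104 for one image: multi-level features -> unified dimension ->
    joint feature vector -> comprehensive feature vector f_T."""
    feas = extract_levels(img)                             # S101: fea_1, fea_2, ...
    fs = [W_i @ fea_i for W_i, fea_i in zip(W_uni, feas)]  # S102: unified-dimension f_i
    feature_merge = np.concatenate(fs)                     # S103: joint feature vector
    return W_T @ feature_merge                             # S104: comprehensive feature f_T

def verify(img_probe, img_template, extract_levels, W_uni, W_T, lda_compare):
    """Step S105: LDA-based comparison of the two comprehensive feature vectors."""
    f1 = comprehensive_feature(img_probe, extract_levels, W_uni, W_T)
    f2 = comprehensive_feature(img_template, extract_levels, W_uni, W_T)
    return lda_compare(f1, f2)                             # True if authentication passes
```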
In the method for face authentication of the present invention, first the multi-layer degree of depth convolutional network of training through multistratum classification network association is in advance used to extract the proper vector of multiple levels of facial image to be certified and facial image template, then the proper vector of multiple level is mapped as unified dimensional proper vector by unified dimensional linear mapping matrix successively, again unified dimensional proper vector is connected into union feature vector, and by union feature vector by linear dimensionality reduction mapping matrix carry out dimensionality reduction map obtain multi-feature vector, finally by linear discriminant analysis, utilize absolute value normalization cosine value, to the certification of comparing of the multi-feature vector of the facial image to be certified obtained and the multi-feature vector of facial image template.
Compared with prior art, the present invention is by multi-layer degree of depth convolutional network automatic learning and extract feature, and compared with going out a proper vector with engineer in prior art, anti-interference energy is strong, and extensibility is good, and certification accuracy rate is high.
Multi-layer degree of depth convolutional network of the present invention carries out joint training by multistratum classification network and obtains, and avoid gradient disperse problem, certification accuracy rate is high.
And the proper vector of multiple level is merged, increases characteristics of image richness, compensate for general degree of depth network insufficient to each hierarchy characteristic process, only utilize high-level characteristic to be not enough to the defect of abundant Description Image; Further increase certification accuracy rate.
Inventor also finds, traditional comparison authentication method, especially cosine similarity method, have ignored the long difference of vector field homoemorphism, thus the description that makes a difference is comprehensive, reduces the accuracy rate of comparison certification; The present invention adopts linear discriminant analysis, compares, further improve certification accuracy rate to the multiple difference characteristics comprising absolute value normalization cosine value.
Therefore the method antijamming capability of face authentication of the present invention is strong, extensibility is good, and certification accuracy rate is high, and avoids gradient disperse problem, makes up the defect utilizing high-level characteristic to be not enough to abundant Description Image.
One as the method for face authentication of the present invention is improved, and also comprises before step S101:
Step S100: carry out pre-service to facial image to be certified and facial image template, pre-service comprises positioning feature point, image rectification and normalized.In fact, facial image template may cross pre-service through prior, can not carry out this step.
The present invention adopts the Face datection algorithm based on cascade Adaboost to carry out Face datection to image, then the facial modeling algorithm based on SDM is utilized to carry out positioning feature point to the face detected, and by image scaling, rotation and translation, face is corrected and normalization alignment, as shown in Figure 3.
The present invention adopts simple gray scale normalization pre-service, and the fundamental purpose of gray scale normalization is convenient to network processes continuous data and avoids processing larger discrete grey value, thus avoid occurring abnormal conditions.
The present invention carries out pre-service to facial image can facilitate follow-up verification process, and avoids the impact of extraordinary image vegetarian refreshments on authentication result.
Another kind as the method for face authentication of the present invention improves, and each convolutional network comprises the operation of convolution operation, activation manipulation and down-sampling, and the proper vector of each level calculates as follows:
Step S1011: use convolution kernel to carry out convolution operation to facial image to be certified and facial image template, obtain convolution characteristic pattern, convolution operation is same convolution operation;
The present invention adopts the convolution operation of same form, carries out zero padding during operation to input picture.The characteristic pattern that the convolution operation of same form obtains is identical with input picture life size.
Step S1012: use activation function to carry out activation manipulation to convolution characteristic pattern, obtain activating characteristic pattern, activation function is ReLU activation function.
Step S1013: use sampling function to carry out down-sampling operation to activation characteristic pattern, obtain characteristic pattern of sampling, down-sampling is operating as maximal value sampling;
The present invention adopts maximal value to sample, maximal value sampling is using the feature of the maximal value of sampling block interior element value as sampling block, and in image procossing, maximal value sampling can extract the texture information of image, and maintain certain unchangeability of image to a certain extent, as rotation, translation, convergent-divergent etc.; In addition, according to statistics experiment, compare average sample, maximal value sampling is insensitive to data changes in distribution, and feature extraction is relatively stable.
Step S1014: above-mentioned steps is repeated to the sampling characteristic pattern obtained, obtains new sampling characteristic pattern, and so repeat several times;
Step S1015: all sampling characteristic patterns obtained are carried out vectorization, obtains the proper vector of each level, all sampling characteristic patterns each step obtained form a vector.
The present invention can extract feature rich and stable proper vector, can describe facial image fully, add certification accuracy rate.
As another improvement of the method for face authentication of the present invention, multi-layer degree of depth convolutional network is obtained by the joint training of softmax sorter network, comprising:
During training, first to there is facial image Sample Storehouse, then use initialized multi-layer degree of depth convolutional network to extract the proper vector of multiple level successively to facial image sample; Be the same with aforesaid step S101, be only verification process above, be training process here, the parameters in multi-layer degree of depth convolutional network now gets initial value;
The proper vector of multiple level is mapped as successively the unified dimensional proper vector of same dimension by unified dimensional linear mapping matrix;
In softmax sorter network, use linear mapping matrix to map unified dimensional proper vector respectively, obtain mapping vector; Linear mapping matrix now gets initial value;
Use softmax function mapping directive amount to activate, obtain network output valve vector;
With the label data of network output valve vector sum facial image sample for input quantity, calculate network error by cross entropy loss function;
Each unified dimensional proper vector is connected into a union feature vector;
Union feature vector is carried out dimensionality reduction mapping by linear dimensionality reduction mapping matrix and obtains multi-feature vector;
For network error assigns weight, and calculate the renewal gradient of linear mapping matrix, unified dimensional linear mapping matrix, linear dimensionality reduction mapping matrix and convolution kernel;
The renewal gradient of linear mapping matrix, unified dimensional linear mapping matrix, linearly dimensionality reduction mapping matrix and convolution kernel is utilized to carry out iteration renewal to linear mapping matrix, unified dimensional linear mapping matrix, linear dimensionality reduction mapping matrix and convolution kernel;
Judge whether network error and iterations meet the requirements, and if so, terminate, otherwise, go to and use initialized multi-layer degree of depth convolutional network to extract the proper vector of multiple level successively to facial image sample.
Network error meets the requirements and refers to network error value minimum (or to a certain extent little), and now the parameters (linear mapping matrix, unified dimensional linear mapping matrix, linear dimensionality reduction mapping matrix and convolution kernel) of multi-layer degree of depth convolutional network and softmax sorter network is the multi-layer degree of depth convolutional network after training and softmax sorter network; Iterations meets the requirements and refers to that iterations reaches setting value.
The present invention carries out joint training by softmax sorter network, further avoid gradient disperse problem, and can further by the flexibility ratio weighting of sorter network error being increased to e-learning.
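A compressed sketch of one pass of this joint training is given below, under several assumptions: only the classifier matrices W_id are updated (via the gradient of formula (24) given later); the other parameters would be updated analogously from the weighted errors; and the names extract_levels, W_uni, level_weights and so on are placeholders introduced for this sketch.

```python
import numpy as np

def joint_train_epoch(samples, labels, extract_levels, W_uni, W_id, W_T,
                      level_weights, w_T, lam, lr):
    """Hypothetical epoch of the joint training with softmax classifier networks."""
    total_error = 0.0
    for x, label in zip(samples, labels):                   # label is a one-hot vector
        feas = extract_levels(x)                             # multi-level feature vectors fea_i
        fs = [W @ fea for W, fea in zip(W_uni, feas)]        # unified-dimension features f_i
        fs_all = fs + [W_T @ np.concatenate(fs)]             # plus the comprehensive feature f_T
        for k, (Wd, f, w) in enumerate(zip(W_id, fs_all, list(level_weights) + [w_T])):
            o = Wd @ f                                       # classifier linear mapping
            e = np.exp(o - o.max())
            h = e / e.sum()                                  # softmax output of this classifier
            total_error += w * -np.sum(label * np.log(h + 1e-12))   # weighted cross-entropy
            grad = -np.outer(label * (1.0 - h), f) + 2 * lam * Wd   # per formula (24)
            W_id[k] = Wd - lr * w * grad                     # weighted gradient step
    return total_error      # compared against the error threshold / iteration cap outside
```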
As another improvement of the method for face authentication of the present invention, step S105 comprises:
Step S1051: carry out cosine similarity operation with the multi-feature vector of the multi-feature vector of the facial image to be certified obtained and facial image template for input quantity, obtain cosine similarity;
Step S1052: carry out the operation of absolute value normalization cosine with the multi-feature vector of the multi-feature vector of the facial image to be certified obtained and facial image template for input quantity, obtain absolute value normalization cosine value;
Step S1053: ask modulo operation to the multi-feature vector of the facial image to be certified obtained and the multi-feature vector of facial image template, obtains the first mould long long with the second mould;
Step S1054: by cosine similarity, absolute value normalization cosine value, the long difference vector four-dimensional with the long composition of the second mould one of the first mould;
Step S1055: difference vector maps by usage variance DUAL PROBLEMS OF VECTOR MAPPING matrix, obtains one-dimensional vector, as comparison score value;
Step S1056: comparison score value and comparison threshold value are compared, if comparison score value is greater than comparison threshold value, then face authentication passes through.
Inventor finds, traditional comparison authentication method, especially cosine similarity method, have ignored the long difference of vector field homoemorphism, thus the description that makes a difference is comprehensive, reduces the accuracy rate of comparison certification; The contrast of absolute value normalization cosine value is responsive to the long difference of vector field homoemorphism, can make up that cosine similarity ignores the long difference of vector field homoemorphism and the difference that causes describes incomplete problem.
Therefore the cosine similarity of comparison feature, absolute value normalization cosine value and two character modules length are combined as a four-dimensional difference vector by the present invention, carry out linear discriminant analysis, further improve certification accuracy rate.
In another aspect, the present invention provides a face authentication device, as shown in Fig. 2, comprising:
a first extraction module 11, configured to extract feature vectors at multiple levels from the face image to be authenticated and the face image template using a multi-level deep convolutional network jointly trained in advance with a multi-level classification network;
a first mapping module 12, configured to map the feature vectors of the multiple levels to unified-dimension feature vectors through unified-dimension linear mapping matrices;
a first concatenation module 13, configured to concatenate the unified-dimension feature vectors into a joint feature vector;
a second mapping module 14, configured to reduce the dimension of the joint feature vector through a linear dimension-reduction mapping matrix to obtain a comprehensive feature vector;
a first comparison module 15, configured to compare and authenticate the obtained comprehensive feature vector of the face image to be authenticated against the comprehensive feature vector of the face image template by linear discriminant analysis using the absolute-value-normalized cosine value.
The face authentication device of the invention has strong anti-interference capability, good extensibility and high authentication accuracy, avoids the vanishing-gradient problem, and remedies the defect that high-level features alone are not enough to describe an image fully.
As an improvement of the face authentication device of the invention, the following is further included before the first extraction module:
a preprocessing module, configured to preprocess the face image to be authenticated and the face image template, the preprocessing comprising facial landmark localization, image rectification and normalization.
Preprocessing the face image facilitates the subsequent authentication process and avoids the influence of abnormal pixels on the authentication result.
As another improvement of the face authentication device of the invention, the feature vector of each level is computed by the following units:
a convolution unit, configured to perform a convolution operation on the face image to be authenticated and the face image template with convolution kernels to obtain convolution feature maps, the convolution operation being a 'same' convolution;
an activation unit, configured to apply an activation function to the convolution feature maps to obtain activation feature maps, the activation function being the ReLU activation function;
a sampling unit, configured to perform a down-sampling operation on the activation feature maps with a sampling function to obtain sampled feature maps, the down-sampling operation being max pooling;
a loop unit, configured to repeat the above steps on the obtained sampled feature maps to obtain new sampled feature maps, and so on for several iterations;
a first vectorization unit, configured to vectorize all the obtained sampled feature maps to obtain the feature vector of each level.
The invention can thus extract rich and stable feature vectors that describe the face image fully, which improves authentication accuracy.
As yet another improvement of the face authentication device of the invention, the multi-level deep convolutional network is obtained by joint training with a softmax classifier network, comprising:
a second extraction module, configured to extract feature vectors at multiple levels from the face image samples using the initialized multi-level deep convolutional network;
a third mapping module, configured to map the feature vectors of the multiple levels to unified-dimension feature vectors of the same dimension through the unified-dimension linear mapping matrices;
a fourth mapping module, configured to map the unified-dimension feature vectors respectively with linear mapping matrices in the softmax classifier network to obtain mapping vectors;
an activation module, configured to activate the mapping vectors with the softmax function to obtain network output vectors;
a first computation module, configured to compute the network error with the cross-entropy loss function, taking the network output vectors and the label data of the face image samples as inputs;
a second concatenation module, configured to concatenate the unified-dimension feature vectors into a joint feature vector;
a fifth mapping module, configured to reduce the dimension of the joint feature vector through the linear dimension-reduction mapping matrix to obtain a comprehensive feature vector;
a second computation module, configured to assign weights to the network errors and compute the update gradients of the linear mapping matrices, the unified-dimension linear mapping matrices, the linear dimension-reduction mapping matrix and the convolution kernels;
an update module, configured to iteratively update the linear mapping matrices, the unified-dimension linear mapping matrices, the linear dimension-reduction mapping matrix and the convolution kernels using their update gradients;
a judgment module, configured to judge whether the network error and the number of iterations meet the requirements; if so, training ends; otherwise, control returns to the second extraction module.
By joint training with softmax classifier networks, the invention further avoids the vanishing-gradient problem and can additionally increase the flexibility of network learning by weighting the errors of the classifier networks.
As yet another improvement of the face authentication device of the invention, the first comparison module comprises:
a first computation unit, configured to compute the cosine similarity, taking the obtained comprehensive feature vector of the face image to be authenticated and the comprehensive feature vector of the face image template as inputs;
a second computation unit, configured to compute the absolute-value-normalized cosine value, taking the obtained comprehensive feature vector of the face image to be authenticated and the comprehensive feature vector of the face image template as inputs;
a third computation unit, configured to compute the norms of the obtained comprehensive feature vector of the face image to be authenticated and of the comprehensive feature vector of the face image template, obtaining a first norm and a second norm;
a second vectorization unit, configured to form a four-dimensional difference vector from the cosine similarity, the absolute-value-normalized cosine value, the first norm and the second norm;
a mapping unit, configured to map the difference vector with a difference-vector mapping matrix to obtain a one-dimensional value, which serves as the comparison score;
a comparison unit, configured to compare the comparison score with a comparison threshold; if the comparison score is greater than the comparison threshold, the face authentication passes.
The invention combines the cosine similarity of the compared features, the absolute-value-normalized cosine value and the two feature-vector norms into a four-dimensional difference vector and performs linear discriminant analysis, further improving authentication accuracy.
The invention is described below with a specific embodiment.
The invention needs to be trained before authentication; the concrete flow is shown in Fig. 4, and the training process is as follows:
The invention first proposes a new convolutional network for extracting image feature vectors, the multi-level feature-fusion cumulative weighted deep convolutional network (the multi-level deep convolutional network), and then performs feature learning on images with the softmax networks and the learning process shown in Fig. 3.
The network learning procedure mainly comprises the forward computation of the network and the back-propagation of the network error.
(A) Forward computation of the convolutional network
A basic convolutional network (as shown in Fig. 5; note that Fig. 5 is an example of a convolutional network rather than the convolutional network used by the invention, which is convolution, activation, down-sampling) comprises a convolution operation, an activation operation and a down-sampling operation; for convenience in subsequent computation, a vectorization operation is generally also needed. In Fig. 6, the convolutional network of each level represents a basic convolutional network, and the order and number of the operations it contains can be set according to the particular problem.
Convolution operations come in different modes; the invention uses 'same' convolution, zero-padding the input image during the operation. The feature map obtained by a 'same' convolution has the same size as the input image.
According to the convolution formula, when the input data is a two-dimensional image, the elements of the convolution feature map are computed as in formula (2):
M_{C_k}(m,n) = I(\mathrm{neighborhood}(m,n,s_c)) \ast_{\mathrm{same}} c_k = \sum_{i=-\frac{s_c+1}{2}+1}^{\frac{s_c+1}{2}-1} \sum_{j=-\frac{s_c+1}{2}+1}^{\frac{s_c+1}{2}-1} I(m+i,\,n+j) \cdot c_k\!\left(\frac{s_c+1}{2}-i,\; \frac{s_c+1}{2}-j\right)   (2)
where c_k denotes the k-th convolution kernel of a convolution operation, c_k(i,j) denotes the element in row i and column j of c_k, s_c denotes the side length of the convolution kernel, M_{C_k} denotes the convolution feature map obtained by convolving the input image I with c_k, M_{C_k}(m,n) denotes the element in row m and column n of M_{C_k}, neighborhood(m,n,s_c) denotes the neighborhood of side length s_c centered at (m,n), and \ast_{same} denotes the 'same' convolution operator.
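As an illustration of formula (2), a minimal NumPy implementation of the 'same' convolution with zero padding might look as follows; the function name same_conv2d is an assumption of this sketch, and the kernel side length s_c is taken to be odd, as the formula implies.

```python
import numpy as np

def same_conv2d(I, c_k):
    """'Same' convolution of a 2-D image I with an s_c x s_c kernel c_k, per formula (2):
    the input is zero-padded so that the output has the same size as I."""
    s_c = c_k.shape[0]
    r = (s_c - 1) // 2                        # half-width of the kernel neighborhood
    I_pad = np.pad(I, r, mode="constant")     # zero padding
    M_C = np.zeros(I.shape, dtype=float)
    for m in range(I.shape[0]):
        for n in range(I.shape[1]):
            patch = I_pad[m:m + s_c, n:n + s_c]           # neighborhood(m, n, s_c)
            M_C[m, n] = np.sum(patch * c_k[::-1, ::-1])   # kernel flipped, as in (2)
    return M_C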
When the input data are feature maps obtained from previous operations, the elements of the convolution feature map are computed as in formula (3):
M_{C_k}(i,j) = \sum_{p=1}^{c_m} \sum_{x=-\frac{s_c+1}{2}+1}^{\frac{s_c+1}{2}-1} \sum_{y=-\frac{s_c+1}{2}+1}^{\frac{s_c+1}{2}-1} M_p(i+x,\,j+y) \cdot c_{pk}\!\left(\frac{s_c+1}{2}-x,\; \frac{s_c+1}{2}-y\right)   (3)
Applying the activation operation to the convolution feature map M_{C_k} obtained by the convolution operation means feeding each element of M_{C_k} into the activation function f, as in formula (4):
M_{A_k}(m,n) = f(M_{C_k}(m,n))   (4)
where M_{A_k} denotes the activation feature map obtained from M_{C_k} through the activation function, and f denotes the activation function.
The invention uses the ReLU activation function:
f(x) = \mathrm{ReLU}(x) = \max(0, x)   (5)
A down-sampling operation is applied to the activation feature map M_{A_k} obtained by the activation operation, mainly to reduce the dimensionality of the features by sampling and to further compress and abstract the image features.
The down-sampling operation first partitions the input data into non-overlapping s_s × s_s blocks, where s_s denotes the side length of the sampling kernel, then feeds the data of each block into the sampling function; the mapped output is the sampled value corresponding to that block, as in formula (6):
M_{S_k}(m,n) = s(M_{A_k}(s_s \cdot (m-1)+1 : s_s \cdot m,\; s_s \cdot (n-1)+1 : s_s \cdot n))   (6)
where M_{S_k} denotes the sampled feature map obtained from M_{A_k} through the sampling function, M_{S_k}(m,n) denotes the element in row m and column n of M_{S_k}, and s denotes the sampling function. Fig. 8 illustrates the down-sampling of a 4 × 4 input with s_s = 2.
The invention uses max pooling (maximum-value sampling).
Max pooling takes the maximum element value within a sampling block as the feature of that block, as in formula (7):
s(I) = \max(I)   (7)
In image processing, max pooling extracts the texture information of the image and preserves, to some extent, certain invariances of the image, e.g. to rotation, translation and scaling; in addition, statistical experiments show that, compared with average pooling, max pooling is insensitive to changes in the data distribution and the extracted features are relatively stable.
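A small NumPy sketch of the max-pooling down-sampling of formulas (6) and (7) is given below; the name max_pool is an assumption, and the input height and width are assumed to be multiples of the sampling side length s_s, as in the 4 × 4, s_s = 2 example of Fig. 8.

```python
def max_pool(M_A, s_s):
    """Maximum-value sampling over non-overlapping s_s x s_s blocks, formulas (6)-(7)."""
    H, W = M_A.shape
    # reshape into blocks and take the maximum of each block
    blocks = M_A.reshape(H // s_s, s_s, W // s_s, s_s)
    return blocks.max(axis=(1, 3))
```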
After feature extraction, the obtained feature maps are vectorized to obtain the feature vector fea; the features are fed into the classifier network, and the network parameters are then learned.
The vectorization operation is as in formula (8):
fea = \mathrm{concat}_{k=1...K}(v(M_{S_k}))   (8)
where v denotes stretching matrix data into a column vector, concat denotes concatenating the indicated vectors into one high-dimensional vector, and K denotes the total number of feature maps.
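Combining the pieces above, one level of the basic convolutional network (convolution, ReLU activation, max pooling, and the vectorization of formula (8)) could be sketched as follows, reusing the hypothetical same_conv2d and max_pool helpers from the earlier sketches; for simplicity the sketch takes a single 2-D input, i.e. the first-level case of formula (2) rather than the multi-map case of formula (3).

```python
import numpy as np

def level_forward(I, kernels, s_s):
    """One basic convolutional network level: conv -> ReLU -> max pool -> vectorize."""
    maps = []
    for c_k in kernels:                        # one convolution kernel per feature map
        M_C = same_conv2d(I, c_k)              # convolution feature map, formula (2)
        M_A = np.maximum(M_C, 0.0)             # ReLU activation, formulas (4)-(5)
        M_S = max_pool(M_A, s_s)               # sampled feature map, formulas (6)-(7)
        maps.append(M_S)
    # formula (8): stretch every sampled feature map into a column and concatenate
    fea = np.concatenate([M_S.reshape(-1) for M_S in maps])
    return fea, maps                           # maps feed the next level's convolutions
```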
(B) Unified-dimension linear mapping
After several rounds of convolution, activation and down-sampling in the convolutional networks, the image yields a series of feature maps; the invention uses linear mappings to map the features of each level to features of the same dimension, as in formula (9), where n_f denotes the dimension of the unified-dimension feature vector and n_i denotes the dimension of fea_i:
f_i = W_i \, fea_i, \quad W_i \in R^{n_f \times n_i}   (9)
(C) The softmax classifier network
Fig. 7 illustrates the basic structure of the softmax network; in the figure, f_i denotes the i-th component of the input feature vector f, N_C denotes the number of classes, and W_{id} denotes the linear mapping matrix.
It should be noted that when a linear mapping is realized in network form, a linear mapping with a bias is generally used. Since adding a bias vector can be realized equivalently by rewriting the mapping matrix and the mapped vector, for convenience of notation all linear mapping expressions in the invention use this rewritten form and directly denote the modified mapping matrix and mapped vector by their original variable names, without writing the bias explicitly. In the formulas, o denotes the output of the linear mapping, and o_i in the figure denotes the i-th component of o.
o = W_{id} \cdot f   (10)
h_i denotes the i-th component of the network output h obtained by activating o with the softmax function:
h = \mathrm{softmax}(o)   (11)
where the softmax function is the nonlinear activation function used by the softmax network; its expression is:
\mathrm{softmax}(x) = \frac{e^{x}}{\sum_i e^{x_i}}   (12)
From formula (12), the softmax function is non-negative and sums to one, so its outputs can be interpreted as the probability that the input data belong to the corresponding class, i.e.
h_i = P(label_i = 1) = P(\mathrm{input} \in \mathrm{CLASS}_i)   (13)
where label is the binary (one-hot) vector of the original data label LABEL (denoting the LABEL-th person in the data set), as in formula (14); CLASS_i denotes the data set of the i-th class, which in face recognition is all images of the i-th person.
class is the classification decision that the network makes from the network output h:
\mathrm{class} = \arg\max_c (h_c)   (15)
Recognizing the identity of the face in an image is an image classification problem; the classification algorithm used by the invention is the softmax classifier network, and the loss function it uses is the cross-entropy loss function, as in formula (16):
\mathrm{loss}(h, label) = -\sum_{i=1}^{N_C} label_i \log(h_i)   (16)
where h is the network output vector after the softmax function in the classifier network, and label is the binary (one-hot) vector of the original data label LABEL.
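The classifier forward pass of formulas (10)-(17) can be sketched directly in NumPy as follows; the function name classifier_error and the way the regularized parameters are passed in are assumptions of this sketch.

```python
import numpy as np

def classifier_error(f, W_id, label, theta, lam):
    """Forward pass of one softmax classifier network and its regularized error,
    formulas (10)-(17). theta is the list of learnable parameter arrays."""
    o = W_id @ f                               # linear mapping, formula (10)
    e = np.exp(o - o.max())
    h = e / e.sum()                            # softmax activation, formulas (11)-(12)
    cls = int(np.argmax(h))                    # classification decision, formula (15)
    loss = -np.sum(label * np.log(h + 1e-12))  # cross-entropy loss, formula (16)
    J = loss + lam * sum(np.sum(p ** 2) for p in theta)   # 2-norm regularization, (17)
    return h, cls, J
```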
Because the network has many parameters, over-fitting easily occurs, so regularization is applied to constrain the network parameters and thereby alleviate over-fitting to some extent; the invention uses 2-norm regularization. From the above, the network error can be expressed as formula (17):
J(\theta) = \mathrm{loss}(h, label) + \lambda \sum \lVert \theta \rVert^2   (17)
where J(θ) denotes the network error, λ is the regularization coefficient, and θ is the set of all learnable parameters in the feature learning network, as in formula (18), comprising the convolution kernels of the convolutional networks and the linear mapping matrix of the classifier network:
\theta = \{\theta_c, \theta_{id}\}, \quad \theta_c = \{c_1, c_2, ..., c_K\}, \quad \theta_{id} = W_{id}   (18)
The learning objective of the network is to find the parameter set θ_opt that minimizes the network error (17), as in formula (19):
\theta_{opt} = \arg\min_\theta J(\theta)   (19)
In Fig. 6, J(Θ_i) denotes the network error computed for the i-th level convolutional network, where Θ_i denotes the set of all network parameters of the convolutional networks from level 1 to level i together with the unified-dimension linear mapping matrix W_i of the current level, as in formula (20):
\Theta_i = \bigcup_{k=1...i} \theta_k \cup W_i   (20)
where θ_i denotes the set of learnable parameters of the i-th level convolutional network, comprising all learnable parameters involved in the convolution, activation and down-sampling operations.
(D) Multi-level feature fusion and dimension reduction
As shown in Fig. 6, feature_merge denotes the joint feature vector formed by concatenating the unified-dimension feature vectors f_i of each level, i.e.
feature_{merge} = \mathrm{concat}_{i=1,...,4}(f_i)   (21)
W_T denotes the mapping matrix that applies the linear dimension-reduction mapping to the joint feature vector feature_merge, and f_T denotes the comprehensive feature vector obtained by mapping feature_merge through the linear dimension reduction; it contains the feature vector information of every level of the network, as in formula (22), where n_T denotes the chosen dimension of f_T:
f_T = W_T \, feature_{merge}, \quad W_T \in R^{n_T \times (4 n_f)}   (22)
J(Θ_T) denotes the network error of the classifier network assigned to the comprehensive feature vector f_T, where Θ_T denotes the set of all convolutional network parameters, all unified-dimension linear mapping matrices and the linear dimension-reduction mapping matrix, as in formula (23):
\Theta_T = \bigcup_{k=1...4} \theta_k \cup \bigcup_{q=1...4} W_q \cup W_T   (23)
(E) Back-propagation of the network error
The invention uses the BP algorithm to update the network parameters.
According to the chain rule, the network error propagates from back to front.
Derivative of the classifier network linear mapping:
The learnable parameter in the classifier network of level i (i = 1, ..., 4, T) is W_{i,id}; from the definition of J(Θ_i) and the chain rule:
\nabla W^{i,id}_{cj} = -label_c \cdot (1 - h_c) \cdot f_j + 2\lambda \cdot W^{i,id}_{cj}   (24)
At the same time, the derivative of J(Θ_i) with respect to f is obtained:
\frac{\partial J(\Theta_i)}{\partial f_j} = \sum_{c=1}^{N_C} \left[ -label_c (1 - h_c) \cdot W^{i,id}_{cj} \right]   (25)
Derivative of the unified-dimension linear mapping:
Each unified-dimension linear mapping matrix W_i affects two network errors, J(Θ_i) and J(Θ_T); therefore, when W_i is updated with the BP algorithm, its update gradient is formed by combining the derivative of J(Θ_i) with respect to W_i and the derivative of J(Θ_T) with respect to W_i. Meanwhile, during training each network error can be given a weight. In summary, the update gradient of W_i is obtained as in formula (26):
\nabla W_i = w_i \cdot \frac{\partial J(\Theta_i)}{\partial W_i} + w_T \cdot \frac{\partial J(\Theta_T)}{\partial W_i}   (26)
From the chain rule:
\frac{\partial J(\Theta_i)}{\partial W_i} = \frac{\partial J(\Theta_i)}{\partial f_i} \cdot \frac{\partial f_i}{\partial W_i} = \frac{\partial J(\Theta_i)}{\partial f_i} \times fea_i^T   (27)
\frac{\partial J(\Theta_T)}{\partial W_i} = \frac{\partial J(\Theta_T)}{\partial f_T} \cdot \frac{\partial f_T}{\partial feature_{merge}} \cdot \frac{\partial feature_{merge}}{\partial f_i} \cdot \frac{\partial f_i}{\partial W_i} = \left[ W_T^T \times \frac{\partial J(\Theta_T)}{\partial f_T} \right]_{n_f(i-1)+1 : n_f \cdot i} \times fea_i^T   (28)
Therefore:
\nabla W_i = w_i \cdot \frac{\partial J(\Theta_i)}{\partial f_i} \cdot fea_i^T + w_T \cdot \left[ W_T^T \frac{\partial J(\Theta_T)}{\partial f_T} \right]_{n_f(i-1)+1 : n_f \cdot i} fea_i^T   (29)
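A NumPy rendering of formula (29) under the same notation might be the following sketch; the derivatives dJi_df (∂J(Θ_i)/∂f_i) and dJT_dfT (∂J(Θ_T)/∂f_T) are assumed to come from the classifier-layer derivative of formula (25), and the function name grad_W_i is an assumption.

```python
import numpy as np

def grad_W_i(i, fea_i, dJi_df, dJT_dfT, W_T, n_f, w_i, w_T):
    """Update gradient of the i-th unified-dimension mapping matrix, formula (29).
    i is 1-based, matching the slice n_f*(i-1)+1 : n_f*i in the formula."""
    back = W_T.T @ dJT_dfT                        # W_T^T * dJ(Theta_T)/df_T
    seg = back[n_f * (i - 1): n_f * i]            # the slice belonging to level i
    return (w_i * np.outer(dJi_df, fea_i)         # w_i * dJ(Theta_i)/df_i * fea_i^T
            + w_T * np.outer(seg, fea_i))         # w_T * [ ... ] * fea_i^T
```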
Derivative of the linear dimension-reduction mapping of the comprehensive feature layer:
The linear dimension-reduction mapping matrix W_T of the comprehensive feature layer affects only J(Θ_T); from the chain rule:
\nabla W_T = w_T \cdot \frac{\partial J(\Theta_T)}{\partial W_T} = w_T \cdot \frac{\partial J(\Theta_T)}{\partial f_T} \cdot \frac{\partial f_T}{\partial W_T} = w_T \left( \frac{\partial J(\Theta_T)}{\partial f_T} \times feature_{merge}^T \right)   (30)
Meanwhile, the derivative with respect to the input feature vector of the unified-dimension linear mapping of each level can be computed as:
\frac{\partial J(\Theta_i)}{\partial fea_i} = \frac{\partial J(\Theta_i)}{\partial f_i} \frac{\partial f_i}{\partial fea_i}, \quad i = 1, 2, 3, 4, T   (31)
Derivative of the convolutional network parameters:
The only learnable parameters in a convolutional network are the convolution kernels of the convolution operation, so the update gradient of J(Θ_i) with respect to the convolution kernels c of each level must be computed. From the chain rule:
\nabla c_k(i,j) = \frac{\partial J(\theta)}{\partial c_k(i,j)} = \frac{\partial J(\theta)}{\partial M_C} \cdot \frac{\partial M_C}{\partial c_k(i,j)} = \frac{\partial J(\theta)}{\partial M_{C_k}} \cdot \frac{\partial M_{C_k}}{\partial c_k(i,j)} = \mathrm{sum}\!\left( \frac{\partial J(\theta)}{\partial M_{C_k}} \cdot I(s_c-i+1 : M_I+i-s_c,\; s_c-j+1 : N_I+j-s_c) \right)   (32)
where
\frac{\partial J(\theta)}{\partial M_C} = \frac{\partial J(\theta)}{\partial M_A} \cdot \frac{\partial M_A}{\partial M_C} = \frac{\partial J(\theta)}{\partial M_A} \cdot \frac{\partial \mathrm{ReLU}(M_C)}{\partial M_C} = \frac{\partial J(\theta)}{\partial M_A} \cdot (M_C > 0)   (33)
\frac{\partial J(\theta)}{\partial M_A} = \mathrm{upsample}\!\left( \frac{\partial J(\theta)}{\partial M_S} \right) = \mathrm{kron}\!\left( \frac{\partial J(\theta)}{\partial M_S},\; E(s_s, s_s) \right) \cdot \mathrm{location}(M_S)   (34)
E(m,n) = \begin{pmatrix} 1 & \cdots & 1 \\ \vdots & \ddots & \vdots \\ 1 & \cdots & 1 \end{pmatrix}_{m \times n}   (35)
\mathrm{kron}(A,B) = \begin{pmatrix} a_{11}B & a_{12}B & \cdots & a_{1N}B \\ a_{21}B & a_{22}B & \cdots & a_{2N}B \\ \vdots & \vdots & & \vdots \\ a_{M1}B & a_{M2}B & \cdots & a_{MN}B \end{pmatrix}, \quad A \in R^{M \times N}   (36)
location(M_S) denotes the binary matrix marking, within M_A, the positions at which the values of M_S were taken. In addition,
\frac{\partial J(\theta)}{\partial M_S} = \mathrm{reshape}\!\left( \frac{\partial J(\theta)}{\partial fea},\; \mathrm{size}(M_S) \right)   (38)
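The propagation of the error from a sampled feature map back through the max pooling and ReLU of formulas (33)-(36) can be sketched as follows, reusing the hypothetical max_pool helper from the earlier sketch; the function name and the way location(M_S) is realized as an equality mask (which, in the rare case of ties, marks every maximal position in a block) are assumptions of this sketch.

```python
import numpy as np

def backprop_pool_relu(dJ_dMS, M_A, M_C, s_s):
    """Formulas (33)-(36): upsample dJ/dM_S with a Kronecker product of ones,
    keep it only at the max positions of M_A (location), then gate by ReLU."""
    up = np.kron(dJ_dMS, np.ones((s_s, s_s)))              # kron(dJ/dM_S, E(s_s, s_s))
    pooled = np.kron(max_pool(M_A, s_s), np.ones((s_s, s_s)))
    location = (M_A == pooled).astype(float)               # positions where the max was taken
    dJ_dMA = up * location                                 # formula (34)
    dJ_dMC = dJ_dMA * (M_C > 0)                            # ReLU gate, formula (33)
    return dJ_dMC
```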
The above describes the principles and methods of the feature learning process using the multi-level feature-fusion cumulative weighted deep convolutional network; the concrete algorithm is given below, as shown in Table 1.
Table 1: feature learning procedure using the multi-level feature-fusion cumulative weighted deep convolutional network.
The authentication process of the invention can then proceed as follows:
(1) Image preprocessing
The invention performs face detection on the image with a cascade Adaboost face detection algorithm, then locates facial landmarks on the detected face with an SDM-based facial landmark localization algorithm, and rectifies and align-normalizes the face by scaling, rotating and translating the image, finally obtaining a face image of size 100*100 in which the image coordinates of the left eye are (30, 30) and those of the right eye are (30, 70), as shown in Fig. 3.
The invention uses a simple gray-scale normalization as preprocessing, as in formula (1) below, where I(i, j) denotes the gray value of the image at (i, j). The main purpose of gray-scale normalization is to let the network process continuous data and avoid processing large discrete gray values, thereby avoiding abnormal behavior.
I(i,j) = \frac{I(i,j)}{256}   (1)
(2) Feature extraction
The image features are extracted with the trained network.
After the training of the multi-level feature-fusion cumulative weighted deep convolutional network is complete, the trained network can be used to extract the features of the input image, as shown in Table 2.
(3) Feature comparison
(I) Absolute-value-normalized cosine value
The absolute-value-normalized cosine value (cosine normalized by absolute value, cos_AN) proposed by the invention is defined as in formula (39):
\cos_{AN}(x, y) = \cos(\hat{x}, \hat{y})   (39)
where
\hat{x}_i = \frac{x_i}{|x_i| + |y_i|}, \quad \hat{y}_i = \frac{y_i}{|x_i| + |y_i|}   (40)
Experiments show that comparison based on the absolute-value-normalized cosine value is sensitive to the difference in vector norms and can remedy the incomplete difference description caused by cosine similarity ignoring that difference.
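A direct NumPy transcription of formulas (39) and (40) might read as follows; the function name cos_an and the small eps guard against division by zero are assumptions of this sketch.

```python
import numpy as np

def cos_an(x, y, eps=1e-12):
    """Absolute-value-normalized cosine value, formulas (39)-(40)."""
    d = np.abs(x) + np.abs(y) + eps           # element-wise |x_i| + |y_i|
    x_hat, y_hat = x / d, y / d               # formula (40)
    return float(x_hat @ y_hat / (np.linalg.norm(x_hat) * np.linalg.norm(y_hat) + eps))
```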
(II) Multi-difference fusion comparison algorithm based on LDA
The invention combines the cosine similarity of the compared features, the absolute-value-normalized cosine value and the norms of the two feature vectors into a four-dimensional difference vector, i.e.
f_{diff}(f_{T1}, f_{T2}) = [\cos(f_{T1}, f_{T2}),\; \cos_{AN}(f_{T1}, f_{T2}),\; |f_{T1}|,\; |f_{T2}|]^T   (41)
LDA (linear discriminant analysis) is then used to fuse the four-dimensional difference vector into a one-dimensional quantity, i.e. the difference-vector mapping matrix W_LDA maps the four-dimensional difference vector to a one-dimensional value:
\mathrm{sim}(f_{T1}, f_{T2}) = W_{LDA} f_{diff}   (42)
where W_LDA denotes the mapping vector obtained with LDA.
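The difference vector of formula (41), the LDA fusion of formula (42) and the threshold decision of step S1056 could then be sketched as follows, reusing the hypothetical cos_an helper from the previous sketch; W_LDA and the threshold are assumed to come from training.

```python
import numpy as np

def compare_score(f_T1, f_T2, W_LDA):
    """Formulas (41)-(42): build the difference vector and fuse it with the LDA mapping."""
    cos = float(f_T1 @ f_T2 / (np.linalg.norm(f_T1) * np.linalg.norm(f_T2) + 1e-12))
    f_diff = np.array([cos, cos_an(f_T1, f_T2),
                       np.linalg.norm(f_T1), np.linalg.norm(f_T2)])   # formula (41)
    return float(W_LDA @ f_diff)                                      # formula (42)

def authenticate(f_T1, f_T2, W_LDA, threshold):
    """Step S1056: authentication passes if the comparison score exceeds the threshold."""
    return compare_score(f_T1, f_T2, W_LDA) > threshold
```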
The technical solution of the embodiment of the invention brings the following beneficial effects:
The embodiment performs feature learning and feature extraction with the multi-level feature-fusion cumulative weighted deep convolutional network, and then compares the features of the two face images with the LDA-based multi-difference fusion comparison algorithm, which has the following five advantages: first, the invention learns and extracts features automatically with convolutional networks, avoiding the shortcomings of hand-crafted features; second, the vanishing-gradient problem is avoided by joint training with the multi-level classification networks; third, multi-level feature fusion increases the richness of the image features and remedies the defects that ordinary deep networks process the features of each level insufficiently and that high-level features alone are not enough to describe an image fully; fourth, weighting the errors of the multi-level classification networks increases the flexibility of network learning; fifth, the LDA-based multi-difference fusion comparison algorithm solves the problem that cosine similarity portrays feature-vector differences incompletely. Tests on the FERET database achieve authentication rates of 99.9%, 100%, 98.8% and 99.6% on the four subsets Fb, Fc, Dup I and Dup II respectively (at a false accept rate of 0.1%).
The above are preferred embodiments of the invention; it should be pointed out that those skilled in the art can make several improvements and modifications without departing from the principles of the invention, and such improvements and modifications should also be regarded as within the protection scope of the invention.

Claims (10)

1. A face authentication method, characterized by comprising:
extracting feature vectors at multiple levels from a face image to be authenticated and a face image template using a multi-level deep convolutional network jointly trained in advance with a multi-level classification network;
mapping the feature vectors of the multiple levels to unified-dimension feature vectors through unified-dimension linear mapping matrices;
concatenating the unified-dimension feature vectors into a joint feature vector;
reducing the dimension of the joint feature vector through a linear dimension-reduction mapping matrix to obtain a comprehensive feature vector;
comparing and authenticating the obtained comprehensive feature vector of the face image to be authenticated against the comprehensive feature vector of the face image template by linear discriminant analysis using the absolute-value-normalized cosine value.
2. the method for face authentication according to claim 1, it is characterized in that, described use facial image to be certified and facial image template also comprised in advance before the multi-layer degree of depth convolutional network of multistratum classification network association training extracts the proper vector of multiple level successively:
Carry out pre-service to facial image to be certified and facial image template, described pre-service comprises positioning feature point, image rectification and normalized.
3. the method for face authentication according to claim 1, is characterized in that, the proper vector of each level calculates as follows:
Use convolution kernel to carry out convolution operation to facial image to be certified and facial image template, obtain convolution characteristic pattern, described convolution operation is same convolution operation;
Use activation function to carry out activation manipulation to described convolution characteristic pattern, obtain activating characteristic pattern, described activation function is ReLU activation function;
Use sampling function to carry out down-sampling operation to described activation characteristic pattern, obtain characteristic pattern of sampling, described down-sampling is operating as maximal value sampling;
Above-mentioned steps is repeated to the sampling characteristic pattern obtained, obtains new sampling characteristic pattern, and so repeat several times;
The all sampling characteristic patterns obtained are carried out vectorization, obtains the proper vector of each level.
4., according to the method for described face authentication arbitrary in claim 1-3, it is characterized in that, described multi-layer degree of depth convolutional network is obtained by the joint training of softmax sorter network, and training step comprises:
Initialized multi-layer degree of depth convolutional network is used to extract the proper vector of multiple level successively to facial image sample;
The proper vector of multiple level is mapped as successively the unified dimensional proper vector of same dimension by unified dimensional linear mapping matrix;
In softmax sorter network, use linear mapping matrix to map unified dimensional proper vector respectively, obtain mapping vector;
Use softmax function mapping directive amount to activate, obtain network output valve vector;
With the label data of network output valve vector sum facial image sample for input quantity, calculate network error by cross entropy loss function;
Each unified dimensional proper vector is connected into a union feature vector;
Described union feature vector is carried out dimensionality reduction mapping by linear dimensionality reduction mapping matrix and obtains multi-feature vector;
For described network error assigns weight, and calculate the renewal gradient of described linear mapping matrix, unified dimensional linear mapping matrix, linear dimensionality reduction mapping matrix and convolution kernel;
The renewal gradient of described linear mapping matrix, unified dimensional linear mapping matrix, linearly dimensionality reduction mapping matrix and convolution kernel is utilized to carry out iteration renewal to described linear mapping matrix, unified dimensional linear mapping matrix, linear dimensionality reduction mapping matrix and convolution kernel;
Judge whether network error and iterations meet the requirements, and if so, terminate, otherwise, go to and describedly use initialized multi-layer degree of depth convolutional network to extract the proper vector of multiple level successively to facial image sample.
5. The face authentication method according to any one of claims 1 to 3, characterized in that comparing and authenticating the obtained comprehensive feature vector of the face image to be authenticated against the comprehensive feature vector of the face image template through linear discriminant analysis, using absolute-value normalized cosine values, comprises:
Taking the obtained comprehensive feature vector of the face image to be authenticated and the comprehensive feature vector of the face image template as inputs, performing a cosine similarity operation to obtain a cosine similarity;
Taking the two comprehensive feature vectors as inputs, performing an absolute-value normalized cosine operation to obtain an absolute-value normalized cosine value;
Computing the moduli of the two comprehensive feature vectors to obtain a first modulus and a second modulus;
Forming the cosine similarity, the absolute-value normalized cosine value, the first modulus and the second modulus into a four-dimensional difference vector;
Mapping the difference vector with a difference-vector mapping matrix to obtain a one-dimensional value as the comparison score;
Comparing the comparison score with a comparison threshold; if the comparison score is greater than the comparison threshold, the face authentication passes.
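A minimal sketch of this comparison step follows. The exact form of the absolute-value normalized cosine is not spelled out at this point in the text, so the formula used here (element-wise absolute values, normalized by the two moduli) is only one plausible reading, and the 4-element mapping vector and threshold are placeholders rather than a trained linear-discriminant projection:

    import numpy as np

    def compare_score(f1: np.ndarray, f2: np.ndarray, w: np.ndarray) -> float:
        """f1, f2: comprehensive feature vectors; w: 4-element difference-vector mapping."""
        n1, n2 = float(np.linalg.norm(f1)), float(np.linalg.norm(f2))   # first and second modulus
        cos_sim = float(f1 @ f2) / (n1 * n2)                            # cosine similarity
        abs_cos = float(np.abs(f1) @ np.abs(f2)) / (n1 * n2)            # assumed abs-normalized cosine
        diff = np.array([cos_sim, abs_cos, n1, n2])                     # 4-D difference vector
        return float(w @ diff)                                          # map to a 1-D comparison score

    w = np.array([0.6, 0.3, 0.05, 0.05])               # placeholder mapping vector (would be learned)
    f1, f2 = np.random.randn(160), np.random.randn(160)
    authenticated = compare_score(f1, f2, w) > 0.5     # placeholder comparison threshold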
6. A face authentication device, characterized in that it comprises:
A first extraction module, configured to extract feature vectors of multiple levels in turn from the face image to be authenticated and the face image template with a multi-level deep convolutional network jointly trained in advance with multi-level classification networks;
A first mapping module, configured to map the feature vectors of the multiple levels in turn into unified-dimension feature vectors through a unified-dimension linear mapping matrix;
A first concatenation module, configured to concatenate the unified-dimension feature vectors into a joint feature vector;
A second mapping module, configured to perform dimension-reduction mapping on the joint feature vector through a linear dimension-reduction mapping matrix to obtain a comprehensive feature vector;
A first comparison module, configured to compare and authenticate the obtained comprehensive feature vector of the face image to be authenticated against the comprehensive feature vector of the face image template through linear discriminant analysis, using absolute-value normalized cosine values.
7. The face authentication device according to claim 6, characterized in that it further comprises, upstream of the first extraction module:
A preprocessing module, configured to preprocess the face image to be authenticated and the face image template, the preprocessing comprising feature point positioning, image rectification and normalization.
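Purely as an illustration of what such a preprocessing module could do (not the patent's algorithm; the crop ratio, template size and the use of eye landmarks are assumptions, and the feature point positioning itself is taken as given):

    import cv2
    import numpy as np

    def preprocess(gray: np.ndarray, left_eye, right_eye, size: int = 128) -> np.ndarray:
        lx, ly = left_eye
        rx, ry = right_eye
        # Image rectification: rotate about the eye midpoint so the eye line is horizontal.
        angle = float(np.degrees(np.arctan2(ry - ly, rx - lx)))
        center = ((lx + rx) / 2.0, (ly + ry) / 2.0)
        M = cv2.getRotationMatrix2D(center, angle, 1.0)
        rectified = cv2.warpAffine(gray, M, (gray.shape[1], gray.shape[0]))
        # Crop a region around the eye midpoint and resize to a fixed template size.
        half = int(1.2 * np.hypot(rx - lx, ry - ly))
        x0, y0 = max(int(center[0]) - half, 0), max(int(center[1]) - half // 2, 0)
        crop = cv2.resize(rectified[y0:y0 + 2 * half, x0:x0 + 2 * half], (size, size))
        # Normalization: zero mean, unit variance.
        crop = crop.astype(np.float32)
        return (crop - crop.mean()) / (crop.std() + 1e-6)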
8. The face authentication device according to claim 6, characterized in that the feature vector of each level is computed by the following units:
A convolution unit, configured to perform a convolution operation on the face image to be authenticated and the face image template with a convolution kernel to obtain a convolution feature map, the convolution operation being a "same" convolution;
An activation unit, configured to apply an activation function to the convolution feature map to obtain an activation feature map, the activation function being a ReLU activation function;
A sampling unit, configured to perform a down-sampling operation on the activation feature map with a sampling function to obtain a sampled feature map, the down-sampling operation being maximum-value sampling;
A loop unit, configured to repeat the above steps on the sampled feature map thus obtained to get a new sampled feature map, repeating this several times;
A first vectorization unit, configured to vectorize all of the sampled feature maps obtained to get the feature vector of each level.
9. The face authentication device according to any one of claims 6 to 8, characterized in that the multi-level deep convolutional network is obtained through joint training with softmax classification networks, comprising:
A second extraction module, configured to extract feature vectors of multiple levels in turn from face image samples with an initialized multi-level deep convolutional network;
A third mapping module, configured to map the feature vectors of the multiple levels in turn into unified-dimension feature vectors of the same dimension through a unified-dimension linear mapping matrix;
A fourth mapping module, configured to map each unified-dimension feature vector with a linear mapping matrix in the softmax classification network to obtain mapping vectors;
An activation module, configured to activate the mapping vectors with a softmax function to obtain network output value vectors;
A first computation module, configured to take the network output value vectors and the label data of the face image samples as inputs and compute the network errors with a cross-entropy loss function;
A second concatenation module, configured to concatenate the unified-dimension feature vectors into a joint feature vector;
A fifth mapping module, configured to perform dimension-reduction mapping on the joint feature vector through a linear dimension-reduction mapping matrix to obtain a comprehensive feature vector;
A second computation module, configured to assign weights to the network errors and compute the update gradients of the linear mapping matrix, the unified-dimension linear mapping matrix, the linear dimension-reduction mapping matrix and the convolution kernels;
An update module, configured to iteratively update the linear mapping matrix, the unified-dimension linear mapping matrix, the linear dimension-reduction mapping matrix and the convolution kernels with their respective update gradients;
A judgment module, configured to judge whether the network error and the number of iterations meet the requirements; if so, end the training; otherwise, return to the second extraction module.
10. The face authentication device according to any one of claims 6 to 8, characterized in that the first comparison module comprises:
A first computation unit, configured to take the obtained comprehensive feature vector of the face image to be authenticated and the comprehensive feature vector of the face image template as inputs and perform a cosine similarity operation to obtain a cosine similarity;
A second computation unit, configured to take the two comprehensive feature vectors as inputs and perform an absolute-value normalized cosine operation to obtain an absolute-value normalized cosine value;
A third computation unit, configured to compute the moduli of the two comprehensive feature vectors to obtain a first modulus and a second modulus;
A second vector unit, configured to form the cosine similarity, the absolute-value normalized cosine value, the first modulus and the second modulus into a four-dimensional difference vector;
A mapping unit, configured to map the difference vector with a difference-vector mapping matrix to obtain a one-dimensional value as the comparison score;
A comparison unit, configured to compare the comparison score with a comparison threshold; if the comparison score is greater than the comparison threshold, the face authentication passes.
CN201510490244.7A 2015-08-11 2015-08-11 The method and apparatus of face authentication Active CN105138973B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201510490244.7A CN105138973B (en) 2015-08-11 2015-08-11 The method and apparatus of face authentication

Publications (2)

Publication Number Publication Date
CN105138973A true CN105138973A (en) 2015-12-09
CN105138973B CN105138973B (en) 2018-11-09

Family

ID=54724317

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201510490244.7A Active CN105138973B (en) 2015-08-11 2015-08-11 The method and apparatus of face authentication

Country Status (1)

Country Link
CN (1) CN105138973B (en)

Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20080144941A1 (en) * 2006-12-18 2008-06-19 Sony Corporation Face recognition apparatus, face recognition method, gabor filter application apparatus, and computer program
CN101079103A (en) * 2007-06-14 2007-11-28 上海交通大学 Human face posture identification method based on sparse Bayesian regression
WO2015101080A1 (en) * 2013-12-31 2015-07-09 北京天诚盛业科技有限公司 Face authentication method and device
CN104200224A (en) * 2014-08-28 2014-12-10 西北工业大学 Valueless image removing method based on deep convolutional neural networks
CN104268524A (en) * 2014-09-24 2015-01-07 朱毅 Convolutional neural network image recognition method based on dynamic adjustment of training targets
CN104616032A (en) * 2015-01-30 2015-05-13 浙江工商大学 Multi-camera system target matching method based on deep-convolution neural network

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
许可 (Xu Ke): "Research on the Application of Convolutional Neural Networks in Image Recognition", China Master's Theses Full-text Database, Information Science and Technology series *

Cited By (36)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105740808A (en) * 2016-01-28 2016-07-06 北京旷视科技有限公司 Human face identification method and device
CN105740808B (en) * 2016-01-28 2019-08-09 北京旷视科技有限公司 Face identification method and device
US11244191B2 (en) 2016-02-17 2022-02-08 Intel Corporation Region proposal for image regions that include objects of interest using feature maps from multiple layers of a convolutional neural network model
CN108475331B (en) * 2016-02-17 2022-04-05 英特尔公司 Method, apparatus, system and computer readable medium for object detection
CN108475331A (en) * 2016-02-17 2018-08-31 英特尔公司 Use the candidate region for the image-region for including interested object of multiple layers of the characteristic spectrum from convolutional neural networks model
CN106022215A (en) * 2016-05-05 2016-10-12 北京海鑫科金高科技股份有限公司 Face feature point positioning method and device
CN106022215B (en) * 2016-05-05 2019-05-03 北京海鑫科金高科技股份有限公司 Man face characteristic point positioning method and device
CN106067096A (en) * 2016-06-24 2016-11-02 北京邮电大学 A kind of data processing method, Apparatus and system
CN106067096B (en) * 2016-06-24 2019-09-17 北京邮电大学 A kind of data processing method, apparatus and system
CN106407982B (en) * 2016-09-23 2019-05-14 厦门中控智慧信息技术有限公司 A kind of data processing method and equipment
CN107871101A (en) * 2016-09-23 2018-04-03 北京眼神科技有限公司 A kind of method for detecting human face and device
CN106407982A (en) * 2016-09-23 2017-02-15 厦门中控生物识别信息技术有限公司 Data processing method and equipment
CN106503669B (en) * 2016-11-02 2019-12-10 重庆中科云丛科技有限公司 Training and recognition method and system based on multitask deep learning network
CN106503669A (en) * 2016-11-02 2017-03-15 重庆中科云丛科技有限公司 A kind of based on the training of multitask deep learning network, recognition methods and system
CN107066934A (en) * 2017-01-23 2017-08-18 华东交通大学 Tumor stomach cell image recognition decision maker, method and tumor stomach section identification decision equipment
CN106960185A (en) * 2017-03-10 2017-07-18 陕西师范大学 The Pose-varied face recognition method of linear discriminant depth belief network
CN106960185B (en) * 2017-03-10 2019-10-25 陕西师范大学 The Pose-varied face recognition method of linear discriminant deepness belief network
CN106934373A (en) * 2017-03-14 2017-07-07 重庆文理学院 A kind of library book damages assessment method and system
CN108628868A (en) * 2017-03-16 2018-10-09 北京京东尚科信息技术有限公司 File classification method and device
CN107133220A (en) * 2017-06-07 2017-09-05 东南大学 Name entity recognition method in a kind of Geography field
CN107133220B (en) * 2017-06-07 2020-11-24 东南大学 Geographic science field named entity identification method
CN107622282A (en) * 2017-09-21 2018-01-23 百度在线网络技术(北京)有限公司 Image verification method and apparatus
CN108764207B (en) * 2018-06-07 2021-10-19 厦门大学 Face expression recognition method based on multitask convolutional neural network
CN108764207A (en) * 2018-06-07 2018-11-06 厦门大学 A kind of facial expression recognizing method based on multitask convolutional neural networks
TWI689285B (en) * 2018-11-15 2020-04-01 國立雲林科技大學 Facial symmetry detection method and system thereof
US10846518B2 (en) 2018-11-28 2020-11-24 National Yunlin University Of Science And Technology Facial stroking detection method and system thereof
CN109886335A (en) * 2019-02-21 2019-06-14 厦门美图之家科技有限公司 Disaggregated model training method and device
CN109885578A (en) * 2019-03-12 2019-06-14 西北工业大学 Data processing method, device, equipment and storage medium
CN109885578B (en) * 2019-03-12 2021-08-13 西北工业大学 Data processing method, device, equipment and storage medium
TWI727548B (en) * 2019-03-22 2021-05-11 大陸商北京市商湯科技開發有限公司 Method for face recognition and device thereof
CN110793525A (en) * 2019-11-12 2020-02-14 深圳创维数字技术有限公司 Vehicle positioning method, apparatus and computer-readable storage medium
CN111626889A (en) * 2020-06-02 2020-09-04 小红书科技有限公司 Method and device for predicting categories corresponding to social content
CN113158908A (en) * 2021-04-25 2021-07-23 北京华捷艾米科技有限公司 Face recognition method and device, storage medium and electronic equipment
WO2022263452A1 (en) 2021-06-15 2022-12-22 Trinamix Gmbh Method for authenticating a user of a mobile device
CN114359034A (en) * 2021-12-24 2022-04-15 北京航空航天大学 Method and system for generating face picture based on hand drawing
CN114359034B (en) * 2021-12-24 2023-08-08 北京航空航天大学 Face picture generation method and system based on hand drawing

Also Published As

Publication number Publication date
CN105138973B (en) 2018-11-09

Similar Documents

Publication Publication Date Title
CN105138973A (en) Face authentication method and device
CN110490946B (en) Text image generation method based on cross-modal similarity and antagonism network generation
CN110321967B (en) Image classification improvement method based on convolutional neural network
CN105447473A (en) PCANet-CNN-based arbitrary attitude facial expression recognition method
CN111414862B (en) Expression recognition method based on neural network fusion key point angle change
CN104866810A (en) Face recognition method of deep convolutional neural network
CN105844669A (en) Video target real-time tracking method based on partial Hash features
CN104463209A (en) Method for recognizing digital code on PCB based on BP neural network
CN105760821A (en) Classification and aggregation sparse representation face identification method based on nuclear space
CN107657204A (en) The construction method and facial expression recognizing method and system of deep layer network model
CN105184298A (en) Image classification method through fast and locality-constrained low-rank coding process
CN111339988A (en) Video face recognition method based on dynamic interval loss function and probability characteristic
CN115294407A (en) Model compression method and system based on preview mechanism knowledge distillation
Banerjee et al. A new wrapper feature selection method for language-invariant offline signature verification
CN104463194A (en) Driver-vehicle classification method and device
CN105740908A (en) Classifier design method based on kernel space self-explanatory sparse representation
CN108052959A (en) A kind of method for improving deep learning picture recognition algorithm robustness
CN105631478A (en) Plant classification method based on sparse expression dictionary learning
CN105354532A (en) Hand motion frame data based gesture identification method
CN114398976A (en) Machine reading understanding method based on BERT and gate control type attention enhancement network
Huang et al. Design and Application of Face Recognition Algorithm Based on Improved Backpropagation Neural Network.
CN108520201A (en) A kind of robust human face recognition methods returned based on weighted blend norm
CN111914553A (en) Financial information negative subject judgment method based on machine learning
Ye et al. A joint-training two-stage method for remote sensing image captioning
Wang Research on handwritten note recognition in digital music classroom based on deep learning

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant
CP01 Change in the name or title of a patent holder

Address after: Floor 1, Building 8, Yard 1, 10th Street, Haidian District, Beijing 100085

Patentee after: Beijing Eyes Intelligent Technology Co.,Ltd.

Address before: Floor 1, Building 8, Yard 1, 10th Street, Haidian District, Beijing 100085

Patentee before: BEIJING TECHSHINO TECHNOLOGY Co.,Ltd.

TR01 Transfer of patent right

Effective date of registration: 20220401

Address after: Beijing-Tianjin Talent Home (Xincheng Community), West District, Xiongxian Economic Development Zone, Baoding City, Hebei Province, 071800

Patentee after: BEIJING EYECOOL TECHNOLOGY Co.,Ltd.

Patentee after: Beijing Eyes Intelligent Technology Co.,Ltd.

Address before: Floor 1, Building 8, Yard 1, 10th Street, Haidian District, Beijing 100085

Patentee before: Beijing Eyes Intelligent Technology Co.,Ltd.

PE01 Entry into force of the registration of the contract for pledge of patent right

Denomination of invention: Method and device for face authentication

Effective date of registration: 20220614

Granted publication date: 20181109

Pledgee: China Construction Bank Corporation Xiongxian sub branch

Pledgor: BEIJING EYECOOL TECHNOLOGY Co.,Ltd.

Registration number: Y2022990000332