Detailed description of embodiments
To make the technical problem to be solved by the present invention, the technical solution, and the advantages clearer, the invention is described in detail below in conjunction with the accompanying drawings and specific embodiments.
In one aspect, the present invention provides a method of face authentication, as shown in Figure 1, comprising:
Step S101: The multi-level feature vectors of the facial image to be authenticated and of the facial image template are extracted in turn using a multi-level deep convolutional network jointly trained in advance with a multi-level classification network;
The multi-level deep convolutional network comprises two or more convolutional networks. Each convolutional network includes convolution, activation, and down-sampling operations; the order and number of these operations are not fixed and are determined according to the actual situation. Each convolutional network of the present invention extracts one feature vector, which may be denoted fea_1, fea_2, fea_3, … (only one group of multi-level feature vectors is listed here, i.e. the multi-level feature vectors of either the facial image to be authenticated or the facial image template; the formulas below are likewise written for a single image). The input of the first convolutional network is the facial image to be authenticated or the facial image template; the input of each subsequent convolutional network is the feature map output by the previous convolutional network.
A general deep network suffers from the gradient-vanishing problem. The multi-level deep convolutional network of the present invention is obtained by joint training with the multi-level classification network, which avoids this problem.
Step S102: The multi-level feature vectors are each mapped to unified-dimension feature vectors through unified-dimension linear mapping matrices. The unified-dimension linear mapping matrices are obtained by training in advance and may be denoted W_1, W_2, W_3, …; the unified-dimension feature vectors may be denoted f_1, f_2, f_3, ….
Step S103: The unified-dimension feature vectors are concatenated into a joint feature vector, which may be denoted feature_merge.
Step S104: The joint feature vector is reduced in dimension through a linear dimension-reduction mapping matrix to obtain a comprehensive feature vector. The linear dimension-reduction mapping matrix is obtained by training in advance and may be denoted W_T; the comprehensive feature vector may be denoted f_T.
Step S105: By linear discriminant analysis, using the absolute-value-normalized cosine value, the comprehensive feature vector of the facial image to be authenticated is compared with the comprehensive feature vector of the facial image template for authentication.
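Steps S102 to S104 can be sketched as follows. This Python fragment uses random stand-in matrices in place of the pre-trained W_1, W_2, W_3 and W_T; all dimensions, names, and values are illustrative, not those of the trained invention.

```python
import math
import random

random.seed(0)

def matvec(W, x):
    """Multiply matrix W (a list of rows) by vector x."""
    return [sum(w * v for w, v in zip(row, x)) for row in W]

def norm(v):
    return math.sqrt(sum(x * x for x in v))

def rand_matrix(rows, cols):
    return [[random.uniform(-1, 1) for _ in range(cols)] for _ in range(rows)]

# Illustrative sizes: three levels with dims 8, 6, 4; unified dim 5; f_T dim 3.
level_dims, n_f, n_T = [8, 6, 4], 5, 3

# Random stand-ins for the pre-trained matrices W_1, W_2, W_3 and W_T.
W_unified = [rand_matrix(n_f, d) for d in level_dims]
W_T = rand_matrix(n_T, n_f * len(level_dims))

def comprehensive_feature(level_features):
    """Steps S102-S104: unify dimensions, concatenate, reduce."""
    unified = [matvec(W, fea) for W, fea in zip(W_unified, level_features)]
    feature_merge = [v for f in unified for v in f]   # S103: joint feature vector
    return matvec(W_T, feature_merge)                 # S104: f_T

# Random stand-ins for the multi-level features of the image and the template
# (in the invention these come from the convolutional networks, step S101).
fea_img = [[random.uniform(0, 1) for _ in range(d)] for d in level_dims]
fea_tpl = [[random.uniform(0, 1) for _ in range(d)] for d in level_dims]

f_img = comprehensive_feature(fea_img)
f_tpl = comprehensive_feature(fea_tpl)
# S105 compares f_img and f_tpl; cosine similarity is one of the cues it uses.
cos = sum(a * b for a, b in zip(f_img, f_tpl)) / (norm(f_img) * norm(f_tpl))
print(len(f_img), -1.0 <= cos <= 1.0)
```

The same `comprehensive_feature` path is applied to both images, so only one set of matrices needs to be stored once training is finished.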
In the method for the face authentication of the present invention, first using the multi-layer for first passing through the training of multistratum classification network association in advance
Depth convolutional network extracts the feature vector of multiple levels of facial image and facial image template to be certified, then by multiple layers
The feature vector of grade passes sequentially through unified dimensional Linear Mapping matrix and is mapped as unified dimensional feature vector, then unified dimensional is special
Sign vector is connected into union feature vector, and union feature vector is carried out dimensionality reduction by linear dimensionality reduction mapping matrix and maps to obtain
Multi-feature vector normalizes cosine value, to obtained face figure to be certified finally by linear discriminant analysis using absolute value
Certification is compared in the multi-feature vector of picture and the multi-feature vector of facial image template.
Compared with the prior art, in which a single feature vector is designed by hand, the present invention learns and extracts features automatically through the multi-level deep convolutional network; it is therefore more robust to interference, more scalable, and achieves higher authentication accuracy.
The multi-level deep convolutional network of the present invention is obtained by joint training with the multi-level classification network, which avoids the gradient-vanishing problem and yields high authentication accuracy.
Moreover, fusing the multi-level feature vectors increases the richness of the image features and compensates for the defects of a general deep network, which processes each level's features insufficiently and describes the image inadequately when using only high-level features; this further improves authentication accuracy.
The inventor also found that traditional comparison-based authentication methods, especially the cosine-similarity method, ignore differences in vector norm, so the description of the difference is incomplete, which reduces the accuracy of comparison authentication. The present invention uses linear discriminant analysis to compare multiple difference features, including the absolute-value-normalized cosine value, which further improves authentication accuracy.
Therefore, the face authentication method of the present invention is robust to interference, scalable, and highly accurate; it avoids the gradient-vanishing problem and compensates for the inadequacy of describing an image with high-level features alone.
As an improvement of the face authentication method of the present invention, the following step is further included before step S101:
Step S100: The facial image to be authenticated and the facial image template are preprocessed; the preprocessing includes feature-point localization, image rectification, and normalization. In practice, the facial image template may already have been preprocessed, in which case this step can be omitted.
The present invention performs face detection on the image using a face detection algorithm based on cascaded Adaboost, then performs feature-point localization on the detected face using a facial feature-point localization algorithm based on SDM, and rectifies and normalizes the face by scaling, rotating, and translating the image into alignment, as shown in Figure 3.
The present invention preprocesses with a simple gray-scale normalization, whose main purpose is to give the network continuous data that is easy to process and to avoid handling large discrete gray values, thereby avoiding abnormal conditions.
Preprocessing the facial image in the present invention facilitates the subsequent authentication process and avoids the influence of abnormal pixels on the authentication result.
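The gray-scale normalization above can be sketched as follows. The patent does not give a closed-form expression, so this Python fragment uses plain division by 255 as one common choice; the function name and value range are assumptions for illustration.

```python
def gray_normalize(pixels, lo=0.0, hi=1.0):
    """Map integer gray values in [0, 255] to a continuous range [lo, hi].

    A simple min-max style normalization; the exact formula used by the
    invention is not specified, so dividing by 255 is one plausible choice.
    """
    return [lo + (hi - lo) * p / 255.0 for p in pixels]

row = [0, 128, 255]
print(gray_normalize(row))  # 0.0, ~0.502, 1.0
```

Applying this per pixel keeps the inputs to the first convolutional network in a bounded, continuous range, which matches the stated purpose of avoiding large discrete gray values.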
As another improvement of the face authentication method of the present invention, each convolutional network includes a convolution operation, an activation operation, and a down-sampling operation, and the feature vector of each level is calculated as follows:
Step S1011: A convolution operation is performed on the facial image to be authenticated and the facial image template using convolution kernels to obtain convolution feature maps; the convolution operation is a "same" convolution;
The present invention uses "same" convolution, zero-padding the input image during the operation, so that the feature map obtained by the "same" convolution has the same size as the input image.
Step S1012: The convolution feature maps are activated using an activation function to obtain activation feature maps; the activation function is the ReLU activation function.
Step S1013: A down-sampling operation is performed on the activation feature maps using a sampling function to obtain sampled feature maps; the down-sampling operation is max sampling;
The present invention uses max sampling, which takes the maximum element value within a sampling block as the feature of that block. In image processing, max sampling extracts the texture information of the image and maintains, to a certain extent, invariance to rotation, translation, and scaling. In addition, statistical experiments show that, compared with average sampling, max sampling is insensitive to changes in the data distribution, so feature extraction is relatively stable.
Step S1014: The above steps are repeated on the sampled feature maps obtained, producing new sampled feature maps, and so on for several iterations;
Step S1015: All sampled feature maps obtained are vectorized to obtain the feature vector of each level; all sampled feature maps produced at each stage form one vector.
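One pass of steps S1011 to S1015 can be sketched as follows. This Python fragment (image size, kernel, and block size are illustrative) applies a zero-padded "same" convolution, ReLU, 2×2 max sampling, and vectorization to a small test image:

```python
def same_conv(img, kernel):
    """'Same' convolution: zero-pad so the output matches the input size."""
    n, s = len(img), len(kernel)
    pad = s // 2
    out = [[0.0] * n for _ in range(n)]
    for m in range(n):
        for q in range(n):
            acc = 0.0
            for i in range(s):
                for j in range(s):
                    r, c = m + i - pad, q + j - pad
                    if 0 <= r < n and 0 <= c < n:   # zero padding outside
                        acc += kernel[i][j] * img[r][c]
            out[m][q] = acc
    return out

def relu_map(fm):
    """Step S1012: element-wise ReLU activation."""
    return [[max(0.0, v) for v in row] for row in fm]

def max_pool(fm, ss=2):
    """Step S1013: max sampling over non-overlapping ss x ss blocks."""
    n = len(fm)
    return [[max(fm[r][c]
                 for r in range(m * ss, m * ss + ss)
                 for c in range(q * ss, q * ss + ss))
             for q in range(n // ss)] for m in range(n // ss)]

def vectorize(fms):
    """Step S1015: stack all feature maps into one vector."""
    return [v for fm in fms for row in fm for v in row]

img = [[float((r + c) % 5) for c in range(4)] for r in range(4)]
k = [[0.0, 0.0, 0.0], [0.0, 1.0, 0.0], [0.0, 0.0, 0.0]]  # identity kernel
fm = max_pool(relu_map(same_conv(img, k)))
fea = vectorize([fm])
print(len(same_conv(img, k)), len(fm), len(fea))  # 4 2 4
```

With the identity kernel the convolution leaves the image unchanged, which makes it easy to see that "same" convolution preserves the 4×4 size while the 2×2 max sampling halves each side.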
The present invention can thereby extract rich and stable feature vectors that adequately describe the facial image, increasing authentication accuracy.
As a further improvement of the face authentication method of the present invention, the multi-level deep convolutional network is obtained by joint training with softmax classification networks, comprising:
During training, a facial image sample library is first required; the multi-level feature vectors of each facial image sample are then extracted in turn using the initialized multi-level deep convolutional network. This is the same as step S101 above, except that the above is the authentication process while here it is the training process, and the parameters in the multi-level deep convolutional network take initial values at this point;
The multi-level feature vectors are mapped through the unified-dimension linear mapping matrices to unified-dimension feature vectors of the same dimension;
The unified-dimension feature vectors are each mapped using the linear mapping matrices in the softmax classification networks to obtain mapped vectors; the linear mapping matrices take initial values at this point;
The mapped vectors are activated using the softmax function to obtain network output value vectors;
With the network output value vectors and the label data of the facial image samples as inputs, the network error is calculated through a cross-entropy loss function;
The unified-dimension feature vectors are concatenated into one joint feature vector;
The joint feature vector is reduced in dimension through the linear dimension-reduction mapping matrix to obtain the comprehensive feature vector;
Weights are assigned to the network errors, and the update gradients of the linear mapping matrices, the unified-dimension linear mapping matrices, the linear dimension-reduction mapping matrix, and the convolution kernels are calculated;
The linear mapping matrices, unified-dimension linear mapping matrices, linear dimension-reduction mapping matrix, and convolution kernels are iteratively updated using their update gradients;
It is judged whether the network error and the number of iterations meet the requirements; if so, training terminates; otherwise, the process returns to the step of extracting the multi-level feature vectors of the facial image samples using the initialized multi-level deep convolutional network.
The network error meeting the requirements means that the network error value is minimal (or sufficiently small); the parameters of the multi-level deep convolutional network and the softmax classification networks at that point (the linear mapping matrices, unified-dimension linear mapping matrices, linear dimension-reduction mapping matrix, and convolution kernels) constitute the trained multi-level deep convolutional network and softmax classification networks. The number of iterations meeting the requirements means that it has reached a set value.
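The shape of the training loop above can be sketched on a toy problem. The following Python fragment is a hand-rolled softmax classifier on made-up 2-D "unified features", not the invention's full multi-level network; it shows one classifier branch's iteration: forward pass, cross-entropy error, softmax gradient, parameter update, and the termination test on error and iteration count.

```python
import math

def softmax(o):
    m = max(o)                        # shift for numerical stability
    e = [math.exp(v - m) for v in o]
    s = sum(e)
    return [v / s for v in e]

# Made-up 2-D features with two classes, standing in for the facial image
# samples; W_id is the classifier's linear mapping matrix at its initial value.
samples = [([1.0, 0.0], 0), ([0.9, 0.1], 0), ([0.0, 1.0], 1), ([0.1, 0.9], 1)]
W_id = [[0.0, 0.0], [0.0, 0.0]]
lr, max_iters, target_err = 0.5, 200, 0.05   # illustrative hyperparameters

for it in range(max_iters):
    err = 0.0
    grad = [[0.0, 0.0], [0.0, 0.0]]
    for f, label in samples:
        o = [sum(w * v for w, v in zip(row, f)) for row in W_id]
        h = softmax(o)
        err -= math.log(h[label])             # cross-entropy loss
        for i in range(2):                    # dJ/do = h - label for softmax + CE
            delta = h[i] - (1.0 if i == label else 0.0)
            for j in range(2):
                grad[i][j] += delta * f[j]
    err /= len(samples)
    if err < target_err:                      # "network error meets the requirements"
        break
    for i in range(2):
        for j in range(2):
            W_id[i][j] -= lr * grad[i][j] / len(samples)

print(it, round(err, 4))
```

In the invention the same loop additionally back-propagates the weighted errors from every level's classification network into the shared convolution kernels and mapping matrices.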
The present invention performs joint training through the softmax classification networks, which further avoids the gradient-vanishing problem, and the flexibility of network learning can be further increased by weighting the classification network errors.
As a further improvement of the face authentication method of the present invention, step S105 includes:
Step S1051: With the comprehensive feature vector of the facial image to be authenticated and the comprehensive feature vector of the facial image template as inputs, a cosine-similarity operation is performed to obtain the cosine similarity;
Step S1052: With the comprehensive feature vector of the facial image to be authenticated and the comprehensive feature vector of the facial image template as inputs, an absolute-value-normalized cosine operation is performed to obtain the absolute-value-normalized cosine value;
Step S1053: A norm operation is performed on the comprehensive feature vector of the facial image to be authenticated and on that of the facial image template to obtain a first norm and a second norm;
Step S1054: The cosine similarity, the absolute-value-normalized cosine value, the first norm, and the second norm are formed into a four-dimensional difference vector;
Step S1055: The difference vector is mapped using a difference-vector mapping matrix to obtain a one-dimensional value, which serves as the comparison score;
Step S1056: The comparison score is compared with a comparison threshold; if the comparison score exceeds the comparison threshold, face authentication passes.
The inventor found that traditional comparison-based authentication methods, especially the cosine-similarity method, ignore differences in vector norm, so the description of the difference is incomplete, which reduces the accuracy of comparison authentication. The absolute-value-normalized cosine value is sensitive to differences in vector norm and can compensate for the incomplete description caused by the cosine similarity ignoring norm differences.
Therefore, the present invention combines the cosine similarity, the absolute-value-normalized cosine value, and the two feature norms of the compared features into a four-dimensional difference vector and performs linear discriminant analysis on it, further improving authentication accuracy.
In another aspect, the present invention provides a device for face authentication, as shown in Figure 2, comprising:
a first extraction module 11, for extracting in turn the multi-level feature vectors of the facial image to be authenticated and of the facial image template using the multi-level deep convolutional network jointly trained in advance with the multi-level classification network;
a first mapping module 12, for mapping the multi-level feature vectors through the unified-dimension linear mapping matrices to unified-dimension feature vectors;
a first concatenation module 13, for concatenating the unified-dimension feature vectors into a joint feature vector;
a second mapping module 14, for reducing the dimension of the joint feature vector through the linear dimension-reduction mapping matrix to obtain the comprehensive feature vector;
a first comparison module 15, for comparing, by linear discriminant analysis using the absolute-value-normalized cosine value, the comprehensive feature vector of the facial image to be authenticated with the comprehensive feature vector of the facial image template for authentication.
The face authentication device of the present invention is robust to interference, scalable, and highly accurate; it avoids the gradient-vanishing problem and compensates for the inadequacy of describing an image with high-level features alone.
As an improvement of the face authentication device of the present invention, the following module is further included before the first extraction module:
a preprocessing module, for preprocessing the facial image to be authenticated and the facial image template, the preprocessing including feature-point localization, image rectification, and normalization.
Preprocessing the facial image in the present invention facilitates the subsequent authentication process and avoids the influence of abnormal pixels on the authentication result.
As another improvement of the face authentication device of the present invention, the feature vector of each level is calculated by the following units:
Convolution unit is obtained for carrying out convolution operation to facial image to be certified and facial image template using convolution kernel
To convolution characteristic pattern, convolution operation is same convolution operations;
Unit is activated, for being operated into line activating to convolution characteristic pattern using activation primitive, obtains activation characteristic pattern, activation
Function is ReLU activation primitives;
Sampling unit, for, to activation characteristic pattern progress down-sampling operation, obtaining sampling characteristic pattern using sampling function, under
Sampling operation samples for maximum value;
Cycling element obtains new sampling characteristic pattern, and so for repeating the above steps to obtained sampling characteristic pattern
It repeats several times;
Primary vector unit obtains the spy of each level for obtained all sampling characteristic patterns to be carried out vectorization
Sign vector.
The present invention can thereby extract rich and stable feature vectors that adequately describe the facial image, increasing authentication accuracy.
As a further improvement of the face authentication device of the present invention, the multi-level deep convolutional network is obtained by joint training with softmax classification networks, comprising:
a second extraction module, for extracting in turn the multi-level feature vectors of the facial image samples using the initialized multi-level deep convolutional network;
a third mapping module, for mapping the multi-level feature vectors through the unified-dimension linear mapping matrices to unified-dimension feature vectors of the same dimension;
a fourth mapping module, for mapping each unified-dimension feature vector using the linear mapping matrices in the softmax classification networks to obtain mapped vectors;
an activation module, for activating the mapped vectors using the softmax function to obtain network output value vectors;
a first calculation module, for calculating the network error through a cross-entropy loss function with the network output value vectors and the label data of the facial image samples as inputs;
a second concatenation module, for concatenating the unified-dimension feature vectors into one joint feature vector;
a fifth mapping module, for reducing the dimension of the joint feature vector through the linear dimension-reduction mapping matrix to obtain the comprehensive feature vector;
a second calculation module, for assigning weights to the network errors and calculating the update gradients of the linear mapping matrices, unified-dimension linear mapping matrices, linear dimension-reduction mapping matrix, and convolution kernels;
an update module, for iteratively updating the linear mapping matrices, unified-dimension linear mapping matrices, linear dimension-reduction mapping matrix, and convolution kernels using their update gradients;
a judgment module, for judging whether the network error and the number of iterations meet the requirements; if so, training terminates; otherwise, the process returns to the second extraction module.
The present invention performs joint training through the softmax classification networks, which further avoids the gradient-vanishing problem, and the flexibility of network learning can be further increased by weighting the classification network errors.
As a further improvement of the face authentication device of the present invention, the first comparison module includes:
a first calculation unit, for performing a cosine-similarity operation with the comprehensive feature vector of the facial image to be authenticated and the comprehensive feature vector of the facial image template as inputs, to obtain the cosine similarity;
a second calculation unit, for performing an absolute-value-normalized cosine operation with the comprehensive feature vector of the facial image to be authenticated and the comprehensive feature vector of the facial image template as inputs, to obtain the absolute-value-normalized cosine value;
a third calculation unit, for performing a norm operation on the comprehensive feature vector of the facial image to be authenticated and on that of the facial image template, to obtain a first norm and a second norm;
a second vectorization unit, for forming the cosine similarity, the absolute-value-normalized cosine value, the first norm, and the second norm into a four-dimensional difference vector;
a mapping unit, for mapping the difference vector using the difference-vector mapping matrix to obtain a one-dimensional value as the comparison score;
a comparison unit, for comparing the comparison score with the comparison threshold; if the comparison score exceeds the comparison threshold, face authentication passes.
The present invention combines the cosine similarity, the absolute-value-normalized cosine value, and the two feature norms of the compared features into a four-dimensional difference vector and performs linear discriminant analysis on it, further improving authentication accuracy.
The present invention is described below with a specific embodiment:
The present invention requires training before authentication; the specific flow is shown in Figure 4, and the training process is as follows:
The present invention first provides a new convolutional network for extracting image feature vectors, the multi-level feature-fusion cumulative-weighting deep convolutional network (the multi-level deep convolutional network), and then performs feature learning on images using softmax networks and the learning process shown in Figure 3.
Network learning mainly includes the forward calculation of the network and the back-propagation of the network error.
(A) Forward calculation of the convolutional network
A basic convolutional network is shown in Figure 5 (note that Figure 5 is an example of a convolutional network, not the convolutional network used by the present invention, which is: convolution, activation, down-sampling, …). It includes convolution, activation, and down-sampling operations, and a vectorization operation is generally also needed for convenience of subsequent calculation. In Figure 6, each layer of the convolutional network represents a basic convolutional network; the order and number of the various operations it contains can be set according to the particular problem.
There are different modes of convolution; the present invention uses "same" convolution, zero-padding the input image during the operation. The feature map obtained by "same" convolution has the same size as the input image.
According to the convolution calculation formula, when the input data is a two-dimensional image, the element of the convolution feature map is calculated as in formula (2):
M_Ck = I ⊛ c_k,  M_Ck(m, n) = Σ_{i=1..s_c} Σ_{j=1..s_c} c_k(i, j) · neighborhood(m, n, s_c)(i, j)   (2)
where c_k denotes the k-th convolution kernel of the convolution operation, c_k(i, j) denotes the element of c_k at row i, column j, s_c denotes the side length of the convolution kernel, M_Ck denotes the convolution feature map obtained by convolving the input image I with c_k, M_Ck(m, n) denotes the element of M_Ck at row m, column n, neighborhood(m, n, s_c) denotes the neighborhood of side length s_c centered on (m, n), and ⊛ denotes the "same" convolution operator.
When the input data is a set of feature maps obtained by previous operations, the element of the convolution feature map is calculated as in formula (3), summing the "same" convolution responses over all input feature maps:
M_Ck(m, n) = Σ_p (M^(p) ⊛ c_k^(p))(m, n)   (3)
where M^(p) denotes the p-th input feature map and c_k^(p) the corresponding channel of the k-th kernel.
Activating the convolution feature map M_Ck obtained by the convolution operation means inputting each element of M_Ck into the activation function f and mapping it, as in formula (4):
M_Ak(m, n) = f(M_Ck(m, n)).   (4)
where M_Ak denotes the activation feature map obtained from M_Ck through the activation function, and f denotes the activation function.
The present invention uses the ReLU activation function:
f(x) = ReLU(x) = max(0, x)   (5)
A down-sampling operation is then performed on the activation feature map M_Ak obtained by the activation operation, mainly to reduce the feature dimension by sampling and to further compress and abstract the image features.
The down-sampling operation first divides the input data into non-overlapping s_s × s_s blocks, where s_s denotes the side length of the sampling kernel; the data of each sub-block is then input into the sampling function and mapped, the mapped output being the sampled value corresponding to the sub-block, as in formula (6):
M_Sk(m, n) = s(M_Ak(s_s·(m−1)+1 : s_s·m, s_s·(n−1)+1 : s_s·n))   (6)
where M_Sk denotes the sampled feature map obtained from M_Ak through the sampling function, M_Sk(m, n) denotes the element of M_Sk at row m, column n, and s denotes the sampling function. Figure 8 illustrates the down-sampling of input data of size 4 × 4 with s_s = 2.
The present invention uses max sampling, which takes the maximum element value within a sampling block as the feature of that block, as in formula (7):
s(I) = max(I)   (7)
In image processing, max sampling extracts the texture information of the image and maintains, to a certain extent, invariance to rotation, translation, and scaling; in addition, statistical experiments show that, compared with average sampling, max sampling is insensitive to changes in the data distribution, so feature extraction is relatively stable.
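The contrast between max and average sampling can be seen on a small example. This Python fragment (block size and values are illustrative) applies formula (6) with both sampling functions to a feature map containing two isolated peaks:

```python
def sample_blocks(fm, ss, fn):
    """Down-sample fm over non-overlapping ss x ss blocks with function fn."""
    n = len(fm)
    return [[fn([fm[r][c]
                 for r in range(m * ss, m * ss + ss)
                 for c in range(q * ss, q * ss + ss)])
             for q in range(n // ss)] for m in range(n // ss)]

def avg(xs):
    return sum(xs) / len(xs)

fm = [[1.0, 0.0, 0.0, 0.0],
      [0.0, 0.0, 0.0, 0.0],
      [0.0, 0.0, 9.0, 0.0],
      [0.0, 0.0, 0.0, 0.0]]

print(sample_blocks(fm, 2, max))  # [[1.0, 0.0], [0.0, 9.0]] - keeps the peaks
print(sample_blocks(fm, 2, avg))  # [[0.25, 0.0], [0.0, 2.25]] - smears them
```

Max sampling preserves the strong responses regardless of where they fall inside a block (the partial translation invariance noted above), while average sampling dilutes them by the block size.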
After feature extraction, the feature maps obtained need to be vectorized to obtain the feature vector fea, so that the features can be input into the classification network and the network parameters learned. The vectorization operation is as in formula (8):
fea = concat(v(M_S1), v(M_S2), …, v(M_SK))   (8)
where v denotes stretching the matrix data into a column vector, concat denotes concatenating the indicated vectors into one high-dimensional vector, and K denotes the total number of feature maps.
(B) Unified-dimension linear mapping
After the image passes through several rounds of convolution, activation, and down-sampling in the convolutional network, a series of feature maps is obtained. The present invention uses linear mappings to map the features of each level to features of the same dimension, as in formula (9), where n_f denotes the dimension of the unified-dimension feature vectors, n_i denotes the dimension of fea_i, and W_i is of size n_f × n_i:
f_i = W_i · fea_i   (9)
(C) The softmax classification network
Figure 7 illustrates the basic structure of the softmax network, in which f_i denotes the i-th component of the input feature vector f, N_C denotes the number of classes, and W_id denotes the linear mapping matrix.
It should be noted here that when a linear mapping is realized in network form, a linear mapping with a bias is generally used. Since the vector addition can be made equivalent by rewriting the mapping matrix and the mapped vector, the present invention, for convenience of writing, expresses all linear mapping operations in the rewritten form and directly uses the original variable names for the rewritten mapping matrix and mapped vector, without showing the bias in the expressions. In the formulas, o denotes the output after the linear mapping, and o_i in the figure denotes the i-th component of o.
o = W_id · f   (10)
h_i denotes the i-th component of the network output value h obtained after o is activated by the softmax function:
h = softmax(o)   (11)
where the softmax function is the nonlinear activation function used by the softmax network, whose expression is:
softmax(o)_i = exp(o_i) / Σ_{j=1..N_C} exp(o_j)   (12)
It can be seen from formula (12) that the softmax function is non-negative and sums to one; therefore, its output values can be regarded as the probability that the input data belongs to the corresponding class, i.e.
h_i = P(label_i = 1) = P(input ∈ CLASS_i).   (13)
where label = (label_1, …, label_{N_C}) is the binary vector of the original data label LABEL (indicating person LABEL in the data set), as in formula (14), and CLASS_i denotes the i-th class of data, which in face recognition is all images of the i-th person:
label_i = 1 if i = LABEL, and label_i = 0 otherwise   (14)
Class is the categorised decision that network exports that h is provided according to network:
Identifying the face identity in an image is an image classification problem. The classification algorithm used by the present invention is the softmax classification network, and the loss function used is the cross-entropy loss function, as in formula (16):
loss(h, label) = −Σ_{i=1..N_C} label_i · log h_i   (16)
where h is the network output value vector produced by the softmax function in the classification network and label is the binary vector of the original data label LABEL.
Since the network has many parameters, overfitting easily occurs, so the network parameters are constrained using regularization to alleviate overfitting to a certain extent; the present invention uses two-norm regularization. From the above, the network error can be expressed as formula (17):
J(θ) = loss(h, label) + λ Σ ||θ||².   (17)
where J(θ) denotes the network error, λ is the regularization coefficient, and θ is the set of all learnable parameters in the feature-learning network, including the convolution kernels of the convolutional network and the linear mapping matrix of the classification network, as in formula (18):
θ = {θ_c, θ_id}, θ_c = {c_1, c_2, …, c_K}, θ_id = W_id   (18)
The learning objective of the network is to solve for the parameter set θ_opt that minimizes the network error (17), as in formula (19):
θ_opt = argmin_θ J(θ)   (19)
In Figure 6, J(Θ_i) denotes the network error calculated by the i-th layer convolutional network, where Θ_i denotes the set of all network parameters from the 1st to the i-th layer convolutional networks together with the current layer's unified-dimension linear mapping matrix W_i, as in formula (20):
Θ_i = {θ_1, θ_2, …, θ_i, W_i}   (20)
where θ_i denotes the set of learnable parameters of the i-th layer convolutional network, including all learnable parameters involved in its convolution, activation, and down-sampling operations.
(D) Multi-level feature fusion and dimension reduction
As shown in Figure 6, feature_merge denotes the joint feature vector formed by concatenating the unified-dimension feature vectors f_i of each level, i.e.
feature_merge = concat(f_1, f_2, …, f_4)   (21)
W_T denotes the mapping matrix that performs the linear dimension-reduction mapping on the joint feature vector feature_merge, and f_T denotes the comprehensive feature vector obtained from feature_merge through the linear dimension-reduction mapping; it contains the feature-vector information of every level's network, as in formula (22), where n_T denotes the set dimension of f_T:
f_T = W_T · feature_merge   (22)
J(Θ_T) denotes the network error of the classification network assigned to the comprehensive feature vector f_T, where Θ_T denotes the set of all convolutional network parameters, all unified-dimension linear mapping matrices, and the linear dimension-reduction mapping matrix, as in formula (23):
Θ_T = {θ_1, …, θ_4, W_1, …, W_4, W_T}   (23)
(E) Back-propagation of the network error
The present invention updates the network parameters using the BP algorithm. According to the chain rule, the network error propagates from back to front.
Derivation for the classification-network linear mapping:
The learnable parameter in the classification network of the i-th layer (i = 1, …, 4, T) is W_{i,id}. From the definition of J(Θ_i) and the chain rule of differentiation, and noting that for softmax with cross-entropy ∂loss/∂o = h − label, we have:
∂J(Θ_i)/∂W_{i,id} = (h_i − label) · f_iᵀ + 2λ · W_{i,id}   (24)
and at the same time the derivative of J(Θ_i) with respect to f_i can be obtained:
∂J(Θ_i)/∂f_i = W_{i,id}ᵀ · (h_i − label)   (25)
Derivation for the unified-dimension linear mapping:
Each unified-dimension linear mapping matrix W_i acts on both network errors J(Θ_i) and J(Θ_T); therefore, when W_i is updated with the BP algorithm, its update gradient is formed jointly by the derivative of J(Θ_i) with respect to W_i and the derivative of J(Θ_T) with respect to W_i. During training, each network error can be assigned a weight, and summing the weighted derivatives yields the update gradient of W_i, as in formula (26):
By the chain rule of differentiation we have:
Therefore:
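The weighted joint gradient of formula (26) can be sketched as below; the error weights, learning rate, and matrix shapes are illustrative assumptions, and random arrays stand in for the two derivative terms.

```python
import numpy as np

rng = np.random.default_rng(2)
shape = (160, 1000)                    # shape of W_i (illustrative)
W_i = rng.standard_normal(shape)

dJi_dWi = rng.standard_normal(shape)   # dJ(Theta_i)/dW_i  (stand-in values)
dJT_dWi = rng.standard_normal(shape)   # dJ(Theta_T)/dW_i  (stand-in values)

# each network error receives a weight; the weighted sum is the
# update gradient of W_i, as in formula (26)
alpha_i, alpha_T = 0.3, 1.0            # assumed error weights
grad = alpha_i * dJi_dWi + alpha_T * dJT_dWi

lr = 0.01                              # assumed learning rate
W_i = W_i - lr * grad                  # BP update of W_i
```

Tuning alpha_i against alpha_T is what gives each level's classification error an adjustable influence on the shared mapping.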
Derivation for the linear dimensionality-reduction mapping of the comprehensive feature layer:
The linear dimensionality-reduction mapping matrix W_T of the comprehensive feature layer acts only on J(Θ_T); by the chain rule of differentiation it is easy to obtain:
At the same time, the derivative with respect to the input feature vector of each level's unified-dimension linear mapping can be computed as:
Derivation for the convolutional network parameters:
The only learnable parameters in a convolutional network are the convolution kernels of its convolution operations; therefore, the update gradient of J(Θ_i) with respect to each level's convolution kernel c must be computed. By the chain rule of differentiation we have:
where
Location denotes the binarization matrix marking the positions in M_A of the values of M_S, i.e.:
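The Location matrix, which records where each value of the downsampled map M_S lies in the pre-pooling map M_A so that the gradient is routed back only through those positions, can be sketched for max-pooling as follows (the 2*2 pooling size is an assumption):

```python
import numpy as np

def maxpool_location_mask(M_A, pool=2):
    """Binary matrix with a 1 wherever a value of the pooled map M_S
    was taken from the pre-pooling map M_A; during back-propagation
    the gradient flows only through these positions."""
    H, W = M_A.shape
    mask = np.zeros_like(M_A, dtype=float)
    for i in range(0, H, pool):
        for j in range(0, W, pool):
            block = M_A[i:i + pool, j:j + pool]
            r, c = np.unravel_index(np.argmax(block), block.shape)
            mask[i + r, j + c] = 1.0
    return mask
```

For a 4*4 map with 2*2 pooling, exactly four entries of the mask are 1, one per pooling window.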
The above introduces the principle of the feature-learning process of the accumulated-weight deep convolutional network with multi-level feature fusion; the specific algorithm is given below, as shown in Table 1.
Table 1 shows the feature-learning process of the accumulated-weight deep convolutional network with multi-level feature fusion.
The authentication process of the present invention proceeds as follows:
(1) Image preprocessing
The present invention performs face detection on an image using a cascade-Adaboost-based face detection algorithm, then locates facial feature points on the detected face using an SDM-based facial landmark localization algorithm, and corrects and normalizes the face by scaling, rotating, and translating the image into alignment. This finally yields a face image of size 100*100 in which the image coordinate of the left eye is (30, 30) and that of the right eye is (30, 70), as shown in Figure 3.
The present invention applies a simple gray-scale normalization as preprocessing, as in formula (1) below, where I(i, j) denotes the gray value of the image at (i, j). The main purpose of gray-scale normalization is to make it easier for the network to process continuous data and to avoid handling large discrete gray values, thereby avoiding abnormal conditions.
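A minimal sketch of this preprocessing step; since formula (1) is not reproduced in this excerpt, division by 255 is used here as an assumed normalization standing in for it.

```python
import numpy as np

def gray_normalize(img):
    """Map 8-bit gray values into [0, 1] so the network handles
    continuous data rather than large discrete gray values.
    (Division by 255 is an assumption standing in for formula (1).)"""
    return img.astype(np.float32) / 255.0

# aligned 100*100 face crop, left eye at (30, 30), right eye at (30, 70)
face = np.random.randint(0, 256, (100, 100), dtype=np.uint8)
x = gray_normalize(face)
```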
(2) Feature extraction
The trained network is used to extract image features.
After training of the accumulated-weight deep convolutional network with multi-level feature fusion is complete, the trained network can be used to extract the features of an input image, as shown in Table 2:
(3) Feature comparison
(I) Absolute-value-normalized cosine value
The absolute-value-normalized cosine value (cosine normalized by absolute value, cosAN) proposed by the present invention is defined as in formula (39):
where
Experiments show that the absolute-value-normalized cosine value is sensitive to differences in vector modulus length, and can compensate for the incomplete description of differences caused by cosine similarity ignoring differences in vector modulus length.
(II) LDA-based multi-difference fusion comparison algorithm
The present invention combines the cosine similarity of the compared features, the absolute-value-normalized cosine value, and the two feature modulus lengths into a four-dimensional difference vector, i.e.
f_diff(f_T1, f_T2) = [cos(f_T1, f_T2), cosAN(f_T1, f_T2), |f_T1|, |f_T2|]^T    (41)
The four-dimensional difference vector is then fused into a one-dimensional analog quantity using LDA (linear discriminant analysis); that is, the difference-vector mapping matrix W_LDA maps the four-dimensional difference vector to a one-dimensional value:
sim(f_T1, f_T2) = W_LDA · f_diff    (42)
where W_LDA denotes the mapping vector obtained using LDA.
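Formulas (41)-(42) can be sketched as follows. Since formula (39) defining cosAN is not reproduced in this excerpt, the helper below uses the cosine of the element-wise absolute-valued vectors as an assumed stand-in; W_LDA is likewise a placeholder for the vector that LDA would learn from labeled pairs.

```python
import numpy as np

def cos_sim(f1, f2):
    return float(f1 @ f2 / (np.linalg.norm(f1) * np.linalg.norm(f2)))

def cos_an(f1, f2):
    # assumed stand-in for formula (39): cosine of the element-wise
    # absolute-valued vectors |f1|, |f2|
    a1, a2 = np.abs(f1), np.abs(f2)
    return float(a1 @ a2 / (np.linalg.norm(a1) * np.linalg.norm(a2)))

def similarity(f1, f2, w_lda):
    # four-dimensional difference vector, formula (41)
    f_diff = np.array([cos_sim(f1, f2), cos_an(f1, f2),
                       np.linalg.norm(f1), np.linalg.norm(f2)])
    # one-dimensional fused similarity, formula (42)
    return float(w_lda @ f_diff)

rng = np.random.default_rng(3)
f1, f2 = rng.standard_normal(300), rng.standard_normal(300)
w_lda = np.array([1.0, 0.5, -0.01, -0.01])   # placeholder LDA weights
score = similarity(f1, f2, w_lda)
```

In practice w_lda would come from fitting LDA on difference vectors of genuine and impostor pairs; the fused scalar score is then thresholded for the accept/reject decision.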
Advantageous effects brought by the technical solutions of the embodiments of the present invention:
This embodiment performs feature learning and feature extraction using the accumulated-weight deep convolutional network with multi-level feature fusion, then compares the features of two face images using the LDA-based multi-difference fusion comparison algorithm, and has the following five advantages. First, the present invention uses a convolutional network to learn and extract features automatically, avoiding the shortcomings of hand-crafted features. Second, joint training through the multi-layer classification network avoids the gradient-vanishing problem. Third, multi-level feature fusion increases the richness of the image features, remedying the defects that general deep networks process the features of each level insufficiently and that using high-level features alone describes an image inadequately. Fourth, weighting the multi-layer classification network errors increases the flexibility of network learning. Fifth, the LDA-based multi-difference fusion comparison algorithm solves the problem that cosine similarity portrays feature-vector differences incompletely. Tested on the FERET database, authentication rates of 99.9%, 100%, 98.8%, and 99.6% were achieved on the four subsets Fb, Fc, DupI, and DupII respectively (at a false acceptance rate of 0.1%).
The above are preferred embodiments of the present invention. It should be noted that those of ordinary skill in the art can make several improvements and refinements without departing from the principles of the present invention, and such improvements and refinements should also be regarded as falling within the protection scope of the present invention.