Summary of the invention
The neural network training method, face recognition method and system, and storage medium provided in embodiments of the present invention can solve a problem existing in the prior art: face recognition accuracy is low when the quantities of reference images and sample images are seriously unbalanced, or when the shooting scenes of the reference images and the sample images differ significantly.
To achieve the above goals, the technical solutions adopted in the embodiments of the present invention are as follows:
In a first aspect, an embodiment of the present application provides a neural network training method. The neural network includes a convolutional layer, a fully connected layer, and a loss function module connected in sequence. During a single training iteration, the method includes: obtaining T feature vectors, where the T feature vectors include the feature vectors of D reference face images and the feature vectors of M sample face images, the number D of reference face image feature vectors is the same as the number of weight vectors of the fully connected layer, each of the M sample face images is a face image of the same person as one of the D reference face images, T is equal to the sum of D and M, and T, D, and M are positive integers; setting the learning rate of the fully connected layer to zero; inputting the t-th of the T feature vectors into the neural network and obtaining a t-th output feature vector x_t through the processing of the convolutional layer, where t is an integer greater than or equal to 1 and less than or equal to T, and the t-th feature vector is the feature vector of a reference face image or of a sample face image among the T feature vectors; normalizing the t-th output feature vector in the fully connected layer to obtain a t-th normalized output feature vector x̂_t; judging whether the t-th feature vector is the feature vector of a reference face image and, when it is, adjusting the normalized weight matrix Ŵ of the fully connected layer according to the t-th normalized output feature vector x̂_t, the normalized weight matrix Ŵ of the fully connected layer consisting of D column vectors; in the fully connected layer, obtaining a t-th class vector f_t according to the normalized output vector x̂_t and the normalized weight matrix Ŵ; inputting the t-th class vector into the loss function module, and judging whether the number of class vectors in the loss function module has reached N_n, where n and N_n are integers greater than or equal to 1 and less than or equal to T, and the sum of the N_n over n is equal to T; when the number of class vectors in the loss function module reaches N_n, obtaining, by the loss function module, an n-th loss function value according to the N_n class vectors, optimizing the parameters of the convolutional layer according to the n-th loss function value and a specified optimization algorithm, and emptying the class vectors of the loss function module; and, after the above steps have been executed with t taken successively from 1 to T, obtaining the target parameters of the convolutional layer.
In the embodiment of the present application, within a single iteration, the weight matrix of the fully connected layer must be adjusted according to the output vectors obtained by passing the reference face images through the convolutional layer. During training, the learning rate of the fully connected layer of the neural network therefore needs to be set to zero, to guarantee that the fully connected layer is not affected by backpropagation while the neural network is trained with the reference feature vectors and the sample feature vectors. Within a single iteration, the feature vectors are input into the neural network one by one; when the feature vector input into the network belongs to a reference face image, it is processed by the convolutional layer to obtain a reference output feature vector, and the weight matrix of the fully connected layer is adjusted according to that reference output feature vector. When the number of reference images is far smaller than the number of sample images, or when the shooting scenes of the reference images and the sample images differ significantly, adjusting the weight vectors of the fully connected layer according to the reference output feature vectors helps improve the performance of the neural network in scenarios where reference images are compared against sample images, and can also accelerate the convergence of the training process.
Further, within a single iteration, the feature vectors of the reference face images and the sample face images are processed in turn by the convolutional layer, the fully connected layer, and the loss function module to obtain multiple loss values, and the parameters of the convolutional layer are adjusted according to each loss value and the specified optimization algorithm to yield the final parameters. In face recognition applications, substituting these final parameters for the convolutional layer parameters of a face recognition system makes it possible to identify whether a reference image and a sample image correspond to the same face. When the reference face images are far fewer than the sample face images during training, because the weight matrix of the fully connected layer is adjusted according to the output feature vectors of the reference images while the convolutional layer parameters are trained, the output vectors of the sample face images and of the reference face images produced by the convolutional layer embody the sample face features and the reference face features more accurately. Therefore, based on these convolutional layer parameters, a face recognition system that compares reference image feature vectors with sample image feature vectors achieves good accuracy.
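The single-iteration flow described above can be sketched in code. The sketch below is illustrative only: `conv_forward` is a stand-in random projection rather than a real convolutional layer, the sizes are tiny toy values, and processing all reference vectors before all sample vectors is just one possible ordering.

```python
import numpy as np

# Toy sketch of one training iteration. conv_forward is a stand-in random
# projection (not a real convolutional layer), and the sizes are tiny toy
# values rather than the 50,000 / 1,280,000 of the example scenario below.
rng = np.random.default_rng(0)
D, M, FEAT = 4, 8, 16

conv_W = rng.standard_normal((FEAT, FEAT))

def conv_forward(x):
    return conv_W @ x                        # placeholder for the convolutional layer

def l2norm(v):
    return v / np.linalg.norm(v)

refs = [rng.standard_normal(FEAT) for _ in range(D)]
samples = [(rng.standard_normal(FEAT), int(rng.integers(0, D))) for _ in range(M)]

# Normalized weight matrix of the fully connected layer: one column per reference.
W_hat = np.zeros((FEAT, D))

# Reference vectors first (one possible ordering), each replacing its weight column.
for d, x in enumerate(refs):
    W_hat[:, d] = l2norm(conv_forward(x))

# Sample vectors then yield class vectors for the loss module.
class_vectors = []
for x, label in samples:
    x_hat = l2norm(conv_forward(x))
    class_vectors.append((W_hat.T @ x_hat, label))
```

Because every weight column and every normalized output vector has unit length, each class vector entry is a cosine similarity against one reference.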
With reference to the first aspect, an embodiment of the present application provides a first possible implementation of the first aspect. When the t-th feature vector is the feature vector of a reference face image, adjusting the normalized weight matrix Ŵ of the fully connected layer according to the t-th normalized output feature vector x̂_t includes: when the t-th feature vector is the feature vector of the d-th reference face image, replacing the d-th weight vector of the fully connected layer with the t-th normalized reference image output vector x̂_t. Correspondingly, in the single iteration, after the above step has been executed for all D reference face feature vectors, the normalized weight matrix Ŵ of the fully connected layer, consisting of the D normalized reference image output vectors, is obtained.
In the neural network training method of the embodiment of the present application, replacing the weight vectors of the fully connected layer with the normalized reference image output vectors allows the optimization process of the neural network to focus more on changes in the angular component of the feature vectors, which helps improve the performance of the neural network.
With reference to the first aspect, an embodiment of the present application provides a second possible implementation of the first aspect. In the fully connected layer, obtaining the t-th class vector f_t according to the normalized output vector x̂_t and the normalized weight matrix Ŵ of the fully connected layer includes: multiplying the normalized sample face image output vector x̂_t by the normalized weight matrix Ŵ to obtain the class vector f_t, where f_t = Ŵᵀ x̂_t. Correspondingly, in the single iteration, after the above step has been executed for all T feature vectors, T class vectors are obtained.
In the embodiment of the present application, the method of obtaining the t-th class vector f_t according to the normalized output vector x̂_t and the normalized weight matrix Ŵ of the fully connected layer is to multiply the normalized sample face image output vector x̂_t by the normalized weight matrix Ŵ, i.e., the class vector f_t = Ŵᵀ x̂_t. The class vector f_t obtained in this way expresses the classification characteristics of the sample face feature vector well.
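Under the assumption that both the weight columns and the output vector are L2-normalized, the multiplication above produces a vector of cosine similarities. The toy check below illustrates this; all sizes and values are invented for illustration.

```python
import numpy as np

# Toy check that f_t = W_hat^T x_hat yields cosine similarities when both
# the weight columns and the output vector are L2-normalized.
def l2norm(v):
    return v / np.linalg.norm(v)

rng = np.random.default_rng(1)
D, FEAT = 3, 5
W_hat = np.stack([l2norm(rng.standard_normal(FEAT)) for _ in range(D)], axis=1)
x_hat = l2norm(rng.standard_normal(FEAT))

f_t = W_hat.T @ x_hat            # t-th class vector: one cosine per reference

# Each entry equals the cosine of the angle to the corresponding weight vector.
cosines = [float(np.dot(W_hat[:, d], x_hat)) for d in range(D)]
assert np.allclose(f_t, cosines)
```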
With reference to the second possible implementation of the first aspect, an embodiment of the present application provides a third possible implementation of the first aspect. Obtaining the n-th loss function value by the loss function module according to the N_n class vectors includes: setting, according to a specified rule, the cosine-metric scaling parameter s in the loss function and another cosine-metric scaling parameter m in the loss function, where m is greater than or equal to 0 and less than or equal to 1; and obtaining the n-th loss value L_n according to the N_n class vectors f^(n_j) in the loss function module and the following formula, where j successively takes the integers from 1 to N_n, N_n is the number of class vectors needed to compute the n-th loss value, and correspondingly n_j successively takes the integers from N_1 + ... + N_(n-1) + 1 to N_1 + ... + N_n:

L_n = -(1/N_n) · Σ_{j=1..N_n} log( exp(s·(f^(n_j)_{y_j} − m)) / ( exp(s·(f^(n_j)_{y_j} − m)) + Σ_{k=1..D, k≠y_j} exp(s·f^(n_j)_k) ) ),

where the n_j-th class vector f^(n_j) corresponds to the same face as the y_j-th reference face image, f^(n_j)_{y_j} is the y_j-th value of the class vector f^(n_j), and y_j is an integer greater than or equal to 1 and less than or equal to D.
In the embodiment of the present application, a specified number of classification values of feature vectors are input into the loss function module and, through the processing of the loss function module, loss values are obtained; the resulting loss values reflect the performance of the convolutional layer of the neural network well. In a single iteration, multiple loss values can be obtained. The scaling parameters of the cosine distance in the loss function help the neural network widen the feature distance between sample faces and reference faces, thereby improving the performance of the recognition model.
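A minimal sketch of a loss from this family (additive cosine margin) is given below. The exact formula the method uses is not reproduced here, so the function, its name, and the default value of s are assumptions; only the roles of s (scaling) and m (margin on the true-class cosine) follow the text.

```python
import numpy as np

# Sketch of an additive cosine-margin loss. The function name, default
# s, and overall form are assumptions consistent with the described
# parameters, not a verbatim reproduction of the application's formula.
def margin_loss(class_vectors, labels, s=30.0, m=0.35):
    total = 0.0
    for f, y in zip(class_vectors, labels):
        logits = s * f.copy()
        logits[y] = s * (f[y] - m)          # subtract margin on the true class
        logits -= logits.max()              # numerical stability
        p = np.exp(logits) / np.exp(logits).sum()
        total += -np.log(p[y])
    return total / len(labels)

f1 = np.array([0.9, 0.1, -0.2])             # high cosine on the true class 0
f2 = np.array([0.2, 0.1, 0.0])              # ambiguous class vector
loss_good = margin_loss([f1], [0])
loss_bad = margin_loss([f2], [0])
assert loss_good < loss_bad                  # better-separated features, lower loss
```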
With reference to the third possible implementation of the first aspect, an embodiment of the present application provides a fourth possible implementation of the first aspect. The specified optimization algorithm is the stochastic gradient descent method, and optimizing the parameters of the convolutional layer according to the n-th loss function value and the specified optimization algorithm includes: setting the parameters of the stochastic gradient descent method according to preset conditions; and obtaining the parameters of the convolutional layer from the n-th loss value according to the stochastic gradient descent method, and adjusting the parameters of the convolutional layer.
In the embodiment of the present application, the optimization algorithm for the convolutional layer of the neural network needs to be specified; the parameters of the convolutional layer are optimized according to the loss values and the specified optimization algorithm, and after multiple iterations the final parameters of the convolutional layer can be obtained.
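As an illustration of the optimizer, here is a minimal stochastic-gradient-descent-with-momentum step applied to a toy quadratic objective. The momentum 0.9 and learning rate 0.1 match the example values given in the detailed description; the objective, the function name, and the iteration count are purely illustrative.

```python
import numpy as np

# Minimal SGD-with-momentum update on a toy quadratic loss w**2.
def sgd_momentum_step(w, grad, velocity, lr=0.1, momentum=0.9):
    velocity = momentum * velocity - lr * grad
    return w + velocity, velocity

w = np.array([5.0])
v = np.zeros_like(w)
for _ in range(300):
    grad = 2.0 * w                 # gradient of the toy loss w**2
    w, v = sgd_momentum_step(w, grad, v)

assert abs(w[0]) < 1e-3            # converges to the minimum at 0
```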
With reference to the first aspect, an embodiment of the present application provides a fifth possible implementation of the first aspect. Before the first training iteration is carried out, the method further includes: initializing the parameters of the convolutional layer and the weights of the fully connected layer.
In the embodiment of the present application, the convolutional layer parameters and the fully connected layer weights need to be initialized before the first training iteration, to ensure that the first iteration can start normally.
With reference to the first aspect, an embodiment of the present application provides a sixth possible implementation of the first aspect, including: executing multiple iterations until a preset iteration stopping criterion is met, where the T feature vectors obtained in any two iterations of the multiple iterations are identical.
In the embodiment of the present application, a training run needs to execute multiple iterations, and the feature vectors used in each iteration are identical.
In a second aspect, a face recognition method provided by an embodiment of the present application includes: performing feature extraction on a reference image feature vector and a sample image feature vector, respectively, with a convolutional layer obtained by at least one training iteration of the method according to the first aspect, to obtain a reference feature and a sample feature; calculating the similarity between the reference feature and the sample feature; and judging, according to the similarity between the faces corresponding to the reference feature and the sample feature, whether the face corresponding to the sample image feature vector and the face corresponding to the reference image feature vector are the face of the same person.
In the embodiment of the present application, the convolutional layer obtained in the first aspect performs feature extraction on the reference image feature vector and the sample image feature vector, respectively, to obtain a reference feature and a sample feature; the similarity between the reference feature and the sample feature is calculated; and then, according to the similarity between the faces corresponding to the reference feature and the sample feature, it is judged whether the face corresponding to the sample image feature vector and the face corresponding to the reference image feature vector are the face of the same person. The face recognition system obtained in the embodiment of the present application shows good effectiveness and accuracy when judging whether the faces corresponding to a reference image feature vector and a sample image feature vector are the same face.
In a third aspect, a face recognition system provided by an embodiment of the present application includes: a convolutional layer obtained by at least one training iteration of the method according to the first aspect, the convolutional layer being configured to perform feature extraction on a reference image feature vector and a sample image feature vector, respectively, to obtain a reference feature and a sample feature; a cosine distance computing module, configured to calculate the similarity between the reference feature and the sample feature; and a judgment module, configured to judge, according to the similarity between the faces corresponding to the reference feature and the sample feature, whether the face corresponding to the sample image feature vector and the face corresponding to the reference image feature vector are the face of the same person.
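The three modules of the system can be sketched as follows. The projection standing in for the trained convolutional layer, the function names, and the decision threshold of 0.5 are all assumed for illustration; only the cosine-similarity comparison follows the description.

```python
import numpy as np

# Sketch of the system's three parts: a stand-in feature extractor, a
# cosine distance module, and a judgment module. The threshold 0.5 and
# the fixed projection are assumed values for illustration only.
rng = np.random.default_rng(2)
proj = rng.standard_normal((8, 8))           # stand-in for the trained convolutional layer

def extract(x):
    h = proj @ x
    return h / np.linalg.norm(h)

def cosine_similarity(a, b):
    return float(np.dot(a, b))               # inputs already normalized

def same_person(ref_x, sample_x, threshold=0.5):
    return cosine_similarity(extract(ref_x), extract(sample_x)) >= threshold

ref = rng.standard_normal(8)
assert same_person(ref, ref)                 # identical inputs: cosine 1.0
```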
In a fourth aspect, an embodiment of the present application provides a storage medium on which instructions are stored; when the instructions are run on a computer, they cause the computer to execute the method in any possible implementation of the first aspect or the second aspect.
Other features and advantages of the present disclosure will be set forth in the following description; alternatively, some of the features and advantages can be inferred or unambiguously determined from the specification, or learned by implementing the above techniques of the present disclosure.
To make the above objects, features, and advantages of the present application clearer and more comprehensible, preferred embodiments are described in detail below in conjunction with the appended drawings.
Specific embodiment
The technical solutions in the embodiments of the present application are described below clearly and completely in conjunction with the accompanying drawings. Obviously, the described embodiments are only some of the embodiments of the present application, not all of them. Therefore, the following detailed description of the embodiments of the present application provided in the accompanying drawings is not intended to limit the claimed scope of the present application, but merely represents selected embodiments of the application. All other embodiments obtained by those skilled in the art based on the embodiments of the present application without creative effort shall fall within the protection scope of the present application.
First embodiment
This embodiment provides a neural network training method and a face recognition method. It should be noted that the steps illustrated in the flowcharts of the drawings can be executed in a computer system such as a set of computer-executable instructions and that, although a logical order is shown in the flowcharts, in some cases the steps shown or described may be executed in an order different from the one given here. The present embodiment is described in detail below.
Referring to Fig. 1, the neural network used by the training method provided in this embodiment includes a convolutional layer, a fully connected layer, and a loss function module connected in sequence.
The method may include multiple training iterations.
Referring to Fig. 2, a single iteration of the training process includes: step S100, step S200, step S300, step S400, step S500, step S600, step S700, step S800, and step S900.
Step S100: obtain T feature vectors, where the T feature vectors include the feature vectors of D reference face images and the feature vectors of M sample face images, the number D of reference face image feature vectors is the same as the number of weight vectors of the fully connected layer, each of the M sample face images is a face image of the same person as one of the D reference face images, T is equal to the sum of D and M, and T, D, and M are positive integers.
Referring to Fig. 2, in a single iteration of the neural network training process, the reference face feature vectors and the set of sample face feature vectors need to be obtained; the number of reference face image feature vectors is the same as the number of weight vectors of the fully connected layer, and each sample face feature vector in the set corresponds to one of the reference faces.
For example, in an application scenario where ID photos are compared with everyday photos, a single iteration of the neural network training process needs to obtain the feature vectors of D ID photos and the set of feature vectors of M everyday photos, forming T feature vectors in total. The number D of ID photos is the same as the number of weight vectors of the fully connected layer, each of the M everyday photos corresponds to the same face as one of the D ID photos, and the total number of ID photo and everyday photo feature vectors is T. Specifically, D may be taken as 50,000 and M as 1,280,000; that is, the feature vectors of 50,000 ID photos and the set of feature vectors of 1,280,000 everyday photos are obtained, forming 1,330,000 feature vectors.
Step S200: set the learning rate of the fully connected layer to zero.
Referring to Fig. 2, in a single iteration of the neural network training process, the learning rate of the fully connected layer needs to be set to zero, to ensure that the fully connected layer is not affected by backpropagation during neural network training.
For example, in the application scenario where ID photos are compared with everyday photos, in a single iteration of the neural network training process, the learning rate of the fully connected layer is set to zero to ensure that the fully connected layer is not affected by backpropagation during neural network training.
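The effect of a zero learning rate can be seen directly from the gradient-descent update rule; the sketch below uses arbitrary weight and gradient values purely for illustration.

```python
import numpy as np

# A zero learning rate shields the fully connected layer from
# backpropagation: the update leaves the weights untouched regardless
# of the gradient. Weight and gradient values are arbitrary.
def update(weights, grad, lr):
    return weights - lr * grad

fc_weights = np.ones((4, 3))
grad = np.full((4, 3), 7.5)                  # arbitrary backpropagated gradient

after = update(fc_weights, grad, lr=0.0)     # fully connected layer: lr = 0
assert np.array_equal(after, fc_weights)     # unchanged

conv_after = update(fc_weights, grad, lr=0.1)  # a layer with lr > 0 does move
assert not np.array_equal(conv_after, fc_weights)
```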
Step S300: input the t-th of the T feature vectors into the neural network and obtain a t-th output feature vector x_t through the processing of the convolutional layer, where t is an integer greater than or equal to 1 and less than or equal to T, and the t-th feature vector is the feature vector of a reference face image or of a sample face image among the T feature vectors.
Referring to Fig. 2, in a single iteration of the neural network training process, the T feature vectors are sent into the neural network one by one and, through the processing of the convolutional layer, T output feature vectors are obtained in turn, where the t-th output feature vector is x_t.
For example, in the application scenario where ID photos are compared with everyday photos, the 1,330,000 feature vectors need to be sent into the neural network one by one; through the processing of the convolutional layer, 1,330,000 output feature vectors are obtained in turn, where the t-th output feature vector is x_t.
Step S400: normalize the t-th output feature vector in the fully connected layer to obtain a t-th normalized output feature vector x̂_t.
Referring to Fig. 2, in a single iteration of the neural network training process, the output feature vectors produced by the convolutional layer need to be normalized in the fully connected layer to obtain the normalized output feature vectors, where the t-th normalized output feature vector is x̂_t.
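The application text does not name the norm used in this normalization; the sketch below assumes L2 normalization, which is consistent with the cosine-style class vectors used in step S600.

```python
import numpy as np

# L2 normalization of an output feature vector (an assumption: the text
# does not specify the norm). The input values are a toy example.
def l2norm(v):
    return v / np.linalg.norm(v)

x_t = np.array([3.0, 4.0])                   # toy output feature vector
x_hat = l2norm(x_t)

assert np.allclose(x_hat, [0.6, 0.8])        # 3-4-5 triangle: unit length
assert abs(np.linalg.norm(x_hat) - 1.0) < 1e-12
```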
For example, in the application scenario where ID photos are compared with everyday photos, the ID photo and everyday photo output vectors produced by the convolutional layer from the ID photo and everyday photo feature vectors need to be normalized in the fully connected layer, obtaining the normalized ID photo and everyday photo output feature vectors, where the t-th normalized output feature vector is x̂_t.
Step S500: judge whether the t-th feature vector is the feature vector of a reference face image; when the t-th feature vector is the feature vector of a reference face image, adjust the normalized weight matrix Ŵ of the fully connected layer according to the t-th normalized output feature vector x̂_t, the normalized weight matrix Ŵ of the fully connected layer consisting of D column vectors.
Referring to Fig. 2, in a single iteration of the neural network training process, the weight matrix of the fully connected layer needs to be adjusted according to the feature vectors of the reference face images.
For example, in the application scenario where ID photos are compared with everyday photos, the weight matrix formed by the 50,000 weight vectors of the fully connected layer needs to be adjusted according to the feature vectors of the 50,000 ID photos.
Optionally, step S500 includes: when the t-th feature vector is the feature vector of the d-th reference face image, replacing the d-th weight vector of the fully connected layer with the t-th normalized reference image output vector x̂_t. Correspondingly, in the single iteration, after the above step has been executed for all D reference face feature vectors, the normalized weight matrix Ŵ of the fully connected layer, consisting of the D normalized reference image output vectors, is obtained. In a single iteration of the neural network training process, the method of adjusting the weight matrix of the fully connected layer according to the feature vectors of the reference face images is as follows: when the t-th feature vector is the feature vector of the d-th reference face image, the t-th normalized reference image output vector x̂_t replaces the d-th weight vector of the fully connected layer. After the single iteration has been executed, the normalized weight matrix Ŵ of the fully connected layer consists of the D normalized reference image output vectors. For example, in the application scenario where ID photos are compared with everyday photos, the method of adjusting the weight matrix of the fully connected layer according to the feature vectors of the ID photos is as follows: the normalized output vector of the d-th ID photo replaces the d-th weight vector of the fully connected layer; after the single iteration has been executed, the normalized weight matrix Ŵ of the fully connected layer consists of the D normalized ID photo output vectors.
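The replacement in step S500 can be sketched directly; the sizes below are toy values and the helper name is invented for illustration.

```python
import numpy as np

# Step S500 as a sketch: when the t-th input is the d-th reference
# (ID-photo) vector, its normalized output replaces the d-th column of
# the normalized weight matrix. Sizes and the helper name are toy choices.
FEAT, D = 4, 3
W_hat = np.zeros((FEAT, D))

def on_reference_output(W_hat, d, x_hat):
    W_hat = W_hat.copy()
    W_hat[:, d] = x_hat                      # replace the d-th weight vector
    return W_hat

x_hat = np.array([0.5, 0.5, 0.5, 0.5])       # already unit length
W_hat = on_reference_output(W_hat, 1, x_hat)

assert np.array_equal(W_hat[:, 1], x_hat)
assert np.array_equal(W_hat[:, 0], np.zeros(FEAT))  # other columns untouched
```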
Step S600: in the fully connected layer, obtain a t-th class vector f_t according to the normalized output vector x̂_t and the normalized weight matrix Ŵ of the fully connected layer.
Referring to Fig. 2, in a single iteration of the neural network training process, after the output vector produced by the processing of the convolutional layer has been normalized in the fully connected layer, the class vector needs to be obtained according to the normalized output vector and the normalized weight matrix.
For example, in the application scenario where ID photos are compared with everyday photos, in a single iteration of the neural network training process, after the output vector produced by the processing of the convolutional layer has been normalized in the fully connected layer, the class vector is obtained according to the normalized ID photo or everyday photo output vector and the normalized weight matrix.
Optionally, step S600 includes: multiplying the normalized sample face image output vector x̂_t by the normalized weight matrix Ŵ to obtain the class vector f_t, where f_t = Ŵᵀ x̂_t. Correspondingly, in the single iteration, after the above step has been executed for all T feature vectors, T class vectors are obtained. In a single iteration of the neural network training process, the method of obtaining the t-th class vector f_t is to multiply the normalized face image output vector x̂_t by the normalized weight matrix Ŵ; in other words, the class vector f_t = Ŵᵀ x̂_t. For example, in the application scenario where ID photos are compared with everyday photos, in a single iteration of the neural network training process, the method of obtaining the t-th class vector f_t is to multiply the normalized ID photo or everyday photo output vector x̂_t by the normalized weight matrix Ŵ.
Step S700: input the t-th class vector into the loss function module, and judge whether the number of class vectors in the loss function module has reached N_n, where n and N_n are integers greater than or equal to 1 and less than or equal to T, and the sum of the N_n over n is equal to T.
Referring to Fig. 2, in a single iteration of the neural network training process, the parameters of the convolutional layer need to be optimized, according to a specified optimization algorithm, on the basis of the loss function values output by the loss function; each loss function value needs to be calculated from multiple class vectors in the loss function module. Therefore, when the t-th class vector is input into the loss function module, it is necessary to judge whether the number of class vectors in the loss function module has reached the number specified for calculating a loss function value.
For example, in the application scenario where ID photos are compared with everyday photos, when the class vector of an ID photo or everyday photo is input into the loss function module, it is necessary to judge whether the number of class vectors in the loss function module has reached the number specified for calculating a loss function value.
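The counting-and-clearing behavior of steps S700 and S800 can be sketched as a small accumulator; the batch size of 4 and the class name are arbitrary illustrations.

```python
# Sketch of the counting logic in step S700: class vectors accumulate in
# the loss module until the preset batch size N_n is reached, then a loss
# value would be computed and the buffer cleared. Batch size 4 is arbitrary.
class LossModule:
    def __init__(self, batch_size):
        self.batch_size = batch_size
        self.buffer = []
        self.losses_emitted = 0

    def push(self, class_vector):
        self.buffer.append(class_vector)
        if len(self.buffer) >= self.batch_size:
            self.losses_emitted += 1         # a real module computes L_n here
            self.buffer.clear()              # empty the class vectors

module = LossModule(batch_size=4)
for t in range(10):
    module.push([0.1, 0.2])

assert module.losses_emitted == 2            # two full batches of 4
assert len(module.buffer) == 2               # 10 - 8 vectors still waiting
```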
Step S800: when the number of class vectors in the loss function module reaches N_n, the loss function module obtains an n-th loss function value according to the N_n class vectors; the parameters of the convolutional layer are optimized according to the n-th loss function value and the specified optimization algorithm, and the class vectors of the loss function module are emptied.
Referring to Fig. 2, in a single iteration of the neural network training process, multiple loss function values need to be obtained from the T class vectors input into the loss function module, where the n-th loss function value is calculated from N_n class vectors in the loss function module. After the n-th loss function value is obtained, the parameters of the convolutional layer need to be optimized according to the specified optimization algorithm, and the class vectors of the loss function module are emptied so that the number of class vectors in the loss function module can be counted afresh. After one iteration has been executed, the loss function module will have output multiple loss function values and, correspondingly, the parameters of the convolutional layer will have been updated multiple times.
For example, in the application scenario where ID photos are compared with everyday photos, the set of 1,330,000 feature vectors formed from the ID photo feature vectors and the everyday photo feature vectors is input into the neural network, and the loss function module outputs multiple loss function values in turn; each time a loss value is output, the parameters of the convolutional layer are updated once according to that loss value.
Optionally, step S800 includes: setting, according to a specified rule, the cosine-metric scaling parameter s in the loss function and another cosine-metric scaling parameter m in the loss function, where m is greater than or equal to 0 and less than or equal to 1; and obtaining the n-th loss value L_n according to the N_n class vectors f^(n_j) in the loss function module and the following formula, where j successively takes the integers from 1 to N_n, N_n is the number of class vectors needed to calculate the n-th loss value, and correspondingly n_j successively takes the integers from N_1 + ... + N_(n-1) + 1 to N_1 + ... + N_n:

L_n = -(1/N_n) · Σ_{j=1..N_n} log( exp(s·(f^(n_j)_{y_j} − m)) / ( exp(s·(f^(n_j)_{y_j} − m)) + Σ_{k=1..D, k≠y_j} exp(s·f^(n_j)_k) ) ),

where the n_j-th class vector f^(n_j) corresponds to the same face as the y_j-th reference face image, f^(n_j)_{y_j} is the y_j-th value of the class vector f^(n_j), and y_j is an integer greater than or equal to 1 and less than or equal to D. For example, in the application scenario where ID photos are compared with everyday photos, the cosine-metric scaling parameter s in the loss function is set to 45 and m to 0.35. The classification values f^(n_j) of the everyday photos or ID photos are input into the loss function module, j successively takes the integers from 1 to N_n, and the n-th loss value L_n is obtained. The weight matrix W consists of 50,000 weight vectors, where the y_j-th weight vector is the weight vector of the fully connected layer corresponding to the y_j-th of the 50,000 ID photos.
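Worked numbers for s = 45 and m = 0.35 show how the margin tightens the requirement on the true-class cosine; the competitor cosine of 0.60 is an invented value for illustration.

```python
import math

# With s = 45 and m = 0.35 (the example values above), the margin shifts
# the true-class cosine before scaling. The cosines 0.80 / 0.60 are
# invented values for illustration.
s, m = 45.0, 0.35
true_cos, other_cos = 0.80, 0.60

true_logit = s * (true_cos - m)              # 45 * 0.45 = 20.25
other_logit = s * other_cos                  # 45 * 0.60 = 27.0

# Without the margin the true class would dominate (36.0 > 27.0); with
# it, a cosine gap of 0.2 is no longer enough, so training must push the
# true-class cosine higher -- the intended effect of m.
assert math.isclose(true_logit, 20.25)
assert s * true_cos > other_logit            # 36.0 > 27.0
assert true_logit < other_logit              # margin flips the ordering
```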
Optionally, the specified optimization algorithm is the stochastic gradient descent method, and optimizing the parameters of the convolutional layer according to the n-th loss function value and the specified optimization algorithm includes: setting the parameters of the stochastic gradient descent method according to preset conditions; and obtaining the parameters of the convolutional layer from the n-th loss value according to the stochastic gradient descent method, and adjusting the parameters of the convolutional layer. For example, in the application scenario where ID photos are compared with everyday photos, the momentum parameter of the stochastic gradient descent method may be set to 0.9 and the learning rate of the convolutional layer to 0.1; according to the n-th loss value, the n-th set of convolutional layer parameters is obtained by the stochastic gradient descent method, and the parameters of the convolutional layer of the neural network are adjusted to this n-th set of parameters.
Step S900: with t successively taking the values 1 to T, the above steps S100, S200, S300, S400, S500, S600, S700 and S800 are executed to obtain the target parameters of the convolutional layer.
Referring to Fig. 2, in a single iteration of the neural network training process, t needs to successively take the values 1 to T, so as to ensure that all samples obtained for the single iteration are input into the neural network for processing.
For example, in the application scenario of comparing certificate photos with live photos, t is taken from 1 to T to ensure that the feature vectors of all certificate photos and all live photos obtained for the single iteration are input into the neural network.
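The loop of step S900 over t = 1 to T can be sketched as follows; the callables standing in for steps S100 to S800 are hypothetical placeholders, not names from the source:

```python
def single_iteration(feature_vectors, forward, loss_fn, update):
    """One training iteration: every one of the T feature vectors
    (D reference/certificate photos plus M sample/live photos) passes
    through the network once, covering steps S100-S800 per step t."""
    losses = []
    for t, vec in enumerate(feature_vectors, start=1):  # t runs 1..T
        out = forward(vec)            # convolutional layer + normalized FC output
        losses.append(loss_fn(out))   # loss function module
        update(losses[-1])            # optimize the convolutional layer (e.g. SGD)
    return losses

# toy usage with stand-in callables
T = 5
losses = single_iteration(
    feature_vectors=range(T),
    forward=lambda v: v * 2,
    loss_fn=lambda o: float(o),
    update=lambda loss: None,
)
print(len(losses))  # 5: one loss per feature vector
```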
Optionally, before step S900, the method further includes: initializing the parameters of the convolutional layer and the weights of the fully connected layer. For example, in the application scenario of comparing certificate photos with live photos, before the first training iteration, the parameters of the convolutional layer and the weights of the fully connected layer may be set to random values.
Optionally, after step S900, the method further includes: performing multiple iterations until a preset iteration termination condition is met, where the T feature vectors obtained in any two of the multiple iterations are identical. For example, in the application scenario of comparing certificate photos with live photos, multiple iterations are required, and the feature vectors of the certificate photos and live photos used in each iteration are identical.
Second embodiment
The face recognition method provided in this embodiment includes: performing feature extraction on a reference image feature vector and a sample image feature vector, respectively, with the convolutional layer obtained through at least one training iteration by the method described in the first embodiment, to obtain a reference feature and a sample feature; calculating the similarity between the reference feature and the sample feature; and judging, according to the similarity between the faces corresponding to the reference feature and the sample feature, whether the face corresponding to the sample image feature vector and the face corresponding to the reference image feature vector are the face of the same person. For example, in the application scenario of comparing certificate photos with live photos, feature extraction is performed on the feature vector of a certificate photo and the feature vector of a live photo, respectively, with the convolutional layer obtained through at least one training iteration by the method described in the first embodiment, and the similarity between the certificate photo feature and the live photo feature is calculated; according to this similarity, it is judged whether the face corresponding to the feature vector of the certificate photo and the face corresponding to the feature vector of the live photo are the face of the same person. Specifically, at a false alarm rate of one in a thousand, when the convolutional layer is obtained by iterative training for 120 hours according to the method of the first embodiment and is used to perform feature extraction on the feature vector of the certificate photo and the feature vector of the live photo respectively, the accuracy of judging whether the face corresponding to the feature vector of the certificate photo and the face corresponding to the feature vector of the live photo are the face of the same person can reach 60% or more, whereas in the prior art the accuracies of NormFace and CosFace are 9.2% and 32.2%, respectively. Correspondingly, at a false alarm rate of one in ten thousand, when the convolutional layer is obtained by iterative training for 120 hours according to the method of the first embodiment and is used to perform feature extraction on the feature vector of the certificate photo and the feature vector of the live photo respectively, the accuracy of the same judgment can reach 40% or more, whereas in the prior art the accuracies of NormFace and CosFace are 2.6% and 19.3%, respectively.
In a traditional scenario of comparing live photos with live photos, feature extraction is performed on the feature vectors of different live photos, respectively, with the convolutional layer obtained through at least one training iteration by the method described in the first embodiment, and the similarity between the features of the different live photos is calculated; according to this similarity, it is judged whether the faces corresponding to the feature vectors of the different live photos are the face of the same person. Specifically, at a false alarm rate of one in a thousand, when the convolutional layer is obtained by iterative training for 120 hours according to the method of the first embodiment and is used to perform feature extraction on the feature vectors of different live photos respectively, the accuracy of judging whether the faces corresponding to the feature vectors of the different live photos are the face of the same person is 94.8%, whereas in the prior art the accuracies of NormFace and CosFace are 85.0% and 91.9%, respectively. At a false alarm rate of one in ten thousand, when the convolutional layer is obtained by iterative training for 120 hours according to the method of the first embodiment and is used to perform feature extraction on the feature vectors of different live photos respectively, the accuracy of this judgment is 92.2%, whereas in the prior art the accuracies of NormFace and CosFace are 73.2% and 87.9%, respectively.
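The comparison procedure of this embodiment, extracting one feature per image, computing the cosine similarity of the two features, and deciding with a preset threshold whether they belong to the same person, can be sketched as follows; the toy feature values and the threshold 0.5 are illustrative assumptions, not from the source:

```python
import numpy as np

def cosine_similarity(a, b):
    """Cosine similarity between a reference feature and a sample feature."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def same_person(ref_feature, sample_feature, threshold=0.5):
    """Judge whether two face features belong to the same person.
    The threshold is preset; raising it lowers the false alarm rate
    (e.g. toward 1/1000 or 1/10000) at the cost of recall."""
    return cosine_similarity(ref_feature, sample_feature) >= threshold

cert = np.array([0.9, 0.1, 0.4])   # e.g. a certificate-photo feature
live = np.array([0.8, 0.2, 0.5])   # e.g. a live-photo feature
print(same_person(cert, live))     # True: the features point the same way
print(same_person(cert, -live))    # False: opposite directions
```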
Third embodiment
The embodiment of the present application provides a face recognition system, referring to Fig. 3, including:
a convolutional layer obtained through at least one training iteration by the method as described in the first embodiment, the convolutional layer being used to perform feature extraction on a reference image feature vector and a sample image feature vector, respectively, to obtain a reference feature and a sample feature;
a cosine distance computing module, used to calculate the similarity between the reference feature and the sample feature; and
a judgment module, used to judge, according to the similarity between the faces corresponding to the reference feature and the sample feature, whether the face corresponding to the sample image feature vector and the face corresponding to the reference image feature vector are the face of the same person. The judgment module may be a module that determines, using a preset threshold, whether the faces corresponding to the input reference image feature vector and sample image feature vector are the same face.
In this embodiment, the reference feature and the sample feature, obtained by performing feature extraction on the reference image feature vector and the sample image feature vector respectively with the convolutional layer obtained through at least one training iteration by the method described in the first embodiment, can express well the features of the reference image and of the sample image, thereby ensuring that the results output by the cosine distance computing module and the judgment module have high accuracy.
Fourth embodiment
The embodiment of the present application provides a storage medium on which instructions are stored; when the instructions are run on a computer, the computer is caused to execute the face recognition methods of the above first and second embodiments.
Through the above description of the embodiments, those skilled in the art can clearly understand that the present invention may be implemented by hardware, or by software together with a necessary general-purpose hardware platform. Based on this understanding, the technical solution of the present invention may be embodied in the form of a software product, which may be stored in a non-volatile storage medium (such as a CD-ROM, a USB flash drive, or a removable hard disk) and includes several instructions for causing a computer device (which may be a personal computer, a server, a network device, or the like) to execute the method of each implementation scenario of the present invention.
The above are only specific embodiments of the present application, but the protection scope of the present application is not limited thereto. Any change or replacement that can readily occur to any person skilled in the art within the technical scope disclosed in the present application shall be covered by the protection scope of the present application. Therefore, the protection scope of the present application shall be subject to the protection scope of the claims.