CN109711358A - Neural network training method, face identification method and system and storage medium - Google Patents

Neural network training method, face identification method and system and storage medium Download PDF

Info

Publication number
CN109711358A
CN109711358A (application CN201811630038.1A)
Authority
CN
China
Prior art keywords
feature vector
vector
face
feature
normalization
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201811630038.1A
Other languages
Chinese (zh)
Other versions
CN109711358B (en)
Inventor
孔彦
吴富章
赵宇航
赵玉军
王黎明
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Sichuan Yuan Jian Technology Co Ltd
Original Assignee
Sichuan Yuan Jian Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Sichuan Yuan Jian Technology Co Ltd filed Critical Sichuan Yuan Jian Technology Co Ltd
Priority to CN201811630038.1A priority Critical patent/CN109711358B/en
Publication of CN109711358A publication Critical patent/CN109711358A/en
Application granted granted Critical
Publication of CN109711358B publication Critical patent/CN109711358B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Landscapes

  • Image Analysis (AREA)
  • Collating Specific Patterns (AREA)

Abstract

The present application provides a neural network training method, a face recognition method and system, and a storage medium, relating to the field of face recognition. The neural network training method of the present application includes: adjusting the weight vectors of the fully connected layer according to the output vectors produced by the convolutional layer for the reference feature vectors; obtaining loss values for the feature vectors through a loss function layer; and optimizing the parameters of the convolutional layer with a specified optimization algorithm to obtain the final convolutional layer parameters. Correspondingly, based on this neural network training method, the present application also provides a face recognition method. Compared with the prior art, when the numbers of reference images and sample images in the training process are severely unbalanced, or when the shooting scenes of the reference images and sample images differ significantly, the face recognition method provided by the present application greatly improves recognition effectiveness and accuracy; at the same time, in general face recognition application scenarios, the face recognition method of the embodiments of the present application also achieves good recognition performance.

Description

Neural network training method, face identification method and system and storage medium
Technical field
The present application relates to the technical field of face recognition, and in particular to a neural network training method, a face recognition method and system, and a storage medium.
Background technique
Face recognition is a technology for identifying different people based on the external appearance of the human face. Its application scenarios are broad, and related research and applications have existed for decades. With the development of related technologies such as big data and deep learning in recent years, face recognition performance has improved by leaps and bounds, and the technology has become even more widespread in scenarios such as identity verification, video surveillance, and beautification and entertainment applications. Among these, the ID-versus-live comparison problem, i.e., the face recognition problem between a standard certificate (ID) photo and a photo taken in daily life (a "living photo"), has received more and more attention, because a target person only needs a certificate photo placed in the database, eliminating the trouble of having the target person register a living photo in the system.
Prior art face recognition models achieve good recognition performance and accuracy in general face recognition scenarios. However, in some application scenarios, such as ID-versus-live comparison, when the numbers of reference images and sample images are severely unbalanced, or when the shooting scenes of the reference images and sample images differ significantly, the recognition performance and accuracy of prior art face recognition methods are very low.
Summary of the invention
The neural network training method, face recognition method and system, and storage medium provided by the embodiments of the present invention can solve the problem in the prior art that face recognition accuracy is low when the numbers of reference images and sample images are severely unbalanced, or when the shooting scenes of the reference images and sample images differ significantly.
To achieve the above objectives, the technical solutions adopted in the embodiments of the present invention are as follows:
In a first aspect, an embodiment of the present application provides a neural network training method. The neural network includes a convolutional layer, a fully connected layer, and a loss function module connected in sequence. During a single training iteration, the method comprises: obtaining T feature vectors, the T feature vectors including the feature vectors of D reference face images and the feature vectors of M sample face images, wherein the number D of reference face image feature vectors equals the number of weight vectors of the fully connected layer, each sample face image among the M sample face images and one reference face image among the D reference face images are face images of the same person, T equals the sum of D and M, and T, D, and M are positive integers; setting the learning rate of the fully connected layer to zero; inputting the t-th feature vector among the T feature vectors into the neural network and obtaining the t-th output feature vector x_t through the processing of the convolutional layer, wherein t is an integer greater than or equal to 1 and less than or equal to T, and the t-th feature vector is the feature vector of a reference face image or of a sample face image among the T feature vectors; normalizing the t-th output feature vector in the fully connected layer to obtain the t-th normalized output feature vector x̂_t; judging whether the t-th feature vector is the feature vector of a reference face image, and, when it is, adjusting the normalized weight matrix Ŵ of the fully connected layer according to the t-th normalized output feature vector, the normalized weight matrix of the fully connected layer being composed of D column vectors; in the fully connected layer, obtaining the t-th class vector f_t according to the normalized output vector and the normalized weight matrix of the fully connected layer; inputting the t-th class vector into the loss function module, and judging whether the number of class vectors in the loss function module has reached N_n, wherein n and N_n are integers greater than or equal to 1 and less than or equal to T, and the sum of the N_n equals T; when the number of class vectors in the loss function module reaches N_n, obtaining, by the loss function module, the n-th loss function value from the N_n class vectors, optimizing the parameters of the convolutional layer according to the n-th loss function value with a specified optimization algorithm, and clearing the class vectors in the loss function module; and, after the above steps have been executed for t from 1 to T in turn, obtaining the target parameters of the convolutional layer.
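The single-iteration procedure above can be sketched as follows. This is a toy illustration only, not the network of the embodiment: the convolutional layer is stood in for by a fixed linear map, and the dimensions, counts, and identity pairing are all hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

D, M = 3, 6            # reference and sample image counts (toy values)
T = D + M              # total feature vectors in one iteration
FEAT = 4               # backbone output dimension (hypothetical)

conv_params = rng.normal(size=(FEAT, FEAT))   # stand-in "convolutional layer"

def backbone(x):
    """Toy convolutional layer: here just a fixed linear map."""
    return conv_params @ x

def l2_normalize(v):
    return v / np.linalg.norm(v)

# The first D inputs play the role of reference images, the rest samples.
inputs = [rng.normal(size=FEAT) for _ in range(T)]

W_hat = np.zeros((FEAT, D))    # fully connected weights; the learning rate is
                               # zero, so W_hat changes only by replacement below
class_vectors = []

for t in range(T):
    x_hat = l2_normalize(backbone(inputs[t]))   # steps S300-S400
    if t < D:                                   # step S500: a reference image:
        W_hat[:, t] = x_hat                     # replace the t-th weight vector
    class_vectors.append(x_hat @ W_hat)         # step S600: class vector f_t
```

After the loop, every column of Ŵ is a unit vector, and each reference image scores cosine 1 against its own weight column.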
In the embodiments of the present application, within a single iteration, the weight matrix of the fully connected layer is adjusted according to the output vectors that the reference face images produce after passing through the convolutional layer. Therefore, during training, the learning rate of the fully connected layer of the neural network is set to zero, so as to ensure that, when the neural network is trained with the reference feature vectors and sample feature vectors, the fully connected layer is not affected by the backpropagation of the neural network. In a single iteration, the feature vectors are input into the neural network in turn; when the feature vector input into the neural network is the feature vector of a reference face image, that feature vector is processed by the convolutional layer to obtain a reference output feature vector, and the weight matrix of the fully connected layer is adjusted according to this reference output feature vector. When the number of reference images is far smaller than the number of sample images, or when the shooting scenes of the reference images and sample images differ significantly, adjusting the weight vectors of the fully connected layer according to the reference output feature vectors helps improve the performance of the neural network in reference-versus-sample comparison scenarios and can also accelerate the convergence of the neural network training process.
Further, in a single iteration, the feature vectors of the reference face images and sample face images are processed in turn by the convolutional layer, the fully connected layer, and the loss function module to obtain multiple loss values, and the parameters of the convolutional layer are adjusted to their final values according to each loss value and the specified optimization algorithm. In a face recognition application, substituting these final parameters for the convolutional layer parameters of a face recognition system makes it possible to identify whether the faces corresponding to a reference image and a sample image are the same face. When the number of reference face images in the training process is far smaller than the number of sample face images, because the weight matrix of the fully connected layer is adjusted according to the output feature vectors of the reference images while the convolutional layer parameters are trained, the output vectors of the sample face images and of the reference face images produced by the convolutional layer can more accurately represent the sample face features and reference face features. Therefore, based on these convolutional layer parameters, comparing the reference image feature vectors with the sample image feature vectors through the face recognition system achieves good accuracy.
With reference to the first aspect, an embodiment of the present application provides a first possible implementation of the first aspect. When the t-th feature vector is the feature vector of a reference face image, adjusting the normalized weight matrix Ŵ of the fully connected layer according to the t-th normalized output feature vector x̂_t includes: when the t-th feature vector is the feature vector of the d-th reference face image, replacing the d-th weight vector of the fully connected layer with the t-th normalized output vector x̂_t of the reference image. Correspondingly, in the single iteration, after this step has been executed for all D reference face feature vectors, the normalized weight matrix Ŵ of the fully connected layer, composed of the D normalized output vectors of the reference images, is obtained.
In the neural network training method of the embodiments of the present application, replacing the weight vectors of the fully connected layer with the normalized output vectors of the reference images allows the optimization process of the neural network to focus more on the angular part of the feature vectors that characterizes the features, which helps improve the performance of the neural network.
With reference to the first aspect, an embodiment of the present application provides a second possible implementation of the first aspect. In the fully connected layer, obtaining the t-th class vector f_t according to the normalized output vector x̂_t and the normalized weight matrix Ŵ of the fully connected layer includes: multiplying the normalized sample face image output vector x̂_t by the normalized weight matrix Ŵ to obtain the class vector f_t, where f_t = x̂_t Ŵ. Correspondingly, in the single iteration, after this step has been executed for all T feature vectors, T class vectors are obtained.
In the embodiments of the present application, the method of obtaining the t-th class vector f_t from the normalized output vector x̂_t and the normalized weight matrix Ŵ of the fully connected layer is to multiply the normalized sample face image output vector x̂_t by the normalized weight matrix Ŵ, i.e., the class vector f_t = x̂_t Ŵ. The resulting class vector f_t expresses the classification characteristics of the sample face feature vector well.
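Because both x̂_t and the columns of Ŵ are unit vectors, every entry of f_t = x̂_t Ŵ is the cosine of an angle and lies in [-1, 1]. A minimal numeric check of this property (all vectors here are hypothetical):

```python
import numpy as np

def l2_normalize(v):
    return v / np.linalg.norm(v)

# Hypothetical normalized weight matrix: D = 2 unit column vectors.
W_hat = np.column_stack([l2_normalize(np.array([1.0, 2.0, 2.0])),
                         l2_normalize(np.array([-2.0, 1.0, 0.0]))])

x_hat = l2_normalize(np.array([1.0, 2.0, 2.0]))  # normalized output vector

f_t = x_hat @ W_hat   # class vector: cosine against each weight vector
```

Here f_t[0] is 1 (x̂ is parallel to the first weight vector) and f_t[1] is 0 (the two vectors happen to be orthogonal).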
With reference to the second possible implementation of the first aspect, an embodiment of the present application provides a third possible implementation of the first aspect. The loss function module obtaining the n-th loss function value from the N_n class vectors includes: setting, according to a specified rule, a cosine-measure scaling parameter s and a cosine-measure margin parameter m in the loss function, where m is greater than or equal to 0 and less than or equal to 1; and obtaining the n-th loss value L_n from the N_n class vectors f_{n_j} in the loss function module according to the following formula, where j takes the integers from 1 to N_n in turn, N_n is the number of class vectors needed to compute the n-th loss value, and, correspondingly, n_j takes the integers from N_1 + … + N_{n-1} + 1 to N_1 + … + N_n in turn:
L_n = -(1/N_n) · Σ_{j=1..N_n} log( e^{s·(f_{n_j}(y_j) − m)} / ( e^{s·(f_{n_j}(y_j) − m)} + Σ_{i≠y_j} e^{s·f_{n_j}(i)} ) )
wherein the n_j-th class vector f_{n_j} corresponds to the same face as the y_j-th reference face image, f_{n_j}(y_j) is the y_j-th entry of the class vector f_{n_j}, and y_j is an integer greater than or equal to 1 and less than or equal to D.
In the embodiments of the present application, a certain number of class vectors are input into the loss function module, and loss values are obtained through the processing of the loss function module; the resulting loss values reflect the performance of the convolutional layer in the neural network well. In a single iteration, multiple loss values are obtained. The scaling and margin parameters applied to the cosine distances in the loss function help the neural network widen the feature distance between sample faces and reference faces, thereby improving the performance of the recognition model.
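A sketch of a loss of this form, assuming the large-margin cosine formulation implied by the scale s and margin m; the defaults s = 30 and m = 0.35 are hypothetical values, not values specified by the application:

```python
import math

def margin_loss(class_vectors, labels, s=30.0, m=0.35):
    """Mean large-margin cosine loss over one group of N_n class vectors.

    class_vectors: list of length-D cosine vectors f (one per image)
    labels:        y_j, index of the matching reference face for each f
    """
    total = 0.0
    for f, y in zip(class_vectors, labels):
        target = math.exp(s * (f[y] - m))          # margin applied to the true class
        others = sum(math.exp(s * c) for i, c in enumerate(f) if i != y)
        total += -math.log(target / (target + others))
    return total / len(class_vectors)
```

The loss is small when the cosine against the correct reference exceeds the other cosines by more than the margin, and grows quickly otherwise.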
With reference to the third possible implementation of the first aspect, an embodiment of the present application provides a fourth possible implementation of the first aspect. The specified optimization algorithm is stochastic gradient descent, and optimizing the parameters of the convolutional layer according to the n-th loss function value with the specified optimization algorithm includes: setting the parameters of the stochastic gradient descent method according to preset conditions; and obtaining the parameters of the convolutional layer according to the n-th loss value and the stochastic gradient descent method, and adjusting the parameters of the convolutional layer.
In the embodiments of the present application, the optimization algorithm for optimizing the convolutional layer of the neural network is specified, and the parameters of the convolutional layer are optimized according to the loss values and the specified optimization algorithm; after multiple iterations, the final parameters of the convolutional layer of the neural network are obtained.
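The text does not spell out the update rule; the generic stochastic gradient descent step it refers to is θ ← θ − lr·∇L, sketched below on a toy one-parameter problem (the learning rate is a hypothetical choice):

```python
def sgd_step(params, grads, lr=0.01):
    """One stochastic gradient descent update: theta <- theta - lr * grad."""
    return [p - lr * g for p, g in zip(params, grads)]

# Minimizing L(p) = p^2 (gradient 2p) drives the parameter toward 0.
p = [4.0]
for _ in range(100):
    p = sgd_step(p, [2.0 * p[0]], lr=0.1)
```

In the actual method the gradients would come from backpropagating each loss value L_n through the convolutional layer only, since the fully connected layer's learning rate is zero.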
With reference to the first aspect, an embodiment of the present application provides a fifth possible implementation of the first aspect. Before the first training iteration, the method further includes: initializing the parameters of the convolutional layer and the weights of the fully connected layer.
In the embodiments of the present application, before the first training iteration, the parameters of the convolutional layer and the weights of the fully connected layer need to be initialized to ensure that the first iteration can start normally.
With reference to the first aspect, an embodiment of the present application provides a sixth possible implementation of the first aspect, comprising: performing multiple iterations until a preset iteration stopping criterion is met, wherein the T feature vectors obtained in any two iterations of the multiple iterations are identical.
In the embodiments of the present application, a training process needs to perform multiple iterations, and the feature vectors used in each iteration are identical.
In a second aspect, a face recognition method provided by an embodiment of the present application comprises: performing feature extraction on a reference image feature vector and a sample image feature vector respectively through a convolutional layer obtained by at least one training iteration of the method according to the first aspect, to obtain a reference feature and a sample feature; calculating the similarity between the reference feature and the sample feature; and judging, according to the similarity between the faces corresponding to the reference feature and the sample feature, whether the face corresponding to the sample image feature vector and the face corresponding to the reference image feature vector are the face of the same person.
In the embodiments of the present application, feature extraction is performed on the reference image feature vector and the sample image feature vector respectively through the convolutional layer obtained in the first aspect, to obtain the reference feature and the sample feature; the similarity between the reference feature and the sample feature is calculated; and then, according to the similarity between the faces corresponding to the reference feature and the sample feature, it is judged whether the face corresponding to the sample image feature vector and the face corresponding to the reference image feature vector are the face of the same person. The face recognition system obtained by the embodiments of the present application has good effectiveness and accuracy when judging whether the faces corresponding to a reference image feature vector and a sample image feature vector are the same face.
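The comparison step can be sketched as follows; cosine similarity matches the cosine distance module of the third aspect, but the threshold value is a hypothetical placeholder that would in practice be chosen on validation data:

```python
import math

def cosine_similarity(a, b):
    """Cosine of the angle between two feature vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb)

def same_person(reference_feature, sample_feature, threshold=0.5):
    """Hypothetical decision rule: same face iff similarity >= threshold."""
    return cosine_similarity(reference_feature, sample_feature) >= threshold
```

Identical features give similarity 1 and are accepted; orthogonal features give similarity 0 and are rejected.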
In a third aspect, a face recognition system provided by an embodiment of the present application comprises: a convolutional layer obtained by at least one training iteration of the method according to the first aspect, the convolutional layer being used to perform feature extraction on a reference image feature vector and a sample image feature vector respectively to obtain a reference feature and a sample feature; a cosine distance computation module, for calculating the similarity between the reference feature and the sample feature; and a judgment module, for judging, according to the similarity between the faces corresponding to the reference feature and the sample feature, whether the face corresponding to the sample image feature vector and the face corresponding to the reference image feature vector are the face of the same person.
In a fourth aspect, an embodiment of the present application provides a storage medium on which instructions are stored; when the instructions are run on a computer, the computer executes the method in any possible implementation of the first aspect or the second aspect.
Other features and advantages of the disclosure will be set forth in the following description; alternatively, some features and advantages can be inferred or unambiguously determined from the specification, or learned by implementing the above techniques of the disclosure.
To make the above objectives, features, and advantages of the present application clearer and easier to understand, preferred embodiments are cited below and described in detail in conjunction with the appended drawings.
Detailed description of the invention
To more clearly illustrate the technical solutions in the embodiments of the present application, the drawings needed in the embodiments are briefly described below. It should be understood that the following drawings illustrate only some embodiments of the present application and therefore should not be construed as limiting its scope. For those of ordinary skill in the art, other relevant drawings can also be obtained from these drawings without creative effort.
Fig. 1 is the functional block diagram of neural network in the embodiment of the present application;
Fig. 2 is the flow chart of neural network training method in the embodiment of the present application;
Fig. 3 is the functional block diagram of face identification system in the embodiment of the present application.
Specific embodiment
The technical solutions in the embodiments of the present application are described below clearly and completely in conjunction with the drawings in the embodiments of the present application. Obviously, the described embodiments are only some of the embodiments of the present application, not all of them. Therefore, the following detailed description of the embodiments of the present application provided in the drawings is not intended to limit the claimed scope of the present application, but merely represents selected embodiments of the present application. Based on the embodiments of the present application, all other embodiments obtained by those skilled in the art without creative effort shall fall within the protection scope of the present application.
First embodiment
This embodiment provides a neural network training method and a face recognition method. It should be noted that the steps illustrated in the flowcharts of the drawings can be executed in a computer system, such as a set of computer-executable instructions, and that, although a logical order is shown in the flowcharts, in some cases the steps may be executed in an order different from the one shown or described here. This embodiment is described in detail below.
Referring to Fig. 1, the neural network used by the training method provided in this embodiment includes a convolutional layer, a fully connected layer, and a loss function module connected in sequence.
The method may include multiple training iterations.
Referring to Fig. 2, a single iteration of the training process comprises: step S100, step S200, step S300, step S400, step S500, step S600, step S700, step S800, and step S900.
Step S100: obtain T feature vectors, the T feature vectors including the feature vectors of D reference face images and the feature vectors of M sample face images, wherein the number D of reference face image feature vectors equals the number of weight vectors of the fully connected layer, each sample face image among the M sample face images and one reference face image among the D reference face images are face images of the same person, T equals the sum of D and M, and T, D, and M are positive integers.
Referring to Fig. 2, in a single iteration of the neural network training process, the reference face feature vectors and the set of sample face feature vectors need to be obtained. The number of reference face image feature vectors is the same as the number of weight vectors of the fully connected layer, and each sample face in the sample face feature vector set corresponds to one of the reference faces.
For example, in an application scenario where certificate photos are compared with living photos, a single iteration of the neural network training process needs to obtain the feature vectors of D certificate photos and the feature vector set of M living photos, forming a set of T feature vectors. The number D of certificate photos is the same as the number of weight vectors of the fully connected layer, each of the M living photos corresponds to the same face as one of the D certificate photos, and the total number of certificate photo and living photo feature vectors is T. Specifically, D may be taken as 50000 and M as 1280000, i.e., the feature vectors of 50000 certificate photos and the feature vector set of 1280000 living photos are obtained, forming 1330000 feature vectors.
Step S200: set the learning rate of the fully connected layer to zero.
Referring to Fig. 2, in a single iteration of the neural network training process, the learning rate of the fully connected layer needs to be set to zero to ensure that the fully connected layer is not affected by backpropagation during the neural network training process.
For example, in the application scenario where certificate photos are compared with living photos, in a single iteration of the neural network training process, the learning rate of the fully connected layer is set to zero to ensure that the fully connected layer is not affected by backpropagation during the neural network training process.
Step S300: input the t-th feature vector among the T feature vectors into the neural network and obtain the t-th output feature vector x_t through the processing of the convolutional layer, wherein t is an integer greater than or equal to 1 and less than or equal to T, and the t-th feature vector is the feature vector of a reference face image or of a sample face image among the T feature vectors.
Referring to Fig. 2, in a single iteration of the neural network training process, the T feature vectors are sent into the neural network in turn and, through the processing of the convolutional layer, T output feature vectors are obtained in turn, the t-th output feature vector being x_t.
For example, in the application scenario where certificate photos are compared with living photos, the 1330000 feature vectors need to be sent into the neural network in turn and, through the processing of the convolutional layer, 1330000 output feature vectors are obtained in turn, the t-th output feature vector being x_t.
Step S400: normalize the t-th output feature vector in the fully connected layer to obtain the t-th normalized output feature vector x̂_t.
Referring to Fig. 2, in a single iteration of the neural network training process, the output vectors produced by the convolutional layer need to be normalized in the fully connected layer to obtain the normalized output feature vectors, the t-th normalized output feature vector being x̂_t.
For example, in the application scenario where certificate photos are compared with living photos, the output vectors of the certificate photos and living photos produced by the convolutional layer need to be normalized in the fully connected layer to obtain the normalized output feature vectors of the certificate photos and living photos, the t-th normalized output feature vector being x̂_t.
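The application does not specify which normalization step S400 uses; assuming the usual L2 normalization, each output vector is scaled to unit length:

```python
import math

def l2_normalize(v):
    """Scale a vector to unit Euclidean length (L2 normalization)."""
    norm = math.sqrt(sum(x * x for x in v))
    return [x / norm for x in v]

x_hat = l2_normalize([3.0, 4.0])   # a 3-4-5 triangle: yields [0.6, 0.8]
```

After this step, dot products between normalized vectors are cosines, which is what the class vector computation of step S600 relies on.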
Step S500: judge whether the t-th feature vector is the feature vector of a reference face image; when the t-th feature vector is the feature vector of a reference face image, adjust the normalized weight matrix Ŵ of the fully connected layer according to the t-th normalized output feature vector x̂_t, the normalized weight matrix Ŵ of the fully connected layer being composed of D column vectors.
Referring to Fig. 2, in a single iteration of the neural network training process, the weight matrix of the fully connected layer needs to be adjusted according to the feature vectors of the reference face images.
For example, in the application scenario where certificate photos are compared with living photos, the weight matrix composed of the 50000 weight vectors of the fully connected layer needs to be adjusted according to the feature vectors of the 50000 certificate photos.
Optionally, step S500 includes: when the t-th feature vector is the feature vector of the d-th reference face image, replacing the d-th weight vector of the fully connected layer with the t-th normalized output vector x̂_t of the reference image. Correspondingly, in the single iteration, after this step has been executed for all D reference face feature vectors, the normalized weight matrix Ŵ of the fully connected layer, composed of the D normalized output vectors of the reference images, is obtained. In a single iteration of the neural network training process, the method of adjusting the weight matrix of the fully connected layer according to the feature vectors of the reference face images is: when the t-th feature vector is the feature vector of the d-th reference face image, replace the d-th weight vector of the fully connected layer with the t-th normalized output vector x̂_t of the reference image; after the single iteration has been executed, the normalized weight matrix Ŵ of the fully connected layer is composed of the D normalized output vectors of the reference images. For example, in the application scenario where certificate photos are compared with living photos, the method of adjusting the weight matrix of the fully connected layer according to the feature vectors of the certificate photos is: replace the d-th weight vector of the fully connected layer with the normalized output vector of the d-th certificate photo; after the single iteration has been executed, the normalized weight matrix Ŵ of the fully connected layer is composed of the D normalized output vectors of the certificate photos.
Step S600: in the fully connected layer, obtain the t-th class vector f_t according to the normalized output vector x̂_t and the normalized weight matrix Ŵ of the fully connected layer.
Referring to Fig. 2, in a single iteration of the neural network training process, after the output vector obtained through the processing of the convolutional layer has been normalized in the fully connected layer, the class vector needs to be obtained from the normalized output vector and the normalized weight matrix.
For example, in the application scenario where certificate photos are compared with living photos, in a single iteration of the neural network training process, after the output vectors obtained through the processing of the convolutional layer have been normalized in the fully connected layer, the class vector needs to be obtained from the normalized output vector of the certificate photo or living photo and the normalized weight matrix.
Optionally, step S600 includes: multiplying the normalized sample face image output vector x̂_t by the normalized weight matrix Ŵ to obtain the class vector f_t, where f_t = x̂_t Ŵ. Correspondingly, in the single iteration, after this step has been executed for all T feature vectors, T class vectors are obtained. In a single iteration of the neural network training process, the method of obtaining the t-th class vector f_t is to multiply the normalized face image output vector x̂_t by the normalized weight matrix Ŵ; in other words, the class vector f_t = x̂_t Ŵ. For example, in the application scenario where certificate photos are compared with living photos, in a single iteration of the neural network training process, the method of obtaining the t-th class vector f_t is to multiply the normalized output vector x̂_t of the certificate photo or living photo by the normalized weight matrix Ŵ.
Step S700: input the t-th class vector into the loss function module, and judge whether the number of class vectors in the loss function module has reached N_n, where n and N_n are integers greater than or equal to 1 and less than or equal to T, and the values N_n sum to T.
Referring to Fig. 2, in a single iteration of the training process the parameters of the convolutional layers must be optimized, with a specified optimization algorithm, according to the loss function values output by the loss function. Each loss function value in the loss function module is computed from multiple class vectors, so when the t-th class vector is input into the loss function module it is necessary to judge whether the number of class vectors in the module has reached the number specified for computing a loss function value.
For example, in the ID-photo vs. life-photo comparison scenario, when the class vector of an ID photo or life photo is input into the loss function module, it must be judged whether the number of class vectors in the module has reached the number specified for computing a loss function value.
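The accumulate-until-N_n bookkeeping can be sketched as a small buffer; this is our illustrative structure, not code from the patent:

```python
class LossBuffer:
    """Collects class vectors until N_n of them have arrived."""
    def __init__(self, batch_size):
        self.batch_size = batch_size  # N_n for this loss value
        self.vectors = []
        self.labels = []

    def add(self, class_vector, label):
        """Store one class vector; return True once a loss can be computed."""
        self.vectors.append(class_vector)
        self.labels.append(label)
        return len(self.vectors) >= self.batch_size

    def flush(self):
        """Hand the batch to the loss computation and empty the module (step S800)."""
        batch = (self.vectors, self.labels)
        self.vectors, self.labels = [], []
        return batch

buf = LossBuffer(batch_size=2)
ready_1 = buf.add([0.9, 0.1], 0)   # first class vector: not yet ready
ready_2 = buf.add([0.2, 0.8], 1)   # second class vector: ready
vecs, labs = buf.flush()
```

The flush corresponds to "emptying the class vectors of the loss function module" so the count can be accumulated afresh for the next loss value.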
Step S800: when the number of class vectors in the loss function module reaches N_n, the loss function module obtains the n-th loss function value from the N_n class vectors; according to the n-th loss function value and a specified optimization algorithm, the parameters of the convolutional layers are optimized, and the class vectors of the loss function module are emptied.
Referring to Fig. 2, in a single iteration of the training process, multiple loss function values must be obtained from the T class vectors input into the loss function module, the n-th loss function value being computed from N_n class vectors within the module. After the n-th loss function value has been obtained, the parameters of the convolutional layers are optimized with the specified optimization algorithm and the class vectors of the loss function module are emptied, so that the class-vector count in the module can be accumulated afresh. After one full iteration has been executed, the loss function module has output multiple loss function values, and correspondingly the parameters of the convolutional layers have been updated multiple times.
For example, in the ID-photo vs. life-photo comparison scenario, a feature vector set composed of 1,330,000 ID-photo and life-photo feature vectors is input into the neural network; the loss function module outputs loss values in sequence, and each time a loss value is output, the parameters of the convolutional layers are updated according to it.
Optionally, step S800 includes: setting, according to a specified rule, the cosine scaling parameter s in the loss function and the cosine margin parameter m in the loss function, where m is greater than or equal to 0 and less than or equal to 1; and obtaining the n-th loss value L_n from the N_n class vectors in the loss function module by the following formula, where j takes the integers 1 to N_n in turn, N_n is the number of class vectors needed to compute the n-th loss value, and correspondingly n_j takes in turn the integers from N_1 + ... + N_{n-1} + 1 to N_1 + ... + N_n:

L_n = -(1/N_n) · Σ_{j=1}^{N_n} log( exp(s·(f^(n_j)_{y_j} − m)) / ( exp(s·(f^(n_j)_{y_j} − m)) + Σ_{k≠y_j} exp(s·f^(n_j)_k) ) )

where the n_j-th class vector f^(n_j) and the y_j-th reference face image correspond to the same face, f^(n_j)_{y_j} is the y_j-th value of the class vector f^(n_j), and y_j is an integer greater than or equal to 1 and less than or equal to D. For example, in the ID-photo vs. life-photo comparison scenario, the cosine scaling parameter s in the loss function is set to 45 and m to 0.35. The class vectors of the life photos or ID photos are input into the loss function module, j takes the integers 1 to N_n in turn, and the n-th loss value L_n is obtained. The weight matrix W consists of 50,000 weight vectors, the y_j-th weight vector being the fully connected layer weight vector corresponding to the y_j-th of the 50,000 ID photos.
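Reading s as a scale factor and m as an additive cosine margin (the example values s = 45, m = 0.35 match the convention of CosFace-style large-margin losses), one plausible implementation of the per-batch loss is the following; treat it as a sketch of that family of losses rather than the patent's exact formula:

```python
import numpy as np

def margin_loss(F, y, s=45.0, m=0.35):
    """Large-margin cosine loss over N_n class vectors.
    F: (N_n, D) array of cosine class vectors; y: (N_n,) target reference indices."""
    N = F.shape[0]
    idx = np.arange(N)
    logits = s * F.astype(float)
    logits[idx, y] = s * (F[idx, y] - m)         # subtract the margin from the target cosine
    logits -= logits.max(axis=1, keepdims=True)  # numerical stability before the softmax
    log_prob = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return -log_prob[idx, y].mean()

F = np.array([[0.9, 0.1],
              [0.2, 0.8]])
y = np.array([0, 1])
loss = margin_loss(F, y)
```

Subtracting m from the target cosine before the softmax forces the target similarity to beat the others by at least the margin, which is what makes the loss stricter than plain normalized softmax.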
Optionally, the specified optimization algorithm is stochastic gradient descent; optimizing the parameters of the convolutional layers according to the n-th loss function value with the specified optimization algorithm comprises: setting the parameters of the stochastic gradient descent method according to preset conditions; and obtaining the parameters of the convolutional layers from the n-th loss value by the stochastic gradient descent method, then adjusting the parameters of the convolutional layers. For example, in the ID-photo vs. life-photo comparison scenario, the momentum of the stochastic gradient descent method may be set to 0.9 and the learning rate of the convolutional layers to 0.1; the n-th set of convolutional-layer parameters is obtained from the n-th loss value by stochastic gradient descent, and the parameters of the convolutional layers of the neural network are adjusted to this n-th set of parameters.
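With momentum 0.9 and learning rate 0.1 as in the example, one update of stochastic gradient descent with momentum looks like the following sketch (the exact update convention is our assumption; the patent does not spell it out):

```python
import numpy as np

def sgd_momentum_step(param, grad, velocity, lr=0.1, momentum=0.9):
    """One SGD-with-momentum update: v <- momentum*v - lr*grad; param <- param + v."""
    velocity = momentum * velocity - lr * grad
    return param + velocity, velocity

w = np.array([1.0, -2.0])
v = np.zeros_like(w)
# First step: velocity starts at zero, so this reduces to plain SGD.
w, v = sgd_momentum_step(w, np.array([0.5, -0.5]), v)
```

On later steps the accumulated velocity smooths the parameter trajectory across the many loss values produced within one iteration.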
Step S900: letting t take the values 1 to T in turn, execute the above steps S100, S200, S300, S400, S500, S600, S700, and S800, and obtain the target parameters of the convolutional layers.
Referring to Fig. 2, in a single iteration of the training process t must take each value from 1 to T, to ensure that all samples obtained for the single iteration are input into the neural network and processed.
For example, in the ID-photo vs. life-photo comparison scenario, t runs from 1 to T to ensure that the feature vectors of all ID photos and all life photos obtained for the single iteration are input into the neural network.
Optionally, before step S900 the method further includes: initializing the parameters of the convolutional layers and the weights of the fully connected layer. For example, in the ID-photo vs. life-photo comparison scenario, the parameters of the convolutional layers and the weights of the fully connected layer may be set to random values before the first training iteration.
Optionally, after step S900 the method further includes: executing multiple iterations until a preset iteration termination condition is met, the T feature vectors obtained being identical in any two iterations of the multiple iterations. For example, in the ID-photo vs. life-photo comparison scenario, multiple iterations are required, and each iteration uses the same ID-photo and life-photo feature vectors.
Second embodiment
The face recognition method provided in this embodiment comprises: extracting features from a reference-image feature vector and a sample-image feature vector respectively, with the convolutional layers obtained by at least one training iteration of the method described in the first embodiment, to obtain a reference feature and a sample feature; computing the similarity of the reference feature and the sample feature; and judging, from the similarity of the faces corresponding to the reference feature and the sample feature, whether the face corresponding to the sample-image feature vector and the face corresponding to the reference-image feature vector belong to the same person. For example, in the ID-photo vs. life-photo comparison scenario, the convolutional layers obtained by at least one training iteration of the method of the first embodiment extract features from the ID-photo feature vector and the life-photo feature vector respectively; the similarity of the ID-photo feature and the life-photo feature is computed; and from the similarity of the corresponding faces it is judged whether the face corresponding to the ID-photo feature vector and the face corresponding to the life-photo feature vector belong to the same person. Specifically, at a false alarm rate of one in a thousand, when the convolutional layers are obtained by 120 hours of iterative training by the method of the first embodiment and used to extract features from the ID-photo and life-photo feature vectors, the accuracy of judging whether the two faces belong to the same person reaches 60% or more, whereas in the prior art the accuracy of NormFace and CosFace is 9.2% and 32.2% respectively. Correspondingly, at a false alarm rate of one in ten thousand, with the convolutional layers obtained by the same 120 hours of iterative training, the accuracy of the same judgment reaches 40% or more, whereas in the prior art the accuracy of NormFace and CosFace is 2.6% and 19.3% respectively.
In the traditional life-photo vs. life-photo comparison scenario, the convolutional layers obtained by at least one training iteration of the method of the first embodiment extract features from the feature vectors of different life photos; the similarity of the different life-photo features is computed; and from the similarity of the corresponding faces it is judged whether the faces corresponding to the different life-photo feature vectors belong to the same person. Specifically, at a false alarm rate of one in a thousand, with the convolutional layers obtained by 120 hours of iterative training by the method of the first embodiment, the accuracy of this judgment is 94.8%, versus 85.0% and 91.9% for NormFace and CosFace in the prior art; at a false alarm rate of one in ten thousand, with the same 120 hours of iterative training, the accuracy is 92.2%, versus 73.2% and 87.9% for NormFace and CosFace in the prior art.
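Once the features are extracted, the verification step reduces to thresholding a cosine similarity. A sketch (the threshold 0.5 is purely illustrative; in practice it is chosen to hit a target false alarm rate such as one in a thousand or one in ten thousand):

```python
import numpy as np

def verify(feat_ref, feat_sample, threshold=0.5):
    """Return the cosine similarity of two face features and the
    same-person decision obtained by thresholding it."""
    cos = float(feat_ref @ feat_sample /
                (np.linalg.norm(feat_ref) * np.linalg.norm(feat_sample)))
    return cos, cos >= threshold

a = np.array([1.0, 0.0, 1.0])
sim, same = verify(a, np.array([1.0, 0.1, 0.9]))     # near-duplicate feature
_, diff_same = verify(a, np.array([0.0, 1.0, 0.0]))  # orthogonal feature
```

Raising the threshold lowers the false alarm rate at the cost of more missed matches, which is the trade-off behind the 1/1000 and 1/10000 operating points quoted above.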
3rd embodiment
The embodiment of the present application provides a face recognition system, referring to Fig. 3, comprising:
convolutional layers obtained by at least one training iteration of the method of the first embodiment, the convolutional layers being used to extract features from a reference-image feature vector and a sample-image feature vector respectively, obtaining a reference feature and a sample feature;
a cosine distance computing module, for computing the similarity of the reference feature and the sample feature;
a judgment module, for judging, according to the similarity of the faces corresponding to the reference feature and the sample feature, whether the face corresponding to the sample-image feature vector and the face corresponding to the reference-image feature vector belong to the same person. The judgment module may use a preset threshold to decide whether the faces corresponding to the input reference-image and sample-image feature vectors are the same face.
In this embodiment, the reference feature and the sample feature extracted from the reference-image and sample-image feature vectors by convolutional layers obtained through at least one training iteration of the method of the first embodiment express the features of the reference image and of the sample image well, ensuring that the results output by the cosine distance computing module and the judgment module have high accuracy.
Fourth embodiment
The embodiment of the present application provides a storage medium on which instructions are stored; when the instructions are run on a computer, they cause the computer to execute the face recognition method of the first and second embodiments above.
Through the above description of the embodiments, those skilled in the art will clearly understand that the present invention may be implemented in hardware, or in software together with a necessary general-purpose hardware platform. Based on this understanding, the technical solution of the present invention may be embodied as a software product, which may be stored in a non-volatile storage medium (such as a CD-ROM, USB flash drive, or removable hard disk) and includes instructions that cause a computer device (a personal computer, server, network device, or the like) to execute the method of each implementation scenario of the present invention.
The above are only specific embodiments of the application, but the protection scope of the application is not limited thereto; any change or replacement that a person skilled in the art can readily conceive within the technical scope disclosed by the application shall be covered by the protection scope of the application. Therefore, the protection scope of the application shall be subject to the protection scope of the claims.

Claims (10)

1. A neural network training method, characterized in that the neural network comprises, connected in sequence, convolutional layers, a fully connected layer, and a loss function module; during a single training iteration, the method comprises:
obtaining T feature vectors, the T feature vectors comprising D feature vectors of reference face images and M feature vectors of sample face images, wherein the number D of reference-face-image feature vectors equals the number of weight vectors of the fully connected layer, each of the M sample face images depicts the same person as one of the D reference face images, T equals the sum of D and M, and T, D, and M are positive integers;
setting the learning rate of the fully connected layer to zero;
inputting the t-th of the T feature vectors into the neural network and, through processing by the convolutional layers, obtaining the t-th output feature vector x_t, wherein t is an integer greater than or equal to 1 and less than or equal to T, and the t-th feature vector is the feature vector of a reference face image or of a sample face image among the T feature vectors;
normalizing the t-th output feature vector in the fully connected layer to obtain the t-th normalized output feature vector;
judging whether the t-th feature vector is the feature vector of a reference face image and, when it is, adjusting the normalized weight matrix of the fully connected layer according to the t-th normalized output feature vector, the normalized weight matrix of the fully connected layer being composed of D column vectors;
in the fully connected layer, obtaining the t-th class vector f_t from the normalized output vector and the normalized weight matrix of the fully connected layer;
inputting the t-th class vector into the loss function module, and judging whether the number of class vectors in the loss function module has reached N_n, where n and N_n are integers greater than or equal to 1 and less than or equal to T, and the values N_n sum to T;
when the number of class vectors in the loss function module reaches N_n, obtaining, by the loss function module, the n-th loss function value from the N_n class vectors, optimizing the parameters of the convolutional layers according to the n-th loss function value with a specified optimization algorithm, and emptying the class vectors of the loss function module;
after t has taken the values 1 to T in turn and the above steps have been executed, obtaining the target parameters of the convolutional layers.
2. The method according to claim 1, characterized in that, when the t-th feature vector is the feature vector of a reference face image, adjusting the normalized weight matrix of the fully connected layer according to the t-th normalized output feature vector comprises:
when the t-th feature vector is the feature vector of the d-th reference face image, replacing the d-th weight vector of the fully connected layer with the normalized output vector of that reference image;
correspondingly, in the single iteration, after the above steps have been executed for all D reference face feature vectors, obtaining the normalized weight matrix of the fully connected layer composed of the D normalized reference-image output vectors.
3. The method according to claim 1, characterized in that obtaining the t-th class vector f_t in the fully connected layer from the normalized output vector and the normalized weight matrix of the fully connected layer comprises:
multiplying the normalized output vector by the normalized weight matrix to obtain the class vector f_t, i.e. f_t equals the transpose of the normalized weight matrix times the normalized output vector;
correspondingly, in the single iteration, after the above step has been executed for all T feature vectors, obtaining the T class vectors.
4. The method according to claim 3, characterized in that the loss function module obtaining the n-th loss function value from the N_n class vectors comprises:
setting, according to a specified rule, the cosine scaling parameter s in the loss function and the cosine margin parameter m in the loss function, wherein m is greater than or equal to 0 and less than or equal to 1;
obtaining the n-th loss value L_n from the N_n class vectors in the loss function module by the following formula, wherein j takes the integers 1 to N_n in turn, N_n is the number of class vectors needed to compute the n-th loss value, and correspondingly n_j takes in turn the integers from N_1 + ... + N_{n-1} + 1 to N_1 + ... + N_n:
L_n = -(1/N_n) · Σ_{j=1}^{N_n} log( exp(s·(f^(n_j)_{y_j} − m)) / ( exp(s·(f^(n_j)_{y_j} − m)) + Σ_{k≠y_j} exp(s·f^(n_j)_k) ) )
wherein the n_j-th class vector f^(n_j) and the y_j-th reference face image correspond to the same face, f^(n_j)_{y_j} is the y_j-th value of the class vector f^(n_j), and y_j is an integer greater than or equal to 1 and less than or equal to D.
5. The method according to claim 4, characterized in that the specified optimization algorithm is stochastic gradient descent, and optimizing the parameters of the convolutional layers according to the n-th loss function value with the specified optimization algorithm comprises:
setting the parameters of the stochastic gradient descent method according to preset conditions;
obtaining the parameters of the convolutional layers from the n-th loss value by the stochastic gradient descent method, and adjusting the parameters of the convolutional layers.
6. The method according to claim 1, characterized in that, before the first training iteration, the method further comprises:
initializing the parameters of the convolutional layers and the weights of the fully connected layer.
7. The method according to claim 1, characterized in that the method further comprises:
executing multiple iterations until a preset iteration termination condition is met, the T feature vectors obtained being identical in any two iterations of the multiple iterations.
8. A face recognition method, characterized by comprising:
extracting features from a reference-image feature vector and a sample-image feature vector respectively, with convolutional layers obtained by at least one training iteration of the method according to any one of claims 1 to 7, obtaining a reference feature and a sample feature;
computing the similarity of the reference feature and the sample feature;
judging, according to the similarity of the faces corresponding to the reference feature and the sample feature, whether the face corresponding to the sample-image feature vector and the face corresponding to the reference-image feature vector belong to the same person.
9. A face recognition system, characterized by comprising:
convolutional layers obtained by at least one training iteration of the method according to any one of claims 1 to 7, the convolutional layers being used to extract features from a reference-image feature vector and a sample-image feature vector respectively, obtaining a reference feature and a sample feature;
a cosine distance computing module, for computing the similarity of the reference feature and the sample feature;
a judgment module, for judging, according to the similarity of the faces corresponding to the reference feature and the sample feature, whether the face corresponding to the sample-image feature vector and the face corresponding to the reference-image feature vector belong to the same person.
10. A storage medium, characterized in that instructions are stored on the storage medium; when the instructions are run on a computer, they cause the computer to execute the method according to any one of claims 1 to 7.
CN201811630038.1A 2018-12-28 2018-12-28 Neural network training method, face recognition system and storage medium Active CN109711358B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201811630038.1A CN109711358B (en) 2018-12-28 2018-12-28 Neural network training method, face recognition system and storage medium


Publications (2)

Publication Number Publication Date
CN109711358A true CN109711358A (en) 2019-05-03
CN109711358B CN109711358B (en) 2020-09-04

Family

ID=66259348


Country Status (1)

Country Link
CN (1) CN109711358B (en)

Cited By (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110378092A (en) * 2019-07-26 2019-10-25 北京积加科技有限公司 Identification system and client, server and method
CN110610140A (en) * 2019-08-23 2019-12-24 平安科技(深圳)有限公司 Training method, device and equipment of face recognition model and readable storage medium
CN111242162A (en) * 2019-12-27 2020-06-05 北京地平线机器人技术研发有限公司 Training method and device of image classification model, medium and electronic equipment
CN112509154A (en) * 2020-11-26 2021-03-16 北京达佳互联信息技术有限公司 Training method of image generation model, image generation method and device
CN112862096A (en) * 2021-02-04 2021-05-28 百果园技术(新加坡)有限公司 Model training and data processing method, device, equipment and medium
CN112906676A (en) * 2021-05-06 2021-06-04 北京远鉴信息技术有限公司 Face image source identification method and device, storage medium and electronic equipment
CN112949672A (en) * 2019-12-11 2021-06-11 顺丰科技有限公司 Commodity identification method, commodity identification device, commodity identification equipment and computer readable storage medium
CN113269010A (en) * 2020-02-14 2021-08-17 深圳云天励飞技术有限公司 Training method and related device for human face living body detection model
CN113554145A (en) * 2020-04-26 2021-10-26 伊姆西Ip控股有限责任公司 Method, electronic device and computer program product for determining output of neural network
WO2023283805A1 (en) * 2021-07-13 2023-01-19 深圳大学 Face image clustering method, apparatus and device, and computer-readable storage medium

Citations (16)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104850825A (en) * 2015-04-18 2015-08-19 中国计量学院 Facial image face score calculating method based on convolutional neural network
CN105138993A (en) * 2015-08-31 2015-12-09 小米科技有限责任公司 Method and device for building face recognition model
CN106780906A (en) * 2016-12-28 2017-05-31 北京品恩科技股份有限公司 A kind of testimony of a witness unification recognition methods and system based on depth convolutional neural networks
CN106951825A (en) * 2017-02-13 2017-07-14 北京飞搜科技有限公司 A kind of quality of human face image assessment system and implementation method
CN106991474A (en) * 2017-03-28 2017-07-28 华中科技大学 The parallel full articulamentum method for interchanging data of deep neural network model and system
CN107609634A (en) * 2017-08-21 2018-01-19 哈尔滨工程大学 A kind of convolutional neural networks training method based on the very fast study of enhancing
WO2018022821A1 (en) * 2016-07-29 2018-02-01 Arizona Board Of Regents On Behalf Of Arizona State University Memory compression in a deep neural network
CN107871106A (en) * 2016-09-26 2018-04-03 北京眼神科技有限公司 Face detection method and device
CN107886064A (en) * 2017-11-06 2018-04-06 安徽大学 A kind of method that recognition of face scene based on convolutional neural networks adapts to
CN107944399A (en) * 2017-11-28 2018-04-20 广州大学 A kind of pedestrian's recognition methods again based on convolutional neural networks target's center model
CN107992842A (en) * 2017-12-13 2018-05-04 深圳云天励飞技术有限公司 Biopsy method, computer installation and computer-readable recording medium
CN108009528A (en) * 2017-12-26 2018-05-08 广州广电运通金融电子股份有限公司 Face authentication method, device, computer equipment and storage medium based on Triplet Loss
CN108009625A (en) * 2016-11-01 2018-05-08 北京深鉴科技有限公司 Method for trimming and device after artificial neural network fixed point
CN108090565A (en) * 2018-01-16 2018-05-29 电子科技大学 Accelerated method is trained in a kind of convolutional neural networks parallelization
CN108229298A (en) * 2017-09-30 2018-06-29 北京市商汤科技开发有限公司 The training of neural network and face identification method and device, equipment, storage medium
CN108416343A (en) * 2018-06-14 2018-08-17 四川远鉴科技有限公司 A kind of facial image recognition method and device



Also Published As

Publication number Publication date
CN109711358B (en) 2020-09-04

Similar Documents

Publication Publication Date Title
CN109711358A (en) Neural network training method, face identification method and system and storage medium
CN106372581B (en) Method for constructing and training face recognition feature extraction network
CN105138993B (en) Method and device for establishing a face recognition model
CN110532884B (en) Pedestrian re-identification method, device and computer readable storage medium
CN109344731B (en) Lightweight face recognition method based on neural network
CN110532920A (en) Face recognition method for small datasets based on the FaceNet method
CN109255289B (en) Cross-aging face recognition method based on unified generation model
CN107103281A (en) Face recognition method based on aggregation loss metric learning
CN106407911A (en) Image-based eyeglass recognition method and device
CN108573243A (en) Low-quality face comparison method based on deep convolutional neural networks
JP3976056B2 (en) Coefficient determination method, feature extraction method, system and program, and pattern matching method, system and program
CN107679539B (en) Method for integrating local and global information in a single convolutional neural network based on local receptive fields
KR102483650B1 (en) User verification device and method
CN107194437B (en) Image classification method based on Gist feature extraction and concept machine recurrent neural network
CN113723238B (en) Face lightweight network model construction method and face recognition method
CN113139462A (en) Unsupervised face image quality evaluation method, electronic device and storage medium
CN111652138B (en) Face recognition method, device and equipment for wearing mask and storage medium
CN112966643A (en) Face and iris fusion recognition method and device based on adaptive weighting
CN107016359A (en) Fast face recognition method based on t-distribution for complex environments
CN114547365A (en) Image retrieval method and device
CN114627424A (en) Gait recognition method and system based on visual angle transformation
CN111429414B (en) Artificial intelligence-based lesion image sample determination method and related device
CN113011307A (en) Face recognition identity authentication method based on a deep residual network
CN111062338B (en) License and portrait consistency comparison method and system
KR20200140571A (en) Method and device for data recognition

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
CB02 Change of applicant information

Address after: 80001-2, floor 7, building 1, No. 158, West Fourth Ring North Road, Haidian District, Beijing 100097

Applicant after: Beijing Yuanjian Information Technology Co.,Ltd.

Address before: 615000 3 people's West Road, new town, Zhaojue County, Liangshan Yi Autonomous Prefecture, Sichuan 1-1

Applicant before: SICHUAN YUANJIAN TECHNOLOGY Co.,Ltd.

GR01 Patent grant
EE01 Entry into force of recordation of patent licensing contract

Application publication date: 20190503

Assignee: RUN TECHNOLOGIES Co.,Ltd. BEIJING

Assignor: Beijing Yuanjian Information Technology Co.,Ltd.

Contract record no.: X2022990000639

Denomination of invention: Neural network training method, face recognition method and system and storage medium

Granted publication date: 20200904

License type: Common License

Record date: 20220913