CN108009528A - Face authentication method, device, computer equipment and storage medium based on Triplet Loss - Google Patents

Face authentication method, device, computer equipment and storage medium based on Triplet Loss

Info

Publication number
CN108009528A
Authority
CN
China
Prior art keywords
sample
facial image
scene
training
certificate
Prior art date
Legal status
Granted
Application number
CN201711436879.4A
Other languages
Chinese (zh)
Other versions
CN108009528B (en)
Inventor
许丹丹
梁添才
章烈剽
龚文川
Current Assignee
GRG Banking Equipment Co Ltd
Guangdian Yuntong Financial Electronic Co Ltd
Original Assignee
Guangdian Yuntong Financial Electronic Co Ltd
Priority date
Filing date
Publication date
Application filed by Guangdian Yuntong Financial Electronic Co Ltd filed Critical Guangdian Yuntong Financial Electronic Co Ltd
Priority to CN201711436879.4A priority Critical patent/CN108009528B/en
Publication of CN108009528A publication Critical patent/CN108009528A/en
Priority to PCT/CN2018/109169 priority patent/WO2019128367A1/en
Application granted granted Critical
Publication of CN108009528B publication Critical patent/CN108009528B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/21Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F18/214Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/22Matching criteria, e.g. proximity measures
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16Human faces, e.g. facial parts, sketches or expressions
    • G06V40/172Classification, e.g. identification


Abstract

The present invention relates to a face authentication method, device, computer equipment and storage medium based on Triplet Loss. The method includes: obtaining a certificate photograph and a scene photograph of a person based on a face authentication request; performing face detection, key point location and image preprocessing on the scene photograph and the certificate photograph respectively, to obtain a scene face image corresponding to the scene photograph and a certificate face image corresponding to the certificate photograph; inputting the scene face image and the certificate face image into a convolutional neural network model trained in advance for face authentication, and obtaining a first feature vector corresponding to the scene face image and a second feature vector corresponding to the certificate face image output by the convolutional neural network model; calculating the cosine distance between the first feature vector and the second feature vector; and comparing the cosine distance with a predetermined threshold, and determining the face authentication result according to the comparison result. The method improves the reliability of face authentication.

Description

Face authentication method, device, computer equipment and storage medium based on Triplet Loss
Technical field
The present invention relates to the technical field of image processing, and in particular to a face authentication method, device, computer equipment and storage medium based on Triplet Loss.
Background technology
Face authentication refers to comparing a scene photograph of a person collected on site with the certificate photograph contained in the person's identity information, in order to judge whether the two belong to the same person. The key technology underlying face authentication is face recognition.
With the rise of deep learning technology, face recognition has continuously broken through the bottlenecks of traditional techniques, and its performance has been greatly improved. In research on solving face recognition problems with deep learning, there are two mainstream approaches: methods based on classification learning and methods based on metric learning. Methods based on classification learning mainly compute a classification loss on the samples (such as softmax loss, center loss and related variants) after a deep convolutional network extracts features, and use it to optimize the network. The last layer of the network is a fully connected layer used for classification, and the number of its output nodes usually has to match the total number of classes in the training dataset. Such methods are better suited to situations where the training samples, especially the samples within each class, are relatively abundant, in which case the network can achieve good training results and generalization ability. However, when the number of classes reaches hundreds of thousands or more, the parameters of the final classification layer (the fully connected layer) grow linearly and become so large that the network is difficult to train.
The other class of methods is based on metric learning. These methods organize the training samples as tuples (such as pairs or triplets), do not require a classification layer after the deep convolutional network, and optimize the network directly with a metric loss computed between the feature vectors of the samples (such as contrastive loss or triplet loss). Because no classification layer needs to be trained, the number of network parameters is not affected by the number of classes, and the number of classes in the training dataset is not limited; it is only necessary to construct tuples of same-class or different-class samples according to a suitable strategy. Compared with classification learning, metric learning is better suited to training data that is broad but shallow (many classes, but few samples per class): through the various combinations between samples, a very rich set of tuples can be constructed for training. At the same time, metric learning focuses on the relations within each tuple, which gives it an inherent advantage for 1:1 face verification (yes/no decision) problems.
In practical applications, many institutions require real-name registration, for example, opening a bank account, registering a mobile phone number, or opening a financial account. Traditionally, a user requesting real-name registration brings an identity card to a designated site, and the account can be opened only after a staff member verifies in person that the user matches the photo on the identity card. With the development of Internet technology, more and more institutions offer convenient services and no longer require clients to visit a designated site. The user's geographic location is unrestricted: the user uploads the identity card, captures a scene photograph of himself or herself with the camera of a mobile terminal, and the system performs face authentication; once face authentication passes, the account is opened. However, traditional metric learning methods measure the similarity between samples with the Euclidean distance, and the Euclidean distance measures the absolute distance between points in space, which is directly related to the position coordinates of each point. This does not match the distribution properties of the face feature space, and leads to a relatively low reliability of face recognition.
Content of the invention
In view of this, it is necessary to provide a face authentication method, device, computer equipment and storage medium based on Triplet Loss, to address the problem that traditional face authentication methods have low reliability.
A face authentication method based on Triplet Loss includes:
obtaining a certificate photograph and a scene photograph of a person based on a face authentication request;
performing face detection, key point location and image preprocessing on the scene photograph and the certificate photograph respectively, to obtain a scene face image corresponding to the scene photograph and a certificate face image corresponding to the certificate photograph;
inputting the scene face image and the certificate face image into a convolutional neural network model trained in advance for face authentication, and obtaining a first feature vector corresponding to the scene face image and a second feature vector corresponding to the certificate face image output by the convolutional neural network model; wherein the convolutional neural network model is obtained by supervised training based on a triplet loss function;
calculating the cosine distance between the first feature vector and the second feature vector;
comparing the cosine distance with a predetermined threshold, and determining the face authentication result according to the comparison result.
In one embodiment, the method further includes:
obtaining labelled training samples, the training samples including, for each labelled object, one labelled certificate face image and at least one labelled scene face image;
training the convolutional neural network model according to the training samples, and generating the triplet elements corresponding to each training sample by OHEM; the triplet elements include a reference sample, a positive sample and a negative sample;
training the convolutional neural network model according to the triplet elements of each training sample, under the supervision of a triplet loss function; the triplet loss function uses the cosine distance as its metric, and the model parameters are optimized by a stochastic gradient descent algorithm;
inputting validation set data into the convolutional neural network model, and obtaining the trained convolutional neural network model for face authentication when the training termination condition is reached.
In another embodiment, the step of training the convolutional neural network model according to the training samples and generating the triplet elements corresponding to each training sample by OHEM includes:
randomly selecting one image as the reference sample, and selecting an image that belongs to the same labelled object but to a different category from the reference sample as the positive sample;
according to the OHEM strategy, using the cosine distance between features extracted by the convolutional neural network model currently being trained, selecting for each reference sample, from the images that do not belong to the same labelled object, the closest image that belongs to a different category from the reference sample, as the negative sample of that reference sample.
In another embodiment, the triplet loss function includes a constraint on the cosine distance of same-class samples and a constraint on the cosine distance of different-class samples.
In another embodiment, the triplet loss function is:
$$L = \sum_{i=1}^{N}\Big\{\big[\cos\big(f(x_i^a), f(x_i^n)\big) - \cos\big(f(x_i^a), f(x_i^p)\big) + \alpha_1\big]_+ + \big[\alpha_2 - \cos\big(f(x_i^a), f(x_i^p)\big)\big]_+\Big\}$$
wherein $\cos(\cdot)$ denotes the cosine distance, computed as $\cos(x, y) = \dfrac{x \cdot y}{\lVert x \rVert\,\lVert y \rVert}$; $N$ is the number of triplets; $f(x_i^a)$ is the feature vector of the reference sample, $f(x_i^p)$ is the feature vector of the same-class positive sample, and $f(x_i^n)$ is the feature vector of the different-class negative sample; $[\,z\,]_+ = \max(z, 0)$; $\alpha_1$ is the between-class margin parameter and $\alpha_2$ is the within-class margin parameter.
In another embodiment, the method further includes: initializing the model parameters with a base model trained in advance on massive open-source face data, and adding a normalization layer and a triplet loss function layer after the feature output layer, to obtain the convolutional neural network model to be trained.
A face authentication device based on Triplet Loss includes: an image acquisition module, an image preprocessing module, a feature acquisition module, a computing module and an authentication module;
the image acquisition module is configured to obtain a certificate photograph and a scene photograph of a person based on a face authentication request;
the image preprocessing module is configured to perform face detection, key point location and image preprocessing on the scene photograph and the certificate photograph respectively, to obtain a scene face image corresponding to the scene photograph and a certificate face image corresponding to the certificate photograph;
the feature acquisition module is configured to input the scene face image and the certificate face image into a convolutional neural network model trained in advance for face authentication, and to obtain a first feature vector corresponding to the scene face image and a second feature vector corresponding to the certificate face image output by the convolutional neural network model; wherein the convolutional neural network model is obtained by supervised training based on a triplet loss function;
the computing module is configured to calculate the cosine distance between the first feature vector and the second feature vector;
the authentication module is configured to compare the cosine distance with a predetermined threshold, and to determine the face authentication result according to the comparison result.
In another embodiment, the device further includes: a sample acquisition module, a triplet acquisition module, a training module and a verification module;
the sample acquisition module is configured to obtain labelled training samples, the training samples including, for each labelled object, one labelled certificate face image and at least one labelled scene face image;
the triplet acquisition module is configured to train the convolutional neural network model according to the training samples and to generate the triplet elements corresponding to each training sample by OHEM; the triplet elements include a reference sample, a positive sample and a negative sample;
the training module is configured to train the convolutional neural network model according to the triplet elements of each training sample, under the supervision of a triplet loss function; the triplet loss function uses the cosine distance as its metric, and the model parameters are optimized by a stochastic gradient descent algorithm;
the verification module is configured to input validation set data into the convolutional neural network model, and to obtain the trained convolutional neural network model for face authentication when the training termination condition is reached.
A computer equipment includes a memory, a processor and a computer program stored in the memory and executable on the processor, wherein the processor, when executing the computer program, implements the steps of the above face authentication method based on Triplet Loss.
A storage medium has a computer program stored thereon, wherein the computer program, when executed by a processor, implements the steps of the above face authentication method based on Triplet Loss.
In the face authentication method, device, computer equipment and storage medium based on Triplet Loss of the present invention, face authentication is performed with a convolutional neural network trained in advance. Since the convolutional neural network model is obtained by supervised training based on a triplet loss function, and the similarity between the scene face image and the certificate face image is calculated as the cosine distance between the first feature vector corresponding to the scene face image and the second feature vector corresponding to the certificate face image, and since the cosine distance measures the angle between space vectors and therefore reflects differences in direction, the method better matches the distribution properties of the face feature space and improves the reliability of face authentication.
Brief description of the drawings
Fig. 1 is the structure diagram of the face authentication system based on Triplet Loss of one embodiment;
Fig. 2 is the flow chart of the face authentication method based on Triplet Loss in one embodiment;
Fig. 3 is a flow chart of the step of training to obtain the convolutional neural network model for face authentication in one embodiment;
Fig. 4 is a schematic diagram of the probability that samples are misclassified when the between-class margin is fixed and the within-class variance is large;
Fig. 5 is a schematic diagram of the probability that samples are misclassified when the between-class margin is fixed and the within-class variance is small;
Fig. 6 is the schematic diagram of the transfer learning process of the face authentication based on Triplet Loss in one embodiment;
Fig. 7 is the structure diagram for the convolutional neural networks model for being used for face authentication in one embodiment;
Fig. 8 is the flow diagram of the face authentication method based on Triplet Loss in one embodiment;
Fig. 9 is the structure diagram of the face authentication device based on Triplet Loss in one embodiment;
Figure 10 is the structure diagram of the face authentication device based on Triplet Loss in another embodiment.
Embodiment
Fig. 1 is a structure diagram of a face authentication system based on Triplet Loss according to one embodiment. As shown in Fig. 1, the face authentication system includes a server 101 and an image collecting device 102, where the server 101 and the image collecting device 102 are connected over a network. The image collecting device 102 collects a real-time scene photograph of the user to be authenticated as well as a certificate photograph, and sends the collected real-time scene photograph and certificate photograph to the server 101. The server 101 judges whether the person in the scene photograph and the person in the certificate photograph are the same person, so as to authenticate the identity of the user to be authenticated. Depending on the specific application scenario, the image collecting device 102 may be a camera, or a user terminal with a camera function. For example, for an account-opening scenario at a branch, the image collecting device 102 may be a camera; for a financial account opened over the Internet, the image collecting device 102 may be a mobile terminal with a camera function.
In other embodiments, the face authentication system may further include a card reader for reading the certificate photo stored in the chip of a certificate (such as an identity card).
Fig. 2 is a flow chart of a face authentication method based on Triplet Loss in one embodiment. As shown in Fig. 2, the method includes:
S202: obtaining a certificate photograph and a scene photograph of a person based on a face authentication request.
The certificate photograph refers to a photograph on a certificate that can prove the identity of a person, such as the photo printed on an identity card or the photo stored in the certificate chip. The certificate photograph may be obtained by photographing the certificate, or by reading the certificate photograph stored in the certificate chip with a card reader. The certificate in this embodiment may be an identity card, a driver's license, a social security card, or the like.
The scene photograph of a person refers to a photograph of the user to be authenticated in the on-site environment, collected at the time of authentication. The on-site environment refers to the environment where the user is located when the photograph is taken, and it is unrestricted. The scene photograph may be collected with a mobile terminal with a camera function and sent to the server.
Face authentication refers to comparing the scene photograph of the person collected on site with the certificate photograph in the identity information, to judge whether they belong to the same person. The face authentication request is triggered by an actual application operation; for example, an account-opening request of a user triggers the face authentication request. An application program prompts the user on the display interface of the user terminal to collect the photographs, and after the collection is completed, sends the collected photographs to the server for face authentication.
S204: performing face detection, key point location and image preprocessing on the scene photograph and the certificate photograph respectively, to obtain a scene face image corresponding to the scene photograph and a certificate face image corresponding to the certificate photograph.
Face detection refers to recognizing a photograph and obtaining the face region in the photograph.
Key point location refers to obtaining the position of the face key points in each photograph for the face region detected in the photograph. The face key points include the eyes, nose, mouth corner points, eyebrows and the contour points of the facial components.
In this embodiment, the cascaded convolutional neural network MTCNN method based on multi-task joint learning may be used to complete face detection and face key point detection at the same time; alternatively, a face detection method based on LBP features and a face key point detection method based on shape regression may be used.
Image preprocessing refers to performing face alignment and cropping according to the positions of the detected face key points in each picture, so as to obtain a size-normalized scene face image and certificate face image. The scene face image is the face image obtained after performing face detection, key point location and image preprocessing on the scene photograph, and the certificate face image is the face image obtained after performing face detection, key point location and image preprocessing on the certificate photograph.
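A minimal sketch of this preprocessing step is given below, assuming a generic detector interface: the helpers detect_face() and locate_landmarks(), the output size and the reference landmark positions are illustrative placeholders rather than values from the patent; in practice the detector could be MTCNN or the LBP-based method mentioned above.

```python
# Sketch of S204: detection, key point location, alignment and cropping.
import numpy as np
import cv2  # OpenCV, used here only for the affine warp

def preprocess(photo: np.ndarray, out_size=(112, 112)) -> np.ndarray:
    """Return a size-normalized face image from a raw photograph."""
    box = detect_face(photo)                  # hypothetical: (x, y, w, h) of the face region
    landmarks = locate_landmarks(photo, box)  # hypothetical: eyes, nose, mouth corner points

    # Reference positions of left eye, right eye and nose tip in the output crop
    # (illustrative values, not taken from the patent).
    ref = np.float32([[38.3, 51.7], [73.5, 51.5], [56.0, 71.7]])
    src = np.float32([landmarks["left_eye"], landmarks["right_eye"], landmarks["nose"]])

    # Affine transform that aligns the face to the reference positions, then crop.
    M = cv2.getAffineTransform(src, ref)
    return cv2.warpAffine(photo, M, out_size)

# scene_face = preprocess(scene_photo)
# id_face    = preprocess(certificate_photo)
```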
S206: inputting the scene face image and the certificate face image into the convolutional neural network model trained in advance for face authentication, and obtaining the first feature vector corresponding to the scene face image and the second feature vector corresponding to the certificate face image output by the convolutional neural network model.
The convolutional neural network model is trained in advance from training samples under the supervision of a triplet loss function. The convolutional neural network includes convolutional layers, pooling layers, activation function layers and fully connected layers, and the parameters of the neurons in each layer are determined by training. Using the trained convolutional neural network, through forward propagation, the first feature vector of the scene face image and the second feature vector of the certificate face image are obtained from the fully connected layer of the convolutional neural network model.
A triplet consists of a sample randomly selected from the training dataset, called the reference sample, together with a randomly selected sample belonging to the same person as the reference sample, used as the positive sample, and a sample not belonging to the same person, used as the negative sample; this forms a (reference sample, positive sample, negative sample) triplet. Since certificate-versus-person comparison is mainly a comparison between a certificate photo and a scene photo, rather than certificate versus certificate or scene versus scene, the triplet pattern mainly has two combinations: when the certificate photo image is the reference sample, the positive sample and the negative sample are scene photos; when the scene photo image is the reference sample, the positive sample and the negative sample are certificate photos.
For each sample in the triplet, a parameter-sharing network is trained to obtain the feature representations of the three elements. The purpose of the triplet loss is to learn a mapping such that the distance between the feature representations of the reference sample and the positive sample is as small as possible, the distance between the feature representations of the reference sample and the negative sample is as large as possible, and there is a minimum margin between the reference-positive distance and the reference-negative distance.
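As a rough illustration of how such (reference, positive, negative) triplets could be organized from labelled certificate/scene image pairs, the sketch below uses the certificate photo as the reference sample; the data structure and function names are assumptions for illustration, not part of the patent.

```python
import random
from dataclasses import dataclass

@dataclass
class Person:
    id_photo: str       # path to the certificate face image
    scene_photos: list  # paths to one or more scene face images

def sample_triplet(people: dict) -> tuple:
    """Build one (anchor, positive, negative) triplet with the certificate photo as anchor."""
    pid, other = random.sample(list(people), 2)
    anchor   = people[pid].id_photo                       # certificate photo of person pid
    positive = random.choice(people[pid].scene_photos)    # scene photo of the same person
    negative = random.choice(people[other].scene_photos)  # scene photo of a different person
    return anchor, positive, negative
```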
S208: calculating the cosine distance between the first feature vector and the second feature vector.
The cosine distance, also called cosine similarity, uses the cosine of the angle between two vectors in a vector space as a measure of the difference between two individuals. The larger the cosine distance between the first feature vector and the second feature vector, the greater the similarity between the scene face image and the certificate face image; the smaller the cosine distance, the smaller the similarity. The closer the cosine distance between the scene face image and the certificate face image is to 1, the higher the probability that the two images belong to the same person; the smaller the cosine distance, the lower the probability that the two images belong to the same person.
In the traditional triplet loss method, the similarity between samples is measured with the Euclidean distance. The Euclidean distance measures the absolute distance between points in space and is directly related to the position coordinates of each point, which does not match the distribution properties of the face feature space. In this embodiment, considering the distribution properties of the face feature space and the practical application scenario, the similarity between samples is measured with the cosine distance. The cosine distance measures the angle between space vectors and reflects differences in direction rather than in position, which better matches the distribution properties of the face feature space.
Specifically, the cosine distance is calculated as:
$$\cos(x, y) = \frac{x \cdot y}{\lVert x \rVert \, \lVert y \rVert}$$
wherein $x$ represents the first feature vector and $y$ represents the second feature vector.
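As a minimal sketch, the same computation in NumPy (the function name is illustrative):

```python
import numpy as np

def cosine_distance(x: np.ndarray, y: np.ndarray) -> float:
    """Cosine of the angle between feature vectors x and y (closer to 1.0 = more similar)."""
    return float(np.dot(x, y) / (np.linalg.norm(x) * np.linalg.norm(y)))
```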
S210: comparing the cosine distance with the predetermined threshold, and determining the face authentication result according to the comparison result.
The authentication result includes authentication success, i.e. the certificate photograph and the scene photograph belong to the same person, and authentication failure, i.e. the certificate photograph and the scene photograph do not belong to the same person.
Specifically, the cosine distance is compared with the predetermined threshold. When the cosine distance is greater than the predetermined threshold, the similarity between the certificate photograph and the scene photograph exceeds the predetermined threshold and the authentication succeeds; when the cosine distance is less than the predetermined threshold, the similarity between the certificate photograph and the scene photograph is below the predetermined threshold and the authentication fails.
In the above face authentication method based on Triplet Loss, face authentication is performed with a convolutional neural network trained in advance. Since the convolutional neural network model is obtained by supervised training based on a triplet loss function, and the similarity between the scene face image and the certificate face image is calculated as the cosine distance between the first feature vector corresponding to the scene face image and the second feature vector corresponding to the certificate face image, and since the cosine distance measures the angle between space vectors and reflects differences in direction, the method better matches the distribution properties of the face feature space and improves the reliability of face authentication.
In another embodiment, the face authentication method further includes a step of training the convolutional neural network model for face authentication. Fig. 3 is a flow chart of the step of training to obtain the convolutional neural network model for face authentication in one embodiment. As shown in Fig. 3, the step includes:
S302: obtaining labelled training samples, the training samples including, for each labelled object, one labelled certificate face image and at least one labelled scene face image.
In this embodiment, the labelled objects are people, and the training samples are labelled per person with the scene face image(s) and the certificate face image belonging to that person. Specifically, the scene face images and certificate face images may be obtained by performing face detection, key point location and image preprocessing on the labelled scene photographs and certificate photographs.
Face detection refers to recognizing a photograph and obtaining the face region in the photograph.
Key point location refers to obtaining the position of the face key points in each photograph for the face region detected in the photograph. The face key points include the eyes, nose, mouth corner points, eyebrows and the contour points of the facial components.
In this embodiment, the cascaded convolutional neural network MTCNN method based on multi-task joint learning may be used to complete face detection and face key point detection at the same time; alternatively, a face detection method based on LBP features and a face key point detection method based on shape regression may be used.
Image preprocessing refers to performing face alignment and cropping according to the positions of the detected face key points in each picture, so as to obtain size-normalized scene face images and certificate face images. The scene face image is the face image obtained after performing face detection, key point location and image preprocessing on the scene photograph, and the certificate face image is the face image obtained after performing face detection, key point location and image preprocessing on the certificate photograph.
S304: training the convolutional neural network model according to the training samples, and generating the triplet elements corresponding to each training sample by OHEM; the triplet elements include a reference sample, a positive sample and a negative sample.
Triplets have two combinations: when the certificate photo image is the reference sample, the positive sample and the negative sample are scene photo images; when the scene photo image is the reference sample, the positive sample and the negative sample are certificate photo images.
Specifically, taking the certificate photo as the reference image as an example, a certificate photo sample of a person is randomly selected from the training dataset and called the reference sample; then a scene photo sample belonging to the same person as the reference sample is randomly selected as the positive sample, and a scene photo sample not belonging to the same person is selected as the negative sample, thereby forming a (reference sample, positive sample, negative sample) triplet.
That is, the positive sample and the reference sample are same-class samples, i.e. images belonging to the same person, while the negative sample is a different-class sample of the reference sample, i.e. an image not belonging to the same person. The reference sample and the positive sample of a triplet are already labelled in the training samples, while the negative sample is constructed online with the OHEM (Online Hard Example Mining) strategy during the training of the convolutional neural network: in each iteration of network optimization, the current network performs forward computation on candidate triplets, and the image in the training samples that does not belong to the same user as the reference sample and is nearest in cosine distance is selected as the negative sample, so as to obtain the triplet elements corresponding to each training sample.
In one embodiment, the step of training the convolutional neural network according to the training samples and generating the triplet elements corresponding to each training sample includes the following steps S1 and S2:
S1: randomly selecting one image as the reference sample, and selecting an image that belongs to the same labelled object but to a different category from the reference sample as the positive sample.
The category refers to the image type; in this embodiment, the categories of the training samples include scene face images and certificate face images. Because face authentication is mainly a comparison between a certificate photo and a scene photo, the reference sample and the positive sample should belong to different categories: if the reference sample is a scene face image, the positive sample is a certificate face image; if the reference sample is a certificate face image, the positive sample is a scene face image.
S2: according to the OHEM strategy, using the cosine distance between features extracted by the convolutional neural network model currently being trained, selecting for each reference sample, from the images that do not belong to the same labelled object, the closest image that belongs to a different category from the reference sample, as the negative sample of that reference sample.
The negative sample is selected from the labelled face images that do not belong to the same person as the reference sample. Specifically, the negative sample is constructed online with the OHEM strategy during the training of the convolutional neural network: in each iteration of network optimization, the current network performs forward computation on candidate triplets, and the image in the training samples that does not belong to the same user as the reference sample, is nearest in cosine distance, and belongs to a different category is selected as the negative sample; that is, the category of the negative sample differs from that of the reference sample. If the certificate photo is the reference sample in a triplet, the positive sample and the negative sample are scene photos; conversely, if a scene photo is the reference sample, the positive sample and the negative sample are certificate photos. A sketch of such online hard negative mining is given below.
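A simplified sketch of the online hard-example mining described above, written with PyTorch tensors under the assumption that the hardest negative is the different-identity candidate with the highest cosine similarity (i.e. nearest) to the reference sample; variable names are illustrative.

```python
import torch
import torch.nn.functional as F

def mine_hard_negatives(anchor_emb, candidate_emb, anchor_ids, candidate_ids):
    """
    anchor_emb:    (A, D) embeddings of reference samples (e.g. certificate photos)
    candidate_emb: (C, D) embeddings of candidate negatives (e.g. scene photos)
    anchor_ids / candidate_ids: identity labels of each row
    Returns, for each anchor, the index of the nearest different-identity candidate.
    """
    a = F.normalize(anchor_emb, dim=1)
    c = F.normalize(candidate_emb, dim=1)
    sim = a @ c.t()                                    # (A, C) cosine similarities
    same_person = anchor_ids.unsqueeze(1) == candidate_ids.unsqueeze(0)
    sim = sim.masked_fill(same_person, float("-inf"))  # exclude same-identity images
    return sim.argmax(dim=1)                           # hardest (closest) negatives
```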
S306: training the convolutional neural network model according to the triplet elements of each training sample, under the supervision of a triplet loss function; the triplet loss function uses the cosine distance as its metric, and the model parameters are optimized by a stochastic gradient descent algorithm.
A certificate-versus-person verification terminal verifies the identity of a user by comparing the photo in the user's certificate chip with a scene photo for consistency. The data collected in the background often contain only two images per person, i.e. the certificate photo and the scene photo captured at the time of comparison, while the number of different individuals can be in the thousands. If such data, with a large number of categories but few samples per category, were trained with a classification-based method, the classification layer parameters would become so large that the network would be very difficult to train; therefore a metric learning method is adopted instead. A typical metric learning approach is the triplet loss method, which constructs image triplets to learn an effective feature mapping under which the feature distance of same-class samples is smaller than the feature distance of different-class samples, so as to achieve correct comparison.
The purpose of the triplet loss is to learn a mapping such that the distance between the feature representations of the reference sample and the positive sample is as small as possible, the distance between the feature representations of the reference sample and the negative sample is as large as possible, and there is a minimum margin between the reference-positive distance and the reference-negative distance.
In another embodiment, the triplet loss function includes a constraint on the cosine distance of same-class samples and a constraint on the cosine distance of different-class samples.
Same-class samples refer to the reference sample and the positive sample, and different-class samples refer to the reference sample and the negative sample. The cosine distance of same-class samples refers to the cosine distance between the reference sample and the positive sample, and the cosine distance of different-class samples refers to the cosine distance between the reference sample and the negative sample.
On the one hand, the original triplet loss method only considers the between-class gap and ignores the within-class gap; if the within-class distribution is not compact enough, the generalization ability of the network weakens and its scene adaptability decreases accordingly. On the other hand, the original triplet loss method measures the similarity between samples with the Euclidean distance, whereas after a face model is actually deployed, the feature comparison stage usually measures similarity with the cosine distance. The Euclidean distance measures the absolute distance between points in space and is directly related to the position coordinates of each point, while the cosine distance measures the angle between space vectors and reflects differences in direction rather than in position, which better matches the distribution properties of the face feature space.
With the triplet loss method, triplet data are constructed online and fed into the network, and the triplet metric loss is back-propagated for iterative optimization. Each triplet contains three images: a reference sample, a positive sample of the same class as the reference sample, and a negative sample of a different class from the reference sample, labelled (anchor, positive, negative). The basic idea of the original triplet loss is to learn a metric under which the distance between the reference sample and the positive sample is smaller than the distance between the reference sample and the negative sample, with the difference exceeding a minimum margin parameter α. The original triplet loss function is therefore as follows:
$$L = \sum_{i=1}^{N}\Big[\,\lVert f(x_i^a) - f(x_i^p)\rVert_2^2 - \lVert f(x_i^a) - f(x_i^n)\rVert_2^2 + \alpha\,\Big]_+$$
wherein $N$ is the number of triplets, $f(x_i^a)$ is the feature vector of the reference sample (anchor), $f(x_i^p)$ is the feature vector of the same-class positive sample (positive), $f(x_i^n)$ is the feature vector of the different-class negative sample (negative), $\lVert\cdot\rVert_2$ denotes the L2 norm, i.e. the Euclidean distance, and $[\,z\,]_+ = \max(z, 0)$.
As can be seen from the above formula, the original triplet loss function only constrains the distance between same-class samples (anchor, positive) relative to different-class samples (anchor, negative), i.e. it increases the between-class distance as much as possible through the margin parameter α, but places no constraint at all on the within-class distance, i.e. no constraint on the distance between same-class samples. If the within-class distances are scattered and the variance is too large, the generalization ability of the network weakens and samples are more likely to be misclassified. Fig. 4 is a schematic diagram of the probability that samples are misclassified when the between-class margin is fixed and the within-class variance is large, and Fig. 5 is a schematic diagram of the probability that samples are misclassified when the between-class margin is fixed and the within-class variance is small. As shown in Fig. 4 and Fig. 5, the shaded areas represent the probability that samples are misclassified; with the same between-class margin, the misclassification probability with large within-class variance is significantly greater than that with small within-class variance.
In view of the above problems, the present invention proposes an improved triplet loss method, which on the one hand retains the constraint on the between-class distance of the original method, and on the other hand adds a constraint term on the within-class distance so that the within-class distances are as compact as possible. Its loss function expression is:
$$L = \sum_{i=1}^{N}\Big\{\big[\cos\big(f(x_i^a), f(x_i^n)\big) - \cos\big(f(x_i^a), f(x_i^p)\big) + \alpha_1\big]_+ + \big[\alpha_2 - \cos\big(f(x_i^a), f(x_i^p)\big)\big]_+\Big\}$$
wherein $\cos(\cdot)$ denotes the cosine distance, computed as $\cos(x, y) = \dfrac{x \cdot y}{\lVert x \rVert\,\lVert y \rVert}$; $N$ is the number of triplets; $f(x_i^a)$ is the feature vector of the reference sample, $f(x_i^p)$ is the feature vector of the same-class positive sample, and $f(x_i^n)$ is the feature vector of the different-class negative sample; $[\,z\,]_+ = \max(z, 0)$; $\alpha_1$ is the between-class margin parameter and $\alpha_2$ is the within-class margin parameter.
Compared with the original triplet loss function, the metric of the improved triplet loss function is changed from the Euclidean distance to the cosine distance, which keeps the metric consistent between the training stage and the deployment stage and improves the continuity of feature learning. The first term of the new triplet loss function has the same effect as the original triplet loss and increases the between-class gap; the second term adds a distance constraint on same-class samples (the positive tuple) and reduces the within-class gap. α1 is the between-class margin parameter, with a value range of 0 to 0.2, and α2 is the within-class margin parameter, with a value range of 0.8 to 1.0. It is worth noting that, since the metric is the cosine, the resulting values correspond to the similarity between two samples; therefore only the samples whose negative-tuple cosine similarity comes within α1 of exceeding the positive-tuple cosine similarity actually contribute to the first term during training.
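A sketch of this improved loss in PyTorch, under the assumption that the two hinge terms take the form reconstructed above; the default margin values are simply the endpoints of the ranges stated in the text.

```python
import torch
import torch.nn.functional as F

def improved_triplet_loss(anchor, positive, negative, alpha1=0.2, alpha2=0.8):
    """
    anchor, positive, negative: (N, D) feature vectors of the triplets.
    alpha1: between-class margin (0~0.2); alpha2: within-class margin (0.8~1.0).
    """
    cos_ap = F.cosine_similarity(anchor, positive, dim=1)  # same-class similarity
    cos_an = F.cosine_similarity(anchor, negative, dim=1)  # different-class similarity
    between_class = torch.clamp(cos_an - cos_ap + alpha1, min=0)  # push negatives apart
    within_class  = torch.clamp(alpha2 - cos_ap, min=0)           # pull positives together
    return (between_class + within_class).sum()
```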
Training the model on the basis of the improved triplet loss function, the model is optimized by back propagation under the joint constraint of the between-class loss and the within-class loss, so that same-class samples are as close as possible in the feature space and different-class samples are as far apart as possible, which improves the performance of the model and thus the reliability of face authentication.
S308: inputting validation set data into the convolutional neural network, and obtaining the trained convolutional neural network for face authentication when the training termination condition is reached.
Specifically, 90% of the data in the certificate-and-scene image data pool is taken as the training set, and the remaining 10% as the validation set. The improved triplet loss value is calculated according to the above formula and fed back into the convolutional neural network for iterative optimization. At the same time, the performance of the model on the validation set is observed; when the validation performance no longer increases, the model has reached a converged state and the training stage ends.
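A compact sketch of this training loop with stochastic gradient descent and validation-based early stopping; the embedding network, data loaders, triplet miner (build_triplets) and evaluation function are assumed to exist and their names are illustrative.

```python
import torch

def train(model, train_loader, val_loader, epochs=50, lr=0.01):
    optimizer = torch.optim.SGD(model.parameters(), lr=lr, momentum=0.9)
    best_val = 0.0
    for epoch in range(epochs):
        model.train()
        for batch in train_loader:
            a, p, n = build_triplets(model, batch)   # OHEM online mining (see sketch above)
            loss = improved_triplet_loss(model(a), model(p), model(n))
            optimizer.zero_grad()
            loss.backward()
            optimizer.step()
        val_acc = evaluate(model, val_loader)        # e.g. verification accuracy on the 10% split
        if val_acc <= best_val:                      # validation performance no longer increases
            break                                    # -> treat as converged and stop training
        best_val = val_acc
```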
In the above face authentication method, on the one hand a constraint on the within-class sample distance is added to the original triplet loss function, so that the within-class gap is reduced while the between-class gap is increased, improving the generalization ability of the model; on the other hand, the metric of the original triplet loss is changed from the Euclidean distance to the cosine distance, keeping the metric consistent between training and deployment and improving the continuity of feature learning.
In another embodiment, the step of training the convolutional neural network further includes: initializing the model parameters with a base model trained in advance on massive open-source face data, and adding a normalization layer and the improved triplet loss function layer after the feature output layer, to obtain the convolutional neural network to be trained.
Specifically, when the certificate-versus-person verification problem is solved with deep learning, the performance of a conventional deep face recognition model trained on massive Internet face data drops significantly when it is applied to certificate-versus-person comparison under a specific scenario, while the certificate-and-scene data available under that scenario are rather limited, and direct training often gives unsatisfactory results due to insufficient samples. It is therefore highly desirable to develop a method that effectively extends training with the scene data of a small dataset, so as to improve the accuracy of the face recognition model under the specific application scenario and meet the demands of the market.
Deep learning algorithms tend to rely on training with massive data. In certificate-versus-person verification applications, the comparison between a certificate photo and a scene photo is a heterogeneous-sample comparison problem, and the performance of a conventional deep face recognition model trained on massive Internet face data drops significantly in this application. However, the sources of certificate-and-scene data are limited (an identity card image and the corresponding scene image of the same person must both be available), so the amount of data available for training is small, and direct training would give unsatisfactory results due to insufficient samples. Therefore, when training a certificate-versus-person verification model with deep learning, the idea of transfer learning is often used: a base model with reliable performance on open-source test sets is first trained on massive Internet face data, and then secondary extended training is performed on the limited certificate-and-scene data, so that the model automatically learns the feature representation of the specific modality and the model performance is improved. This process is shown in Fig. 6.
In the secondary training, the whole network is initialized with the pre-trained base model parameters, and then an L2 normalization layer and the improved triplet loss layer are added after the feature output layer of the network. The structure of the convolutional neural network to be trained is shown in Fig. 7.
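A minimal sketch of this secondary-training setup in PyTorch; the base backbone, its checkpoint path and the feature/embedding dimensions are assumptions for illustration only.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class FaceEmbeddingNet(nn.Module):
    """Pre-trained backbone + feature output layer + L2 normalization layer.
    The improved triplet loss is applied on the normalized output during training."""
    def __init__(self, backbone: nn.Module, feat_dim=512, emb_dim=128):
        super().__init__()
        self.backbone = backbone                # base model trained on open-source face data
        self.fc = nn.Linear(feat_dim, emb_dim)  # feature output layer
    def forward(self, x):
        z = self.fc(self.backbone(x))
        return F.normalize(z, p=2, dim=1)       # L2 normalization layer added after features

# backbone = build_base_model()                                # hypothetical builder
# backbone.load_state_dict(torch.load("base_face_model.pth"))  # initialize from pre-training
# model = FaceEmbeddingNet(backbone)
# improved_triplet_loss (above) then supervises the normalized embeddings.
```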
In one embodiment, the flow of the face authentication method is shown in Fig. 8 and includes three stages: the data acquisition and preprocessing stage, the training stage and the deployment stage.
In the data acquisition and preprocessing stage, the certificate chip photo is read through the card reader module of the certificate-versus-person verification terminal, and the scene photo is captured by the front camera; after passing through the face detector, key point detector, face alignment and cropping modules, the size-normalized certificate face image and scene face image are obtained.
In the training stage, 90% of the data in the certificate-and-scene image data pool is taken as the training set and the remaining 10% as the validation set. Since the certificate-versus-person comparison is mainly between the certificate photo and the scene photo, if the certificate photo is taken as the reference image (anchor) in a triplet, the other two images are scene photos; conversely, if a scene photo is the reference image, the other two images are certificate photos. Triplets are constructed online with the OHEM strategy, i.e. in each iteration of network optimization the current network performs forward computation on candidate triplets and screens out the valid triplets that satisfy the condition; the improved triplet loss value is computed according to the above formula and fed back into the network for iterative optimization. At the same time, the performance of the model on the validation set is observed; when the validation performance no longer increases, the model has reached a converged state and the training stage ends.
In the deployment stage, the trained model is deployed to the certificate-versus-person verification terminal for use. The images collected by the device pass through the same preprocessing as in the training stage, the feature vector of each face image is then obtained through the forward computation of the network, the similarity of the two images is obtained by calculating the cosine distance, and a decision is then made according to the predetermined threshold: if the similarity exceeds the predetermined threshold, the two images belong to the same person; otherwise they belong to different people.
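An end-to-end sketch of this deployment-stage decision, reusing the preprocessing and embedding model from the earlier sketches; to_tensor is a hypothetical image-to-tensor helper and the threshold value is a placeholder, not a value from the patent.

```python
import torch

@torch.no_grad()
def verify(model, id_photo, scene_photo, threshold=0.7):
    """Return True if the certificate photo and the scene photo are judged to be the same person."""
    id_face    = preprocess(id_photo)        # same preprocessing as the training stage
    scene_face = preprocess(scene_photo)
    emb = model(torch.stack([to_tensor(id_face), to_tensor(scene_face)]))  # forward pass
    similarity = torch.nn.functional.cosine_similarity(emb[0:1], emb[1:2]).item()
    return similarity > threshold            # above threshold -> same person, else different people
```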
In the above face authentication method, whereas the original triplet loss function only defines the learning relation for the between-class distance, the improved triplet loss function adds a constraint term on the within-class distance, so that during training the network reduces the within-class gap as much as possible while increasing the between-class gap, thereby improving the generalization ability of the network and the scene adaptability of the model. In addition, the cosine distance replaces the Euclidean distance metric in the original triplet loss, which better matches the distribution properties of the face feature space and keeps the metric consistent between the training stage and the deployment stage, so that the comparison result is more reliable.
In one embodiment, a face authentication device is provided, as shown in Fig. 9, including: an image acquisition module 902, an image preprocessing module 904, a feature acquisition module 906, a computing module 908 and an authentication module 910.
The image acquisition module 902 is configured to obtain a certificate photograph and a scene photograph of a person based on a face authentication request.
The image preprocessing module 904 is configured to perform face detection, key point location and image preprocessing on the scene photograph and the certificate photograph respectively, to obtain a scene face image corresponding to the scene photograph and a certificate face image corresponding to the certificate photograph.
The feature acquisition module 906 is configured to input the scene face image and the certificate face image into the convolutional neural network model trained in advance for face authentication, and to obtain the first feature vector corresponding to the scene face image and the second feature vector corresponding to the certificate face image output by the convolutional neural network model; wherein the convolutional neural network model is obtained by supervised training based on a triplet loss function.
The computing module 908 is configured to calculate the cosine distance between the first feature vector and the second feature vector.
The authentication module 910 is configured to compare the cosine distance with a predetermined threshold, and to determine the face authentication result according to the comparison result.
In the above face authentication device, face authentication is performed with a convolutional neural network trained in advance. Since the convolutional neural network model is obtained by supervised training based on the improved triplet loss function, and the similarity between the scene face image and the certificate face image is calculated as the cosine distance between the first feature vector corresponding to the scene face image and the second feature vector corresponding to the certificate face image, and since the cosine distance measures the angle between space vectors and reflects differences in direction rather than in position, the device better matches the distribution properties of the face feature space and improves the reliability of face authentication.
As shown in Fig. 10, in another embodiment, the face authentication device further includes: a sample acquisition module 912, a triplet acquisition module 914, a training module 916 and a verification module 918.
The sample acquisition module 912 is configured to obtain labelled training samples, the training samples including, for each labelled object, one labelled certificate face image and at least one labelled scene face image.
The triplet acquisition module 914 is configured to train the convolutional neural network model according to the training samples and to generate the triplet elements corresponding to each training sample by OHEM; the triplet elements include a reference sample, a positive sample and a negative sample.
Specifically, the triplet acquisition module 914 is configured to randomly select one image as the reference sample, and to select an image that belongs to the same labelled object but to a different category from the reference sample as the positive sample; it is further configured to, according to the OHEM strategy, use the cosine distance between features extracted by the convolutional neural network model currently being trained and, for each reference sample, select from the face images that do not belong to the same labelled object the closest image that belongs to a different category from the reference sample, as the negative sample of that reference sample.
Specifically, when the certificate photo is the reference sample, the positive sample and the negative sample are scene photos; when a scene photo is the reference sample, the positive sample and the negative sample are certificate photos.
The training module 916 is configured to train the convolutional neural network model according to the triplet elements of each training sample, under the supervision of a triplet loss function; the triplet loss function uses the cosine distance as its metric, and the model parameters are optimized by a stochastic gradient descent algorithm.
Specifically, the improved triplet loss function includes a constraint on the cosine distance of same-class samples and a constraint on the cosine distance of different-class samples.
The improved triplet loss function is:
$$L = \sum_{i=1}^{N}\Big\{\big[\cos\big(f(x_i^a), f(x_i^n)\big) - \cos\big(f(x_i^a), f(x_i^p)\big) + \alpha_1\big]_+ + \big[\alpha_2 - \cos\big(f(x_i^a), f(x_i^p)\big)\big]_+\Big\}$$
wherein $\cos(\cdot)$ denotes the cosine distance, computed as $\cos(x, y) = \dfrac{x \cdot y}{\lVert x \rVert\,\lVert y \rVert}$; $N$ is the number of triplets; $f(x_i^a)$ is the feature vector of the reference sample, $f(x_i^p)$ is the feature vector of the same-class positive sample, and $f(x_i^n)$ is the feature vector of the different-class negative sample; $[\,z\,]_+ = \max(z, 0)$; $\alpha_1$ is the between-class margin parameter and $\alpha_2$ is the within-class margin parameter.
The verification module 918 is configured to input validation set data into the convolutional neural network model, and to obtain the trained convolutional neural network model for face authentication when the training termination condition is reached.
In another embodiment, the face authentication device further includes a model initialization module 920, configured to initialize with base model parameters trained on massive open-source face data, and to add a normalization layer and a triplet loss function layer after the feature output layer, to obtain the convolutional neural network to be trained. In the above face authentication device, on the one hand a constraint on the within-class sample distance is added to the original triplet loss function, so that the within-class gap is reduced while the between-class gap is increased, improving the generalization ability of the model; on the other hand, the metric of the original triplet loss is changed from the Euclidean distance to the cosine distance, keeping the metric consistent between training and deployment and improving the continuity of feature learning.
A computer equipment includes a memory, a processor and a computer program stored in the memory and executable on the processor, wherein the processor, when executing the computer program, implements the steps of the face authentication method of the above embodiments.
A storage medium has a computer program stored thereon, wherein the computer program, when executed by a processor, implements the steps of the face authentication method of the above embodiments.
Each technical characteristic of embodiment described above can be combined arbitrarily, to make description succinct, not to above-mentioned reality Apply all possible combination of each technical characteristic in example to be all described, as long as however, the combination of these technical characteristics is not deposited In contradiction, the scope that this specification is recorded all is considered to be.
The above embodiments express only several implementations of the present invention, and their descriptions are relatively specific and detailed, but they shall not therefore be construed as limiting the scope of the patent. It should be noted that a person of ordinary skill in the art may make various modifications and improvements without departing from the inventive concept, and such modifications and improvements all fall within the protection scope of the present invention. Therefore, the protection scope of the present patent shall be determined by the appended claims.

Claims (10)

1. A face authentication method based on Triplet Loss, comprising:
acquiring, based on a face authentication request, a certificate photograph and a scene photograph of a person;
performing face detection, key point location, and image preprocessing on the scene photograph and the certificate photograph respectively, to obtain a scene facial image corresponding to the scene photograph and a certificate facial image corresponding to the certificate photograph;
inputting the scene facial image and the certificate facial image into a convolutional neural network model trained in advance for face authentication, and obtaining a first feature vector corresponding to the scene facial image and a second feature vector corresponding to the certificate facial image output by the convolutional neural network model, wherein the convolutional neural network model is obtained by supervised training based on a triplet loss function;
calculating the cosine distance between the first feature vector and the second feature vector;
comparing the cosine distance with a predetermined threshold, and determining a face authentication result according to the comparison result (an illustrative sketch of this verification flow is given after the claims).
2. The method according to claim 1, wherein the method further comprises:
obtaining labeled training samples, the training samples comprising, for each tagged object, one labeled certificate facial image and at least one labeled scene facial image;
training the convolutional neural network model according to the training samples, and generating triple elements corresponding to each training sample by OHEM, the triple elements comprising a reference sample, a positive sample, and a negative sample;
training the convolutional neural network model under the supervision of the triplet loss function according to the triple elements of each training sample, wherein the triplet loss function uses cosine distance as the metric and the model parameters are optimized by a stochastic gradient descent algorithm;
inputting validation set data into the convolutional neural network model, and obtaining, when a training termination condition is reached, the trained convolutional neural network model for face authentication.
3. The method according to claim 2, wherein the step of training the convolutional neural network model according to the training samples and generating the triple elements corresponding to each training sample by OHEM comprises:
randomly selecting an image as the reference sample, and selecting an image that belongs to the same tagged object but is of a different category from the reference sample as the positive sample;
according to the OHEM strategy, using the cosine distance between features extracted by the currently trained convolutional neural network model, selecting for each reference sample, from the images that do not belong to the same tagged object, the image with the smallest distance to the reference sample and of a different category from the reference sample, as the negative sample of the reference sample.
4. The method according to claim 2, wherein the triplet loss function comprises a constraint on the cosine distance between samples of the same class and a constraint on the cosine distance between samples of different classes.
5. The method according to claim 4, wherein the triplet loss function is:
$$\tilde{L} = \sum_{i}^{N}\left[\cos\!\left(f(x_i^a), f(x_i^n)\right) - \cos\!\left(f(x_i^a), f(x_i^p)\right) + \alpha_1\right]_+ + \sum_{i}^{N}\left[\alpha_2 - \cos\!\left(f(x_i^a), f(x_i^p)\right)\right]_+$$
where $\cos(\cdot,\cdot)$ denotes the cosine distance, computed as $\cos(x,y)=\frac{x\cdot y}{\|x\|\,\|y\|}$; $N$ is the number of triplets; $f(x_i^a)$ denotes the feature vector of the reference sample, $f(x_i^p)$ the feature vector of the same-class positive sample, and $f(x_i^n)$ the feature vector of the different-class negative sample; $[\,\cdot\,]_+$ denotes $\max(\cdot,0)$; $\alpha_1$ is the inter-class margin parameter and $\alpha_2$ is the intra-class margin parameter.
6. The method according to claim 2, wherein the method further comprises: initializing the network with base model parameters pre-trained on massive open-source face data, and adding a normalization layer and a triplet loss function layer after the feature output layer, to obtain the convolutional neural network model to be trained.
7. A face authentication device based on Triplet Loss, comprising: an image acquisition module, an image preprocessing module, a feature acquisition module, a calculation module, and an authentication module;
the image acquisition module being configured to acquire, based on a face authentication request, a certificate photograph and a scene photograph of a person;
the image preprocessing module being configured to perform face detection, key point location, and image preprocessing on the scene photograph and the certificate photograph respectively, to obtain a scene facial image corresponding to the scene photograph and a certificate facial image corresponding to the certificate photograph;
the feature acquisition module being configured to input the scene facial image and the certificate facial image into a convolutional neural network model trained in advance for face authentication, and to obtain a first feature vector corresponding to the scene facial image and a second feature vector corresponding to the certificate facial image output by the convolutional neural network model, wherein the convolutional neural network model is obtained by supervised training based on a triplet loss function;
the calculation module being configured to calculate the cosine distance between the first feature vector and the second feature vector;
the authentication module being configured to compare the cosine distance with a predetermined threshold, and to determine a face authentication result according to the comparison result.
8. The device according to claim 7, wherein the device further comprises: a sample acquisition module, a triple acquisition module, a training module, and a verification module;
the sample acquisition module being configured to obtain labeled training samples, the training samples comprising, for each tagged object, one labeled certificate facial image and at least one labeled scene facial image;
the triple acquisition module being configured to train the convolutional neural network model according to the training samples and to generate, by OHEM, the triple elements corresponding to each training sample, the triple elements comprising a reference sample, a positive sample, and a negative sample;
the training module being configured to train the convolutional neural network model under the supervision of the triplet loss function according to the triple elements of each training sample, the triplet loss function using cosine distance as the metric and the model parameters being optimized by a stochastic gradient descent algorithm;
the verification module being configured to input validation set data into the convolutional neural network model and, when a training termination condition is reached, to obtain the trained convolutional neural network model for face authentication.
9. A computer device, comprising a memory, a processor, and a computer program stored in the memory and executable on the processor, wherein the processor, when executing the computer program, implements the steps of the face authentication method based on Triplet Loss according to any one of claims 1 to 6.
10. A storage medium having a computer program stored thereon, wherein the computer program, when executed by a processor, implements the steps of the face authentication method based on Triplet Loss according to any one of claims 1 to 6.
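Purely as an illustration of the verification flow recited in claim 1 (and not part of the claims themselves), the sketch below compares a scene facial image against a certificate facial image using a trained embedding network; the function name, the treatment of the cosine score as a similarity, and the threshold value are all assumptions.

```python
import torch
import torch.nn.functional as F

@torch.no_grad()
def verify(model, scene_face, certificate_face, threshold=0.5):
    """Compare a scene facial image against a certificate facial image.
    scene_face, certificate_face: preprocessed (1, C, H, W) image tensors.
    Returns True when the cosine score is at or above the predetermined threshold."""
    model.eval()
    first_vec = model(scene_face)          # first feature vector (scene facial image)
    second_vec = model(certificate_face)   # second feature vector (certificate facial image)
    score = F.cosine_similarity(first_vec, second_vec, dim=1).item()
    return score >= threshold
```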
CN201711436879.4A 2017-12-26 2017-12-26 Triple Loss-based face authentication method and device, computer equipment and storage medium Active CN108009528B (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
CN201711436879.4A CN108009528B (en) 2017-12-26 2017-12-26 Triple Loss-based face authentication method and device, computer equipment and storage medium
PCT/CN2018/109169 WO2019128367A1 (en) 2017-12-26 2018-09-30 Face verification method and apparatus based on triplet loss, and computer device and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201711436879.4A CN108009528B (en) 2017-12-26 2017-12-26 Triple Loss-based face authentication method and device, computer equipment and storage medium

Publications (2)

Publication Number Publication Date
CN108009528A true CN108009528A (en) 2018-05-08
CN108009528B CN108009528B (en) 2020-04-07

Family

ID=62061566

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201711436879.4A Active CN108009528B (en) 2017-12-26 2017-12-26 Triple Loss-based face authentication method and device, computer equipment and storage medium

Country Status (2)

Country Link
CN (1) CN108009528B (en)
WO (1) WO2019128367A1 (en)

Cited By (67)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108921033A (en) * 2018-06-04 2018-11-30 北京京东金融科技控股有限公司 Face picture comparison method, device, medium and electronic equipment
CN108921952A (en) * 2018-06-15 2018-11-30 深圳大学 Object functionality prediction technique, device, computer equipment and storage medium
CN108922542A (en) * 2018-06-01 2018-11-30 平安科技(深圳)有限公司 Acquisition methods, device, computer equipment and the storage medium of sample triple
CN108960342A (en) * 2018-08-01 2018-12-07 中国计量大学 Based on the image similarity calculation method for improving SoftMax loss function
CN108960209A (en) * 2018-08-09 2018-12-07 腾讯科技(深圳)有限公司 Personal identification method, device and computer readable storage medium
CN108985198A (en) * 2018-07-02 2018-12-11 四川斐讯信息技术有限公司 A kind of COS distance calculation method based on big data feature vector
CN109145991A (en) * 2018-08-24 2019-01-04 北京地平线机器人技术研发有限公司 Image group generation method, image group generating means and electronic equipment
CN109145956A (en) * 2018-07-26 2019-01-04 上海慧子视听科技有限公司 Methods of marking, device, computer equipment and storage medium
CN109145704A (en) * 2018-06-14 2019-01-04 西安电子科技大学 A kind of human face portrait recognition methods based on face character
CN109165589A (en) * 2018-08-14 2019-01-08 北京颂泽科技有限公司 Vehicle based on deep learning recognition methods and device again
CN109214361A (en) * 2018-10-18 2019-01-15 康明飞(北京)科技有限公司 A kind of face identification method and device and ticket verification method and device
CN109271877A (en) * 2018-08-24 2019-01-25 北京智芯原动科技有限公司 A kind of human figure identification method and device
CN109344740A (en) * 2018-09-12 2019-02-15 上海了物网络科技有限公司 Face identification system, method and computer readable storage medium
CN109359541A (en) * 2018-09-17 2019-02-19 南京邮电大学 A kind of sketch face identification method based on depth migration study
CN109492583A (en) * 2018-11-09 2019-03-19 安徽大学 A kind of recognition methods again of the vehicle based on deep learning
CN109522850A (en) * 2018-11-22 2019-03-26 中山大学 A kind of movement similarity estimating method based on small-sample learning
CN109543524A (en) * 2018-10-18 2019-03-29 同盾控股有限公司 A kind of image-recognizing method, device
CN109685121A (en) * 2018-12-11 2019-04-26 中国科学院苏州纳米技术与纳米仿生研究所 Training method, image search method, the computer equipment of image encrypting algorithm
CN109685106A (en) * 2018-11-19 2019-04-26 深圳博为教育科技有限公司 A kind of image-recognizing method, face Work attendance method, device and system
CN109711358A (en) * 2018-12-28 2019-05-03 四川远鉴科技有限公司 Neural network training method, face identification method and system and storage medium
CN109711443A (en) * 2018-12-14 2019-05-03 平安城市建设科技(深圳)有限公司 Floor plan recognition methods, device, equipment and storage medium neural network based
CN109815801A (en) * 2018-12-18 2019-05-28 北京英索科技发展有限公司 Face identification method and device based on deep learning
CN109886186A (en) * 2019-02-18 2019-06-14 上海骏聿数码科技有限公司 A kind of face identification method and device
CN109948568A (en) * 2019-03-26 2019-06-28 东华大学 Embedded human face identifying system based on ARM microprocessor and deep learning
WO2019128367A1 (en) * 2017-12-26 2019-07-04 广州广电运通金融电子股份有限公司 Face verification method and apparatus based on triplet loss, and computer device and storage medium
CN110147732A (en) * 2019-04-16 2019-08-20 平安科技(深圳)有限公司 Refer to vein identification method, device, computer equipment and storage medium
CN110147833A (en) * 2019-05-09 2019-08-20 北京迈格威科技有限公司 Facial image processing method, apparatus, system and readable storage medium storing program for executing
CN110213660A (en) * 2019-05-27 2019-09-06 广州荔支网络技术有限公司 Distribution method, system, computer equipment and the storage medium of program
CN110598840A (en) * 2018-06-13 2019-12-20 富士通株式会社 Knowledge migration method, information processing apparatus, and storage medium
CN110674688A (en) * 2019-08-19 2020-01-10 深圳力维智联技术有限公司 Face recognition model acquisition method, system and medium for video monitoring scene
CN110738071A (en) * 2018-07-18 2020-01-31 浙江中正智能科技有限公司 face algorithm model training method based on deep learning and transfer learning
CN110765933A (en) * 2019-10-22 2020-02-07 山西省信息产业技术研究院有限公司 Dynamic portrait sensing comparison method applied to driver identity authentication system
CN110874602A (en) * 2018-08-30 2020-03-10 北京嘀嘀无限科技发展有限公司 Image identification method and device
CN110929099A (en) * 2019-11-28 2020-03-27 杭州趣维科技有限公司 Short video frame semantic extraction method and system based on multitask learning
CN111062338A (en) * 2019-12-19 2020-04-24 厦门商集网络科技有限责任公司 Certificate portrait consistency comparison method and system
CN111178249A (en) * 2019-12-27 2020-05-19 杭州艾芯智能科技有限公司 Face comparison method and device, computer equipment and storage medium
CN111325223A (en) * 2018-12-13 2020-06-23 中国电信股份有限公司 Deep learning model training method and device and computer readable storage medium
WO2020125404A1 (en) * 2018-12-19 2020-06-25 北京世纪好未来教育科技有限公司 Method and apparatus for constructing neural network and computer-readable medium
CN111368644A (en) * 2020-02-14 2020-07-03 深圳市商汤科技有限公司 Image processing method, image processing device, electronic equipment and storage medium
CN111401277A (en) * 2020-03-20 2020-07-10 深圳前海微众银行股份有限公司 Face recognition model updating method, device, equipment and medium
CN111401257A (en) * 2020-03-17 2020-07-10 天津理工大学 Non-constraint condition face recognition method based on cosine loss
WO2020147408A1 (en) * 2019-01-16 2020-07-23 平安科技(深圳)有限公司 Facial recognition model evaluation method and apparatus, and storage medium and computer device
CN111461152A (en) * 2019-01-21 2020-07-28 同方威视技术股份有限公司 Cargo detection method and device, electronic equipment and computer readable medium
CN111709313A (en) * 2020-05-27 2020-09-25 杭州电子科技大学 Pedestrian re-identification method based on local and channel combination characteristics
CN111723222A (en) * 2019-03-19 2020-09-29 Sap欧洲公司 Image search and training system
CN111832364A (en) * 2019-04-22 2020-10-27 普天信息技术有限公司 Face recognition method and device
CN112052821A (en) * 2020-09-15 2020-12-08 浙江智慧视频安防创新中心有限公司 Fire fighting channel safety detection method, device, equipment and storage medium
CN112116021A (en) * 2020-09-27 2020-12-22 广州华多网络科技有限公司 Gem similarity measurement data processing method and related equipment
CN112492383A (en) * 2020-12-03 2021-03-12 珠海格力电器股份有限公司 Video frame generation method and device, storage medium and electronic equipment
CN112580406A (en) * 2019-09-30 2021-03-30 北京中关村科金技术有限公司 Face comparison method and device and storage medium
WO2021073150A1 (en) * 2019-10-16 2021-04-22 平安科技(深圳)有限公司 Data detection method and apparatus, and computer device and storage medium
CN112949780A (en) * 2020-04-21 2021-06-11 佳都科技集团股份有限公司 Feature model training method, device, equipment and storage medium
CN113065495A (en) * 2021-04-13 2021-07-02 深圳技术大学 Image similarity calculation method, target object re-identification method and system
CN113269155A (en) * 2021-06-28 2021-08-17 苏州市科远软件技术开发有限公司 End-to-end face recognition method, device, equipment and storage medium
CN113283359A (en) * 2021-06-02 2021-08-20 万达信息股份有限公司 Authentication method and system for handheld certificate photo and electronic equipment
CN113344875A (en) * 2021-06-07 2021-09-03 武汉象点科技有限公司 Abnormal image detection method based on self-supervision learning
CN113486804A (en) * 2021-07-07 2021-10-08 科大讯飞股份有限公司 Object identification method, device, equipment and storage medium
CN113538075A (en) * 2020-04-14 2021-10-22 阿里巴巴集团控股有限公司 Data processing method, model training method, device and equipment
CN113642468A (en) * 2021-08-16 2021-11-12 中国银行股份有限公司 Identity authentication method and device
CN113688793A (en) * 2021-09-22 2021-11-23 万章敏 Training method of face model and face recognition system
CN114283483A (en) * 2021-12-31 2022-04-05 中国联合网络通信集团有限公司 Face recognition method and device
CN114387457A (en) * 2021-12-27 2022-04-22 腾晖科技建筑智能(深圳)有限公司 Face intra-class interval optimization method based on parameter adjustment
CN116127298A (en) * 2023-02-22 2023-05-16 北京邮电大学 Small sample radio frequency fingerprint identification method based on triplet loss
WO2023093838A1 (en) * 2021-11-25 2023-06-01 北京字跳网络技术有限公司 Super-resolution image processing method and apparatus, and device and medium
US20230169792A1 (en) * 2021-11-26 2023-06-01 GenkiTek Technology Co., Ltd. System and method of mode selection face recognition with parallel cnns
EP4383212A1 (en) * 2022-12-06 2024-06-12 Bundesdruckerei GmbH Authentication device and method for authenticating a person by means of an identification document associated with the person, and identification document and method for producing
CN118337532A (en) * 2024-06-13 2024-07-12 浙江鹏信信息科技股份有限公司 Zero trust-based traffic safety audit protection method and system and readable medium

Families Citing this family (77)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110516533B (en) * 2019-07-11 2023-06-02 同济大学 Pedestrian re-identification method based on depth measurement
CN110414431B (en) * 2019-07-29 2022-12-27 广州像素数据技术股份有限公司 Face recognition method and system based on elastic context relation loss function
CN110647880A (en) * 2019-08-12 2020-01-03 深圳市华付信息技术有限公司 Mobile terminal identity card image shielding judgment method
CN110458233B (en) * 2019-08-13 2024-02-13 腾讯云计算(北京)有限责任公司 Mixed granularity object recognition model training and recognition method, device and storage medium
CN110705357A (en) * 2019-09-02 2020-01-17 深圳中兴网信科技有限公司 Face recognition method and face recognition device
CN110555478B (en) * 2019-09-05 2023-02-03 东北大学 Fan multi-fault diagnosis method based on depth measurement network of difficult sample mining
CN111008550A (en) * 2019-09-06 2020-04-14 上海芯灵科技有限公司 Identification method for finger vein authentication identity based on Multiple loss function
CN110674637B (en) * 2019-09-06 2023-07-11 腾讯科技(深圳)有限公司 Character relationship recognition model training method, device, equipment and medium
CN110705393B (en) * 2019-09-17 2023-02-03 中国计量大学 Method for improving face recognition performance of community population
CN110647938B (en) * 2019-09-24 2022-07-15 北京市商汤科技开发有限公司 Image processing method and related device
CN112560880A (en) * 2019-09-25 2021-03-26 中国电信股份有限公司 Object classification method, object classification apparatus, and computer-readable storage medium
CN112733574B (en) * 2019-10-14 2023-04-07 中移(苏州)软件技术有限公司 Face recognition method and device and computer readable storage medium
CN110796057A (en) * 2019-10-22 2020-02-14 上海交通大学 Pedestrian re-identification method and device and computer equipment
CN110852367B (en) * 2019-11-05 2023-10-31 上海联影智能医疗科技有限公司 Image classification method, computer device, and storage medium
CN110956098B (en) * 2019-11-13 2023-05-12 深圳数联天下智能科技有限公司 Image processing method and related equipment
CN111126360B (en) * 2019-11-15 2023-03-24 西安电子科技大学 Cross-domain pedestrian re-identification method based on unsupervised combined multi-loss model
CN111222411B (en) * 2019-11-28 2023-09-01 中国船舶重工集团公司第七一三研究所 Laser emission safety rapid alarm method and device
CN111079566B (en) * 2019-11-28 2023-05-02 深圳市信义科技有限公司 Large-scale face recognition model optimization system
CN111144240B (en) * 2019-12-12 2023-02-07 深圳数联天下智能科技有限公司 Image processing method and related equipment
CN111062430B (en) * 2019-12-12 2023-05-09 易诚高科(大连)科技有限公司 Pedestrian re-identification evaluation method based on probability density function
CN111091089B (en) * 2019-12-12 2022-07-29 新华三大数据技术有限公司 Face image processing method and device, electronic equipment and storage medium
CN111126240B (en) * 2019-12-19 2023-04-07 西安工程大学 Three-channel feature fusion face recognition method
CN111191563A (en) * 2019-12-26 2020-05-22 三盟科技股份有限公司 Face recognition method and system based on data sample and test data set training
CN111241925B (en) * 2019-12-30 2023-08-18 新大陆数字技术股份有限公司 Face quality assessment method, system, electronic equipment and readable storage medium
CN111209839B (en) * 2019-12-31 2023-05-23 上海涛润医疗科技有限公司 Face recognition method
CN111198964B (en) * 2020-01-10 2023-04-25 中国科学院自动化研究所 Image retrieval method and system
CN111274946B (en) * 2020-01-19 2023-05-05 杭州涂鸦信息技术有限公司 Face recognition method, system and equipment
CN113362096A (en) * 2020-03-04 2021-09-07 驰众信息技术(上海)有限公司 Frame advertisement image matching method based on deep learning
CN111368766B (en) * 2020-03-09 2023-08-18 云南安华防灾减灾科技有限责任公司 Deep learning-based cow face detection and recognition method
CN111539247B (en) * 2020-03-10 2023-02-10 西安电子科技大学 Hyper-spectrum face recognition method and device, electronic equipment and storage medium thereof
CN111429414B (en) * 2020-03-18 2023-04-07 腾讯科技(深圳)有限公司 Artificial intelligence-based focus image sample determination method and related device
CN111414862B (en) * 2020-03-22 2023-03-24 西安电子科技大学 Expression recognition method based on neural network fusion key point angle change
CN111507289A (en) * 2020-04-22 2020-08-07 上海眼控科技股份有限公司 Video matching method, computer device and storage medium
CN111582107B (en) * 2020-04-28 2023-09-29 浙江大华技术股份有限公司 Training method and recognition method of target re-recognition model, electronic equipment and device
CN111639535B (en) * 2020-04-29 2023-08-22 深圳英飞拓智能技术有限公司 Face recognition method and device based on deep learning
CN111626212B (en) * 2020-05-27 2023-09-26 腾讯科技(深圳)有限公司 Method and device for identifying object in picture, storage medium and electronic device
CN113807122A (en) * 2020-06-11 2021-12-17 阿里巴巴集团控股有限公司 Model training method, object recognition method and device, and storage medium
CN111738157B (en) * 2020-06-23 2023-07-21 平安科技(深圳)有限公司 Face action unit data set construction method and device and computer equipment
CN112257738A (en) * 2020-07-31 2021-01-22 北京京东尚科信息技术有限公司 Training method and device of machine learning model and classification method and device of image
CN111988614B (en) * 2020-08-14 2022-09-13 深圳前海微众银行股份有限公司 Hash coding optimization method and device and readable storage medium
CN112069993B (en) * 2020-09-04 2024-02-13 西安西图之光智能科技有限公司 Dense face detection method and system based on five-sense organ mask constraint and storage medium
CN112084956A (en) * 2020-09-11 2020-12-15 上海交通大学烟台信息技术研究院 Special face crowd screening system based on small sample learning prototype network
CN112287765B (en) * 2020-09-30 2024-06-04 新大陆数字技术股份有限公司 Face living body detection method, device, equipment and readable storage medium
CN112329826B (en) * 2020-10-24 2024-10-18 中国人民解放军空军军医大学 Training method of image recognition model, image recognition method and device
CN112307968B (en) * 2020-10-30 2022-11-08 天地伟业技术有限公司 Face recognition feature compression method
CN112328786A (en) * 2020-11-03 2021-02-05 平安科技(深圳)有限公司 Text classification method and device based on BERT, computer equipment and storage medium
GB2600922B (en) * 2020-11-05 2024-04-10 Thales Holdings Uk Plc One shot learning for identifying data items similar to a query data item
CN112347957A (en) * 2020-11-12 2021-02-09 广联达科技股份有限公司 Pedestrian re-identification method and device, computer equipment and storage medium
CN112200154A (en) * 2020-11-17 2021-01-08 苏州方正璞华信息技术有限公司 Face recognition method and device for mask, electronic equipment and storage medium
CN112836566A (en) * 2020-12-01 2021-05-25 北京智云视图科技有限公司 Multitask neural network face key point detection method for edge equipment
CN112836719B (en) * 2020-12-11 2024-01-05 南京富岛信息工程有限公司 Indicator diagram similarity detection method integrating two classifications and triplets
CN112580011B (en) * 2020-12-25 2022-05-24 华南理工大学 Portrait encryption and decryption system facing biological feature privacy protection
CN112861626B (en) * 2021-01-04 2024-03-08 西北工业大学 Fine granularity expression classification method based on small sample learning
CN113762019B (en) * 2021-01-22 2024-04-09 北京沃东天骏信息技术有限公司 Training method of feature extraction network, face recognition method and device
CN112836629B (en) * 2021-02-01 2024-03-08 清华大学深圳国际研究生院 Image classification method
CN112966724B (en) * 2021-02-07 2024-04-09 惠州市博实结科技有限公司 Method and device for classifying image single categories
CN112766237A (en) * 2021-03-12 2021-05-07 东北林业大学 Unsupervised pedestrian re-identification method based on cluster feature point clustering
CN113157956B (en) * 2021-04-23 2022-08-05 雅马哈发动机(厦门)信息系统有限公司 Picture searching method, system, mobile terminal and storage medium
CN113344031B (en) * 2021-05-13 2022-12-27 清华大学 Text classification method
CN113469253B (en) * 2021-07-02 2024-05-14 河海大学 Electric larceny detection method based on triple twinning network
CN113705383A (en) * 2021-08-12 2021-11-26 南京英诺森软件科技有限公司 Cross-age face recognition method and system based on ternary constraint
CN113435545A (en) * 2021-08-14 2021-09-24 北京达佳互联信息技术有限公司 Training method and device of image processing model
CN113642481A (en) * 2021-08-17 2021-11-12 百度在线网络技术(北京)有限公司 Recognition method, training method, device, electronic equipment and storage medium
CN113569991B (en) * 2021-08-26 2024-05-28 深圳市捷顺科技实业股份有限公司 Person evidence comparison model training method, computer equipment and computer storage medium
CN113780461B (en) * 2021-09-23 2022-08-05 中国人民解放军国防科技大学 Robust neural network training method based on feature matching
CN113887653B (en) * 2021-10-20 2024-02-06 西安交通大学 Positioning method and system for tight coupling weak supervision learning based on ternary network
CN114049479A (en) * 2021-11-10 2022-02-15 苏州魔视智能科技有限公司 Self-supervision fisheye camera image feature point extraction method and device and storage medium
CN114445871A (en) * 2021-12-22 2022-05-06 天翼云科技有限公司 Intelligent video monitoring method and system for face recognition
CN114863425B (en) * 2022-03-31 2024-10-29 哈尔滨工程大学 Urine erythrocyte classification method based on supervised contrast learning
CN114882558B (en) * 2022-04-29 2024-02-23 陕西师范大学 Learning scene real-time identity authentication method based on face recognition technology
CN114663965B (en) * 2022-05-24 2022-10-21 之江实验室 Testimony comparison method and device based on two-stage alternative learning
CN114926445B (en) * 2022-05-31 2024-03-26 哈尔滨工业大学 Small sample crop disease image identification method and system based on twin network
CN114817888A (en) * 2022-06-27 2022-07-29 中国信息通信研究院 Certificate registering and issuing method, device and storage medium
CN116206355A (en) * 2023-04-25 2023-06-02 鹏城实验室 Face recognition model training, image registration and face recognition method and device
CN116959064B (en) * 2023-06-25 2024-04-26 上海腾桥信息技术有限公司 Certificate verification method and device, computer equipment and storage medium
CN116977461B (en) * 2023-06-30 2024-03-08 北京开普云信息科技有限公司 Portrait generation method, device, storage medium and equipment for specific scene
CN118410192B (en) * 2024-06-24 2024-10-11 广州思德医疗科技有限公司 Endoscopic image deduplication method, endoscopic image deduplication apparatus, endoscopic image deduplication computer device, endoscopic image deduplication program product, and endoscopic image deduplication program product

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106599827A (en) * 2016-12-09 2017-04-26 浙江工商大学 Small target rapid detection method based on deep convolution neural network
CN107194341A (en) * 2017-05-16 2017-09-22 西安电子科技大学 The many convolution neural network fusion face identification methods of Maxout and system

Family Cites Families (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US9129216B1 (en) * 2013-07-15 2015-09-08 Xdroid Kft. System, method and apparatus for computer aided association of relevant images with text
CN107423690B (en) * 2017-06-26 2020-11-13 广东工业大学 Face recognition method and device
CN108009528B (en) * 2017-12-26 2020-04-07 广州广电运通金融电子股份有限公司 Triple Loss-based face authentication method and device, computer equipment and storage medium

Patent Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106599827A (en) * 2016-12-09 2017-04-26 浙江工商大学 Small target rapid detection method based on deep convolution neural network
CN107194341A (en) * 2017-05-16 2017-09-22 西安电子科技大学 The many convolution neural network fusion face identification methods of Maxout and system

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
BRANDON,AMOS等: "OpenFace:A general-purpose face recognition library with mobile applications", 《CMU SCHOOL OF COMPUTER SCIENCE,TECH. REP.》 *

Cited By (91)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2019128367A1 (en) * 2017-12-26 2019-07-04 广州广电运通金融电子股份有限公司 Face verification method and apparatus based on triplet loss, and computer device and storage medium
CN108922542A (en) * 2018-06-01 2018-11-30 平安科技(深圳)有限公司 Acquisition methods, device, computer equipment and the storage medium of sample triple
WO2019227613A1 (en) * 2018-06-01 2019-12-05 平安科技(深圳)有限公司 Method and device for obtaining triple of samples, computer device and storage medium
CN108921033A (en) * 2018-06-04 2018-11-30 北京京东金融科技控股有限公司 Face picture comparison method, device, medium and electronic equipment
CN110598840B (en) * 2018-06-13 2023-04-18 富士通株式会社 Knowledge migration method, information processing apparatus, and storage medium
CN110598840A (en) * 2018-06-13 2019-12-20 富士通株式会社 Knowledge migration method, information processing apparatus, and storage medium
CN109145704A (en) * 2018-06-14 2019-01-04 西安电子科技大学 A kind of human face portrait recognition methods based on face character
CN108921952A (en) * 2018-06-15 2018-11-30 深圳大学 Object functionality prediction technique, device, computer equipment and storage medium
WO2019237450A1 (en) * 2018-06-15 2019-12-19 深圳大学 Object functionality prediction method, computer device, and storage medium
US11288538B2 (en) 2018-06-15 2022-03-29 Shenzhen University Object functionality predication methods, computer device, and storage medium
CN108985198A (en) * 2018-07-02 2018-12-11 四川斐讯信息技术有限公司 A kind of COS distance calculation method based on big data feature vector
CN110738071A (en) * 2018-07-18 2020-01-31 浙江中正智能科技有限公司 face algorithm model training method based on deep learning and transfer learning
CN109145956A (en) * 2018-07-26 2019-01-04 上海慧子视听科技有限公司 Methods of marking, device, computer equipment and storage medium
CN109145956B (en) * 2018-07-26 2021-12-14 上海慧子视听科技有限公司 Scoring method, scoring device, computer equipment and storage medium
CN108960342A (en) * 2018-08-01 2018-12-07 中国计量大学 Based on the image similarity calculation method for improving SoftMax loss function
CN108960209A (en) * 2018-08-09 2018-12-07 腾讯科技(深圳)有限公司 Personal identification method, device and computer readable storage medium
CN108960209B (en) * 2018-08-09 2023-07-21 腾讯科技(深圳)有限公司 Identity recognition method, identity recognition device and computer readable storage medium
CN109165589B (en) * 2018-08-14 2021-02-23 北京颂泽科技有限公司 Vehicle weight recognition method and device based on deep learning
CN109165589A (en) * 2018-08-14 2019-01-08 北京颂泽科技有限公司 Vehicle based on deep learning recognition methods and device again
CN109145991A (en) * 2018-08-24 2019-01-04 北京地平线机器人技术研发有限公司 Image group generation method, image group generating means and electronic equipment
CN109145991B (en) * 2018-08-24 2020-07-31 北京地平线机器人技术研发有限公司 Image group generation method, image group generation device and electronic equipment
CN109271877A (en) * 2018-08-24 2019-01-25 北京智芯原动科技有限公司 A kind of human figure identification method and device
CN110874602A (en) * 2018-08-30 2020-03-10 北京嘀嘀无限科技发展有限公司 Image identification method and device
CN109344740A (en) * 2018-09-12 2019-02-15 上海了物网络科技有限公司 Face identification system, method and computer readable storage medium
CN109359541A (en) * 2018-09-17 2019-02-19 南京邮电大学 A kind of sketch face identification method based on depth migration study
CN109543524A (en) * 2018-10-18 2019-03-29 同盾控股有限公司 A kind of image-recognizing method, device
CN109214361A (en) * 2018-10-18 2019-01-15 康明飞(北京)科技有限公司 A kind of face identification method and device and ticket verification method and device
CN109492583A (en) * 2018-11-09 2019-03-19 安徽大学 A kind of recognition methods again of the vehicle based on deep learning
CN109685106A (en) * 2018-11-19 2019-04-26 深圳博为教育科技有限公司 A kind of image-recognizing method, face Work attendance method, device and system
CN109522850B (en) * 2018-11-22 2023-03-10 中山大学 Action similarity evaluation method based on small sample learning
CN109522850A (en) * 2018-11-22 2019-03-26 中山大学 A kind of movement similarity estimating method based on small-sample learning
CN109685121B (en) * 2018-12-11 2023-07-18 中国科学院苏州纳米技术与纳米仿生研究所 Training method of image retrieval model, image retrieval method and computer equipment
CN109685121A (en) * 2018-12-11 2019-04-26 中国科学院苏州纳米技术与纳米仿生研究所 Training method, image search method, the computer equipment of image encrypting algorithm
CN111325223B (en) * 2018-12-13 2023-10-24 中国电信股份有限公司 Training method and device for deep learning model and computer readable storage medium
CN111325223A (en) * 2018-12-13 2020-06-23 中国电信股份有限公司 Deep learning model training method and device and computer readable storage medium
CN109711443A (en) * 2018-12-14 2019-05-03 平安城市建设科技(深圳)有限公司 Floor plan recognition methods, device, equipment and storage medium neural network based
CN109815801A (en) * 2018-12-18 2019-05-28 北京英索科技发展有限公司 Face identification method and device based on deep learning
WO2020125404A1 (en) * 2018-12-19 2020-06-25 北京世纪好未来教育科技有限公司 Method and apparatus for constructing neural network and computer-readable medium
CN109711358A (en) * 2018-12-28 2019-05-03 四川远鉴科技有限公司 Neural network training method, face identification method and system and storage medium
WO2020147408A1 (en) * 2019-01-16 2020-07-23 平安科技(深圳)有限公司 Facial recognition model evaluation method and apparatus, and storage medium and computer device
CN111461152A (en) * 2019-01-21 2020-07-28 同方威视技术股份有限公司 Cargo detection method and device, electronic equipment and computer readable medium
CN111461152B (en) * 2019-01-21 2024-04-05 同方威视技术股份有限公司 Cargo detection method and device, electronic equipment and computer readable medium
CN109886186A (en) * 2019-02-18 2019-06-14 上海骏聿数码科技有限公司 A kind of face identification method and device
CN111723222A (en) * 2019-03-19 2020-09-29 Sap欧洲公司 Image search and training system
CN109948568A (en) * 2019-03-26 2019-06-28 东华大学 Embedded human face identifying system based on ARM microprocessor and deep learning
CN110147732A (en) * 2019-04-16 2019-08-20 平安科技(深圳)有限公司 Refer to vein identification method, device, computer equipment and storage medium
CN111832364A (en) * 2019-04-22 2020-10-27 普天信息技术有限公司 Face recognition method and device
CN111832364B (en) * 2019-04-22 2024-04-23 普天信息技术有限公司 Face recognition method and device
CN110147833A (en) * 2019-05-09 2019-08-20 北京迈格威科技有限公司 Facial image processing method, apparatus, system and readable storage medium storing program for executing
CN110213660A (en) * 2019-05-27 2019-09-06 广州荔支网络技术有限公司 Distribution method, system, computer equipment and the storage medium of program
CN110213660B (en) * 2019-05-27 2021-08-20 广州荔支网络技术有限公司 Program distribution method, system, computer device and storage medium
CN110674688A (en) * 2019-08-19 2020-01-10 深圳力维智联技术有限公司 Face recognition model acquisition method, system and medium for video monitoring scene
CN110674688B (en) * 2019-08-19 2023-10-31 深圳力维智联技术有限公司 Face recognition model acquisition method, system and medium for video monitoring scene
CN112580406A (en) * 2019-09-30 2021-03-30 北京中关村科金技术有限公司 Face comparison method and device and storage medium
WO2021073150A1 (en) * 2019-10-16 2021-04-22 平安科技(深圳)有限公司 Data detection method and apparatus, and computer device and storage medium
CN110765933A (en) * 2019-10-22 2020-02-07 山西省信息产业技术研究院有限公司 Dynamic portrait sensing comparison method applied to driver identity authentication system
CN110929099A (en) * 2019-11-28 2020-03-27 杭州趣维科技有限公司 Short video frame semantic extraction method and system based on multitask learning
CN111062338B (en) * 2019-12-19 2023-11-17 厦门商集网络科技有限责任公司 License and portrait consistency comparison method and system
CN111062338A (en) * 2019-12-19 2020-04-24 厦门商集网络科技有限责任公司 Certificate portrait consistency comparison method and system
CN111178249A (en) * 2019-12-27 2020-05-19 杭州艾芯智能科技有限公司 Face comparison method and device, computer equipment and storage medium
CN111368644A (en) * 2020-02-14 2020-07-03 深圳市商汤科技有限公司 Image processing method, image processing device, electronic equipment and storage medium
CN111368644B (en) * 2020-02-14 2024-01-05 深圳市商汤科技有限公司 Image processing method, device, electronic equipment and storage medium
CN111401257A (en) * 2020-03-17 2020-07-10 天津理工大学 Non-constraint condition face recognition method based on cosine loss
CN111401257B (en) * 2020-03-17 2022-10-04 天津理工大学 Face recognition method based on cosine loss under non-constraint condition
CN111401277A (en) * 2020-03-20 2020-07-10 深圳前海微众银行股份有限公司 Face recognition model updating method, device, equipment and medium
CN113538075A (en) * 2020-04-14 2021-10-22 阿里巴巴集团控股有限公司 Data processing method, model training method, device and equipment
CN112949780A (en) * 2020-04-21 2021-06-11 佳都科技集团股份有限公司 Feature model training method, device, equipment and storage medium
CN111709313A (en) * 2020-05-27 2020-09-25 杭州电子科技大学 Pedestrian re-identification method based on local and channel combination characteristics
CN111709313B (en) * 2020-05-27 2022-07-29 杭州电子科技大学 Pedestrian re-identification method based on local and channel combination characteristics
CN112052821A (en) * 2020-09-15 2020-12-08 浙江智慧视频安防创新中心有限公司 Fire fighting channel safety detection method, device, equipment and storage medium
CN112052821B (en) * 2020-09-15 2023-07-07 浙江智慧视频安防创新中心有限公司 Fire-fighting channel safety detection method, device, equipment and storage medium
CN112116021A (en) * 2020-09-27 2020-12-22 广州华多网络科技有限公司 Gem similarity measurement data processing method and related equipment
CN112492383A (en) * 2020-12-03 2021-03-12 珠海格力电器股份有限公司 Video frame generation method and device, storage medium and electronic equipment
CN113065495B (en) * 2021-04-13 2023-07-14 深圳技术大学 Image similarity calculation method, target object re-recognition method and system
CN113065495A (en) * 2021-04-13 2021-07-02 深圳技术大学 Image similarity calculation method, target object re-identification method and system
CN113283359A (en) * 2021-06-02 2021-08-20 万达信息股份有限公司 Authentication method and system for handheld certificate photo and electronic equipment
CN113344875A (en) * 2021-06-07 2021-09-03 武汉象点科技有限公司 Abnormal image detection method based on self-supervision learning
CN113269155A (en) * 2021-06-28 2021-08-17 苏州市科远软件技术开发有限公司 End-to-end face recognition method, device, equipment and storage medium
CN113486804B (en) * 2021-07-07 2024-02-20 科大讯飞股份有限公司 Object identification method, device, equipment and storage medium
CN113486804A (en) * 2021-07-07 2021-10-08 科大讯飞股份有限公司 Object identification method, device, equipment and storage medium
CN113642468A (en) * 2021-08-16 2021-11-12 中国银行股份有限公司 Identity authentication method and device
CN113688793A (en) * 2021-09-22 2021-11-23 万章敏 Training method of face model and face recognition system
WO2023093838A1 (en) * 2021-11-25 2023-06-01 北京字跳网络技术有限公司 Super-resolution image processing method and apparatus, and device and medium
US20230169792A1 (en) * 2021-11-26 2023-06-01 GenkiTek Technology Co., Ltd. System and method of mode selection face recognition with parallel cnns
US12080101B2 (en) * 2021-11-26 2024-09-03 GenkiTek Technology Co., Ltd. System and method of mode selection face recognition with parallel CNNS
CN114387457A (en) * 2021-12-27 2022-04-22 腾晖科技建筑智能(深圳)有限公司 Face intra-class interval optimization method based on parameter adjustment
CN114283483A (en) * 2021-12-31 2022-04-05 中国联合网络通信集团有限公司 Face recognition method and device
EP4383212A1 (en) * 2022-12-06 2024-06-12 Bundesdruckerei GmbH Authentication device and method for authenticating a person by means of an identification document associated with the person, and identification document and method for producing
CN116127298A (en) * 2023-02-22 2023-05-16 北京邮电大学 Small sample radio frequency fingerprint identification method based on triplet loss
CN116127298B (en) * 2023-02-22 2024-03-19 北京邮电大学 Small sample radio frequency fingerprint identification method based on triplet loss
CN118337532A (en) * 2024-06-13 2024-07-12 浙江鹏信信息科技股份有限公司 Zero trust-based traffic safety audit protection method and system and readable medium

Also Published As

Publication number Publication date
WO2019128367A1 (en) 2019-07-04
CN108009528B (en) 2020-04-07

Similar Documents

Publication Publication Date Title
CN108009528A (en) Face authentication method, device, computer equipment and storage medium based on Triplet Loss
KR102125379B1 (en) Apparatus and method for verifying identification based on deep learning
WO2020155939A1 (en) Image recognition method and device, storage medium and processor
CN109784186A (en) A kind of pedestrian recognition methods, device, electronic equipment and computer readable storage medium again
CN108182409A (en) Biopsy method, device, equipment and storage medium
CN106022317A (en) Face identification method and apparatus
CN104820924B (en) A kind of online safety payment system based on handwriting verification
CN108304858A (en) Fight specimen discerning model generating method, verification method and its system
CN105844206A (en) Identity authentication method and identity authentication device
CN109284733A (en) A kind of shopping guide's act of omission monitoring method based on yolo and multitask convolutional neural networks
CN104504362A (en) Face detection method based on convolutional neural network
CN109902202A (en) A kind of video classification methods and device
CN109886222A (en) Face identification method, neural network training method, device and electronic equipment
Longjam et al. Writer independent handwritten signature verification on multi-scripted signatures using hybrid CNN-BiLSTM: A novel approach
US20190147218A1 (en) User specific classifiers for biometric liveness detection
CN106709418A (en) Face identification method based on scene photo and identification photo and identification apparatus thereof
CN109635643A (en) A kind of fast human face recognition based on deep learning
CN110008664A (en) Authentication information acquisition, account-opening method, device and electronic equipment
CN109344709A (en) A kind of face generates the detection method of forgery image
CN110008861A (en) A kind of recognition methods again of the pedestrian based on global and local feature learning
Alfarsi et al. Techniques for face verification: Literature review
Xie et al. Writer-independent online signature verification based on 2D representation of time series data using triplet supervised network
Kumar et al. Enhancing Fingerprint Security Using CNN for Robust Biometric Authentication and Spoof Detection
Goud et al. Smart attendance notification system using SMTP with face recognition
CN115457622A (en) Method, system and equipment for detecting deeply forged faces based on identity invariant features

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant
GR01 Patent grant
PE01 Entry into force of the registration of the contract for pledge of patent right
PE01 Entry into force of the registration of the contract for pledge of patent right

Denomination of invention: Face authentication method, device, computer device and storage medium based on triplet loss

Effective date of registration: 20210621

Granted publication date: 20200407

Pledgee: Bank of China Limited by Share Ltd. Guangzhou Tianhe branch

Pledgor: GRG Banking Equipment Co.,Ltd.

Registration number: Y2021980004993

PC01 Cancellation of the registration of the contract for pledge of patent right
PC01 Cancellation of the registration of the contract for pledge of patent right

Date of cancellation: 20230302

Granted publication date: 20200407

Pledgee: Bank of China Limited by Share Ltd. Guangzhou Tianhe branch

Pledgor: GRG BANKING EQUIPMENT Co.,Ltd.

Registration number: Y2021980004993