CN110263632A - Palm print recognition method and device - Google Patents

Palm print recognition method and device

Info

Publication number
CN110263632A
Authority
CN
China
Prior art keywords
palm print
image
user
sample
training
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201910390272.XA
Other languages
Chinese (zh)
Inventor
惠慧
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Ping An Technology Shenzhen Co Ltd
Original Assignee
Ping An Technology Shenzhen Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Ping An Technology Shenzhen Co Ltd filed Critical Ping An Technology Shenzhen Co Ltd
Priority to CN201910390272.XA priority Critical patent/CN110263632A/en
Publication of CN110263632A publication Critical patent/CN110263632A/en
Pending legal-status Critical Current

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/08 Learning methods
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10 Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/12 Fingerprints or palmprints
    • G06V40/1347 Preprocessing; Feature extraction
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10 Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/12 Fingerprints or palmprints
    • G06V40/1365 Matching; Classification

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Human Computer Interaction (AREA)
  • General Health & Medical Sciences (AREA)
  • General Engineering & Computer Science (AREA)
  • Data Mining & Analysis (AREA)
  • Evolutionary Computation (AREA)
  • Biophysics (AREA)
  • Molecular Biology (AREA)
  • Computing Systems (AREA)
  • Computational Linguistics (AREA)
  • Biomedical Technology (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Artificial Intelligence (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Health & Medical Sciences (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Image Analysis (AREA)

Abstract

The present invention relates to the field of biometric identification technology. Embodiments of the present invention provide a palm print recognition method and device. The palm print recognition method includes: obtaining a hand image to be recognized; extracting a palm print region image corresponding to the hand image; and predicting, based on a palm print neural network model, a target user ID corresponding to the palm print region image, wherein the palm print neural network model is trained with triplet samples corresponding to user IDs, and the triplet samples include multiple mutually different training palm print region images under the same user ID, serving as positive palm print samples, and training palm print region images of other users not corresponding to that user ID, serving as negative palm print samples. By recognizing palm prints with a neural network model that has undergone secondary fine-tuning on positive and negative samples, different users whose similar palm print images differ only slightly can be identified and distinguished.

Description

Palm print recognition method and device
Technical field
The present invention relates to the field of biometric identification technology, and in particular to a palm print recognition method and device.
Background
As an emerging authentication technique, biometric recognition is currently one of the most promising high technologies in the world and an advanced international research topic. Palm print recognition is an important branch of biometric identification technology.
The palm lines of each person vary in depth, and noise from the blood color of the palm can also be considerable. For a population within a small sample range, recognition based on a lightweight neural network achieves relatively high accuracy; however, when the target recognition population is large, similar palm prints readily occur between different individuals, and an ordinary lightweight neural network cannot distinguish such similar palm prints, so palm print recognition performs poorly.
Therefore, how to identify and distinguish similar palm print images is a technical problem that the industry urgently needs to solve.
Summary of the invention
The purpose of the embodiments of the present invention is to provide a palm print recognition method and device, so as at least to identify and distinguish between similar palm print images.
To achieve the above purpose, in one aspect an embodiment of the present invention provides a palm print recognition method, comprising: obtaining a hand image to be recognized; extracting a palm print region image corresponding to the hand image; and predicting, based on a palm print neural network model, a target user ID corresponding to the palm print region image, wherein the palm print neural network model is trained with triplet samples corresponding to user IDs, and the triplet samples include multiple mutually different training palm print region images under the same user ID, serving as positive palm print samples, and training palm print region images of other users not corresponding to that user ID, serving as negative palm print samples.
In another aspect an embodiment of the present invention provides a palm print recognition device, comprising: a hand image acquisition unit for obtaining a hand image to be recognized; a palm print region extraction unit for extracting a palm print region image corresponding to the hand image; and a prediction unit for predicting, based on a palm print neural network model, a target user ID corresponding to the palm print region image, wherein the palm print neural network model is trained with triplet samples corresponding to user IDs, and the triplet samples include multiple mutually different training palm print region images under the same user ID, serving as positive palm print samples, and training palm print region images of other users not corresponding to that user ID, serving as negative palm print samples.
In another aspect an embodiment of the present invention provides a computer device, including a memory and a processor, the memory storing a computer program, wherein the processor realizes the steps of the above method of the present application when executing the computer program.
In another aspect an embodiment of the present invention provides a computer storage medium on which a computer program is stored, wherein the computer program realizes the steps of the above method of the present application when executed by a processor.
Through the above technical solutions, palm print recognition technology is combined with neural network technology, so that the target user matching a detected palm print can be identified quickly and accurately. In addition, because the palm print neural network model is trained with triplet samples containing positive and negative palm print samples corresponding to user IDs, and undergoes secondary fine-tuning on those positive and negative samples, the palm print neural network becomes more capable of recognizing palm prints of different individuals and of the same individual and can distinguish between palm print images that differ only slightly, which gives the solution very broad application scenarios.
Other features and advantages of the embodiments of the present invention are described in detail in the detailed description section below.
Brief description of the drawings
The accompanying drawings are provided for a further understanding of the embodiments of the present invention and constitute a part of the specification. Together with the following detailed description, they serve to explain the embodiments of the present invention, but do not limit them. In the drawings:
Fig. 1 is a flowchart of the palm print recognition method of one embodiment of the present invention;
Fig. 2 is a schematic flow diagram of extracting the palm print region image corresponding to the hand image in the palm print recognition method of one embodiment of the present invention;
Fig. 3 is a schematic flow diagram of the training process for the palm print neural network model in the palm print recognition method of one embodiment of the present invention;
Fig. 4 is a schematic diagram of the principle and flow of the palm print recognition method of one embodiment of the present invention;
Fig. 5A is an example of a palm print region image corresponding to a first user ID that can be recognized by the palm print recognition method of one embodiment of the present invention;
Fig. 5B is an example of a palm print region image, similar to that of the first user ID, corresponding to a second user ID that can be recognized by the palm print recognition method of one embodiment of the present invention;
Fig. 6 is a structural block diagram of the palm print recognition device of one embodiment of the present invention;
Fig. 7 is a structural block diagram of the palm print recognition device of another embodiment of the present invention;
Fig. 8 is a structural block diagram of a physical apparatus configured with the palm print recognition device of one embodiment of the present invention.
Detailed description of the embodiments
The embodiments of the present invention are described in detail below with reference to the accompanying drawings and specific embodiments. It should be understood that the specific embodiments described here are only intended to illustrate and explain the embodiments of the present invention, and are not intended to limit them.
As shown in Fig. 1, the palm print recognition method of one embodiment of the present invention comprises:
S11, obtaining a hand image to be recognized.
Regarding the entity that implements the method of the embodiments of the present invention: on the one hand, it may be a dedicated integrated component, dedicated server, or dedicated terminal specialized for palm print recognition; on the other hand, it may also be a general-purpose server or terminal (such as a smartphone or tablet computer) that is equipped with a module for palm print recognition or configured with program code for palm print recognition, all of which fall within the protection scope of the present invention.
Regarding how the hand image is acquired, the camera of a terminal may be called to capture the hand image, or the hand image uploaded by a terminal or server may be received. As an example, a user may operate a terminal (for example, touch the screen of a mobile phone) to trigger the terminal to capture the hand image to be recognized; specifically, the operations performed by the terminal may be: first, receiving the user operation and generating a corresponding palm print recognition request; then, according to the palm print recognition request, starting the terminal camera module to capture the hand image to be recognized.
S12, extracting the palm print region image corresponding to the hand image.
The hand image may include the fingers and the palm, whereas only the palm print region used for palm print recognition needs to be preserved during recognition; the preprocessing operation of extracting the palm print region image therefore reduces the workload of subsequent palm print recognition.
Specifically, palm print key points in the hand image may be determined based on a key point detection technique, a palm print region may then be constructed based on the palm print key points, and the corresponding palm print region image may then be segmented from the hand image. The specific palm print key point detection technique is not restricted in the embodiments of the present invention; for example, it may be a palm print key point recognition technique based on OpenPose or Caffe.
Preferably, the palm print region image corresponding to the hand image may also be extracted through the following process, as shown in Fig. 2: S121, determining the palm print key points in the hand image based on a key point detection technique; S122, segmenting preliminary palm print lines from the hand image based on the palm print key points; S123, resampling the positions and directions of the preliminary palm print lines, and extracting and reinforcing the angle information and directions at the crossing points of the preliminary palm print lines; S124, generating the palm print region image from the reinforced preliminary palm print lines. By resampling the preliminary palm print lines corresponding to the key points, the key feature information of the palm print angles and directions is reinforced, ensuring that a high-quality palm print region image can be obtained.
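By way of illustration, the S121 to S124 flow might look like the following minimal Python sketch. The detect_palm_keypoints() helper is hypothetical (it could wrap an OpenPose- or Caffe-based hand keypoint model, which the description mentions only as options), the ROI size and the histogram-equalization step stand in for the "reinforcement" of line features, and none of the parameter values come from the patent.

```python
import cv2
import numpy as np

def detect_palm_keypoints(hand_image: np.ndarray) -> np.ndarray:
    """Hypothetical helper: returns an N x 2 array of palm print key points (x, y)."""
    raise NotImplementedError  # e.g. wrap an OpenPose/Caffe hand keypoint model here

def extract_palm_roi(hand_image: np.ndarray, out_size: int = 224) -> np.ndarray:
    """Sketch of S121-S124: key points -> palm region -> resampled, enhanced ROI."""
    keypoints = detect_palm_keypoints(hand_image)                    # S121
    x, y, w, h = cv2.boundingRect(keypoints.astype(np.int32))        # S122: palm region
    roi = hand_image[y:y + h, x:x + w]
    # S123: resample to a fixed size; a real system would also normalize
    # rotation from the key-point geometry before cropping.
    roi = cv2.resize(roi, (out_size, out_size), interpolation=cv2.INTER_LINEAR)
    # S124: emphasize line structure (a simple stand-in for reinforcing
    # crossing-point angles and directions).
    gray = cv2.cvtColor(roi, cv2.COLOR_BGR2GRAY)
    return cv2.equalizeHist(gray)
```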
S13, predicting, based on a palm print neural network model, the target user ID (identification) corresponding to the palm print region image, wherein the palm print neural network model is trained with triplet samples corresponding to user IDs, and the triplet samples include multiple mutually different training palm print region images under the same user ID, serving as positive palm print samples, and training palm print region images of other users not corresponding to that user ID, serving as negative palm print samples.
In the embodiments of the present invention, palm print recognition technology is combined with neural network technology, so that the target user matching the detected palm print can be identified quickly and accurately. In addition, because the palm print neural network model is trained with triplet samples containing positive and negative palm print samples corresponding to user IDs, the neural network model is compensated through a triplet loss for palm print recognition, and secondary fine-tuning is performed using the positive and negative samples, so that the palm print neural network becomes more capable of recognizing palm prints of different individuals and of the same individual and can distinguish between palm print images that differ only slightly.
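The secondary fine-tuning with a triplet loss can be pictured with a short PyTorch sketch: an embedding network is fine-tuned so that an anchor palm print is pulled toward a positive sample of the same user ID and pushed away from a negative sample of another user. PalmEmbeddingNet, its layer sizes, the 0.2 margin, and fine_tune_step are illustrative assumptions, not the patented model.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class PalmEmbeddingNet(nn.Module):
    """Illustrative lightweight embedding network producing 128-d embeddings."""

    def __init__(self, embedding_dim: int = 128):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.fc = nn.Linear(64, embedding_dim)

    def forward(self, x):
        z = self.features(x).flatten(1)
        # L2-normalize so distances are compared on the unit hypersphere.
        return F.normalize(self.fc(z), dim=1)

def fine_tune_step(model, optimizer, anchor, positive, negative, margin=0.2):
    """One secondary fine-tuning step on a batch of (anchor, positive, negative) triplets."""
    loss_fn = nn.TripletMarginLoss(margin=margin)
    loss = loss_fn(model(anchor), model(positive), model(negative))
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```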
It should be noted that the different user IDs involved in the triplet samples may be selected arbitrarily or deliberately. However, the correlation among the triplet samples used for training has a certain influence on the effect achieved by triplet-loss training; that is, if the similarity among the selected positive samples is too high, or the similarity between positive samples and negative samples is too low, the secondary fine-tuning may not work well.
Therefore, as shown in Fig. 3, one embodiment of the present invention further provides a training process for the palm print neural network model, comprising:
S31, embedding the different palm print region images under multiple user IDs into a Euclidean space, where in the Euclidean space the palm print region image of a single individual is close to the other palm print region images of that individual and far from the palm print images of other individuals.
Specifically, the triplets corresponding to the different palm print region images under multiple user IDs may first be determined, where x_i^a denotes a palm print region image of the first user ID to be trained (the anchor), x_i^p denotes a different palm print region image under the selected first user ID, and x_i^n denotes a selected palm print region image of a user ID different from the first user ID. A Euclidean-space transformation is then applied to the different palm print region images, and the palm print region images embedded into the Euclidean space satisfy the following condition (the standard triplet constraint, where f(x) is the embedding used to measure Euclidean distances and α is a margin):
||f(x_i^a) - f(x_i^p)||_2^2 + α < ||f(x_i^a) - f(x_i^n)||_2^2
In the present embodiment, by applying the Euclidean-space transformation to the palm print region images, the feature similarity between the palm print region images used for training is quantified as Euclidean distance: the Euclidean distance between palm print region images under the same user ID is generally small, while the Euclidean distance between palm print region images under different user IDs is generally large.
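Quantifying similarity as Euclidean distance amounts to computing squared L2 distances between embedded images, as in the small NumPy sketch below; the randomly generated vectors are placeholders standing in for real palm print embeddings.

```python
import numpy as np

def squared_l2(a: np.ndarray, b: np.ndarray) -> float:
    """Squared Euclidean distance ||f(a) - f(b)||_2^2 between two embeddings."""
    d = a - b
    return float(np.dot(d, d))

# Placeholder 128-d embeddings; in practice these come from the palm print network.
rng = np.random.default_rng(0)
f_anchor = rng.normal(size=128)
f_positive = f_anchor + 0.05 * rng.normal(size=128)   # same user ID: small distance
f_negative = rng.normal(size=128)                      # different user ID: large distance

assert squared_l2(f_anchor, f_positive) < squared_l2(f_anchor, f_negative)
```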
S32, screening the palm print region images under the same user ID whose Euclidean distance is below a first set threshold as the positive palm print sample set.
S33, screening the palm print region images under different user IDs whose Euclidean distance is greater than a second set threshold as the negative palm print sample set.
S34, training the palm print neural network model according to the positive palm print sample set and the negative palm print sample set.
The first set threshold and the second set threshold may be determined in advance through extensive experiments and verification, and are therefore not limited here. Through this threshold comparison, a positive palm print sample set whose positive samples are less similar to one another and a negative palm print sample set whose negative samples are more similar to the positive palm print samples are screened out, ensuring that the screened triplet samples have good accuracy and coverage and guaranteeing the effect of the secondary fine-tuning of the palm print neural network model.
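A sketch of the threshold screening in S32 and S33, assuming a precomputed pairwise distance matrix over the embedded images and a per-image user ID array; the threshold values themselves are placeholders, since the description leaves them to experiment.

```python
import numpy as np

def screen_triplet_pools(dist: np.ndarray, user_ids: np.ndarray,
                         first_threshold: float, second_threshold: float):
    """Build positive and negative sample pools from a pairwise distance matrix.

    dist[i, j]  : Euclidean distance between embedded images i and j
    user_ids[i] : user ID label of image i
    """
    same_id = user_ids[:, None] == user_ids[None, :]
    diff_id = ~same_id
    np.fill_diagonal(same_id, False)  # never pair an image with itself

    # S32: same user ID and distance below the first set threshold -> positive pairs
    positive_pairs = np.argwhere(same_id & (dist < first_threshold))
    # S33: different user IDs and distance above the second set threshold -> negative pairs
    negative_pairs = np.argwhere(diff_id & (dist > second_threshold))
    return positive_pairs, negative_pairs
```

The two pools returned here feed S34, where pairs are assembled into (anchor, positive, negative) triplets for training.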
In some embodiments, because the amount of training data needed by the palm print neural network is very large, the palm print neural network may be trained through batched screening and batched training. For example, if there are 100,000 palm print region images for training the palm print neural network, 100 batches may be set, with 1,000 images of data corresponding to the training of each batch. Correspondingly, the screening operation for the triplet samples is carried out in a batched manner, and the batched screening operation generates corresponding micro-batch palm print samples, where a micro-batch palm print sample includes positive palm print samples and negative palm print samples labelled with user IDs; batched training of the palm print neural network model can then be carried out according to the micro-batch palm print samples.
Specifically, the palm print region images corresponding to the batch-processing quantity may be determined as a micro-batch image subset; the argmin and argmax corresponding to the micro-batch image subset are then computed, namely the hardest negative argmin over x_i^n of ||f(x_i^a) - f(x_i^n)||_2^2 and the hardest positive argmax over x_i^p of ||f(x_i^a) - f(x_i^p)||_2^2; afterwards, according to the argmin and argmax, for example by comparing the argmin with the second set value and the argmax with the first set value, the corresponding micro-batch palm print samples are screened out. In this way, images of different users with relatively high similarity and images of the same user with relatively low similarity can be selected within the micro-batch image subset as the micro-batch palm print samples, so as to play a compensating role in the secondary training during model training. The trained palm print neural network model can thus associate palm print region images of the same person whose features differ considerably, and can better discriminate relatively similar palm print region images of different people.
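The per-micro-batch argmin/argmax screening can be sketched as follows: within one batch, for each anchor the hardest positive (largest same-ID distance) and hardest negative (smallest different-ID distance) are located. The comparison against the two set values is only schematic, since their values and the direction of the comparison are not given here.

```python
import numpy as np

def mine_micro_batch(dist: np.ndarray, user_ids: np.ndarray,
                     first_set_value: float, second_set_value: float):
    """Screen a micro-batch: hardest positive/negative per anchor (sketch)."""
    triplets = []
    for a in range(len(user_ids)):
        same = user_ids == user_ids[a]
        same[a] = False                      # exclude the anchor itself
        diff = user_ids != user_ids[a]
        if not same.any() or not diff.any():
            continue                         # need a positive and a negative in the batch
        # argmax over positives of ||f(x_a) - f(x_p)||^2
        p = np.where(same)[0][np.argmax(dist[a, same])]
        # argmin over negatives of ||f(x_a) - f(x_n)||^2
        n = np.where(diff)[0][np.argmin(dist[a, diff])]
        # Schematic use of the two set values mentioned in the description.
        if dist[a, p] <= first_set_value and dist[a, n] >= second_set_value:
            triplets.append((a, p, n))
    return triplets
```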
As shown in Fig. 4, the principle and flow of the palm print recognition method of one embodiment of the present invention disclose a method of recognizing palm prints based on a neural network model compensated by a triplet loss. It mainly includes performing secondary fine-tuning training on the palm print neural network model by means of triplet loss compensation, so that the palm print neural network becomes more capable of recognizing palm prints of different individuals and of the same individual and can distinguish between palm print images with only slight differences.
A neural network triplet model is applied in the present embodiment, which involves both a training stage and an application stage of the neural network model; the embodiments of the present invention are directed in particular to the training stage of the neural network model.
(1) Training stage of the triplet model
S41, extracting palm print region images and palm print feature information.
Specifically, this may include:
1) obtaining training hand images;
Specifically, images related to hands may be collected from photographs taken with a camera (for example, a mobile phone camera); they may be shot manually or downloaded from the Internet through keyword searches. Then the hand region is identified by an object recognition technique (for example, a semantic segmentation model), and the hand region in the image is cropped to obtain the hand image.
2) preprocessing the hand image to obtain the palm print region image and the palm print line feature information corresponding to the hand image;
Specifically, the palm print key point information of the hand is found by a key point detection technique, and the palm print region corresponding to the hand image is determined from the palm print key point information, so as to obtain the palm print region image corresponding to the hand image. The obtained palm print region image is then enhanced, and the processed palm print region image is stored in a palm print library. As an example, a palm print line image may also be obtained by thresholding; the positions and directions of the resampled points are then obtained by resampling the palm print line image, so as to construct the preliminary palm print lines. Preferably, the angle information and directions at the crossing points of the preliminary palm print lines also need to be extracted and reinforced, so as to strengthen the feature information of the palm print lines.
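One way the thresholding route could look is the OpenCV sketch below: an inverted adaptive threshold isolates the dark palm creases, a light morphological cleanup stands in for the preliminary palm print lines, and local gradient orientation stands in for the direction information that is "reinforced". All parameter values and the choice of operators are illustrative assumptions.

```python
import cv2
import numpy as np

def palm_line_image(palm_roi_gray: np.ndarray):
    """Sketch: threshold palm lines and estimate their local directions."""
    # Dark creases become foreground under an inverted adaptive threshold.
    lines = cv2.adaptiveThreshold(palm_roi_gray, 255,
                                  cv2.ADAPTIVE_THRESH_GAUSSIAN_C,
                                  cv2.THRESH_BINARY_INV, 15, 4)
    # Morphological opening as a rough cleanup of the preliminary palm print lines.
    kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (3, 3))
    lines = cv2.morphologyEx(lines, cv2.MORPH_OPEN, kernel)

    # Local orientation (degrees) from image gradients; crossing-point angles
    # could be read off where several orientations meet.
    gx = cv2.Sobel(palm_roi_gray, cv2.CV_32F, 1, 0, ksize=3)
    gy = cv2.Sobel(palm_roi_gray, cv2.CV_32F, 0, 1, ksize=3)
    orientation = np.degrees(np.arctan2(gy, gx))
    return lines, orientation
```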
S42, training the triplet model based on triplet loss compensation.
In the embodiments of the present invention, a batch of images from the same person is chosen, palm prints from different people are then chosen, and triplets are formed; secondary fine-tuning training is performed on this combined selection. As an example, the palm prints of 20 people are chosen for each training step, with 4 palm print pictures chosen per person, so that 4*3/2*19*4 tuples can be formed each time; a suitable set of tuples is then screened out with the triplet medium-difficulty (semi-hard) screening method to form one batch for training.
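As a check on the 4*3/2*19*4 figure: with 4 images per person, each person contributes 4*3/2 = 6 anchor-positive pairs, and the other 19 people contribute 19*4 = 76 candidate negatives, giving 6 * 76 = 456 candidate triplets. Reading this count as "per selected person" is an interpretation; the description only states the product.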
The palm print region images x may be embedded into a d-dimensional Euclidean space. Through this vector-space transformation, the output guarantees that the palm print image of a single individual is close to the other images of that individual and far from the images of other individuals.
The selection of triplet samples (triplets) is extremely important to the convergence of the model and should meet the following requirements:
For an anchor image x_i^a, different pictures x_i^p of the same individual need to be selected so that there is a certain difference between these pictures, and pictures x_i^n of different individuals also need to be selected. In actual training, computing the argmin (the set of arguments corresponding to the minimum of the objective) and the argmax (the set of arguments corresponding to the maximum of the objective) across all training samples is impractical, and mislabelled images could also cause convergence difficulties in training.
Here, the quantities concerned are the hard positive argmax over x_i^p of ||f(x_i^a) - f(x_i^p)||_2^2 and the hard negative argmin over x_i^n of ||f(x_i^a) - f(x_i^n)||_2^2.
Therefore, in actual training, samples may be screened in the following way: every n steps, compute the argmin and argmax of a subset and generate triplets online, i.e. screen positive/negative samples within each small mini-batch. Here we prefer the online method of generating triplets. We select large-sample mini-batches (1,800 samples per batch) to increase the sample size of each batch. In each mini-batch, we select 40 palm print pictures for a single individual as positive samples and randomly screen other palm print pictures as negative samples. An improper choice of negative samples may also cause training to fall into a local minimum too early.
To avoid this, the following condition (a semi-hard screening rule) is used to help screen negative samples: select negatives x_i^n satisfying ||f(x_i^a) - f(x_i^p)||_2^2 < ||f(x_i^a) - f(x_i^n)||_2^2.
Forty palm print pictures are selected for a single individual as positive samples, and other palm print pictures are randomly screened as negative samples; an improper choice of negative samples may cause training to fall into a local minimum too early. Embedding the palm print images directly into the Euclidean space in this way thus facilitates the training process.
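A sketch of selecting negatives under the condition just given, assuming squared distances have already been computed within the mini-batch; the optional margin upper bound (the classic semi-hard choice) and its value are assumptions, not stated in the description.

```python
import numpy as np

def semi_hard_negatives(d_ap: float, d_an: np.ndarray, margin: float = 0.2) -> np.ndarray:
    """Indices of negatives with d(a,p) < d(a,n) < d(a,p) + margin.

    d_ap : squared distance between the anchor and its chosen positive
    d_an : squared distances between the anchor and each candidate negative
    Dropping the upper bound leaves exactly the condition in the text above.
    """
    mask = (d_an > d_ap) & (d_an < d_ap + margin)
    return np.where(mask)[0]
```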
Specifically, after sample selection is completed, training is carried out as follows: the micro-batch (mini-batch) palm print samples are input into the neural network for batched training, where a micro-batch palm print sample includes positive palm print samples and negative palm print samples labelled with user IDs. During batched training, the user IDs are associated and grouped with the positive and negative palm print samples, so that the triplet model, having been trained with positive and negative samples, can distinguish similar palm prints and thereby distinguish different user IDs well.
(2) Application stage of the triplet model
Based on the application of the triplet-loss-compensated neural network model, the differences between individuals with similar palm prints can be discriminated, realizing the recognition of different palm print images with only small gaps between them. Moreover, the triplet-loss-compensated neural network model also has a low data-volume requirement for feature samples; for example, the feature file size can be reduced from 50,176 floating-point numbers to 128 floating-point numbers, which requires less memory, so the model can be applied on a mobile phone, for example integrated into a mobile phone app.
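The memory saving quoted here is easy to make concrete, assuming 4-byte single-precision floats (the description only gives the float counts):

```python
BYTES_PER_FLOAT32 = 4  # assumption: single-precision floats

raw_feature = 50_176 * BYTES_PER_FLOAT32   # 200_704 bytes, about 196 KiB per template
embedding = 128 * BYTES_PER_FLOAT32        # 512 bytes per template
print(raw_feature / embedding)             # 392.0x smaller per stored template
```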
S43, a user operation calls the terminal camera module to photograph the palm and obtain a human palm image.
Specifically, S43 may correspond to different application scenarios; for example, a user may open a mobile phone app and call the corresponding terminal camera module through a specific user operation.
S44, preprocessing the palm image to obtain the palm print region image and the palm print line feature information corresponding to the palm image.
For the details of S44, reference may be made to the related description in S41 above.
S45, calling the triplet model to recognize the palm print feature information indicated by the palm print region image.
Based on the triplet model, the target palm print matching the palm print indicated by the palm print region image to be recognized is predicted. Specifically, the triplet-loss-compensated neural network derives the palm print lines that best match the palm print line feature information to be recognized, and determines them as the target palm print lines.
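At inference time, matching reduces to a nearest-neighbour search over enrolled embeddings. A minimal sketch, assuming each enrolled user ID is stored with one or more 128-d embeddings and that an unknown probe is rejected when its best distance exceeds a threshold; the single flat gallery and the rejection threshold are assumptions.

```python
import numpy as np

def identify(probe: np.ndarray, gallery: np.ndarray, gallery_ids: list,
             reject_threshold: float = 1.0):
    """Return the user ID whose enrolled embedding is closest to the probe.

    probe       : (128,) embedding of the palm print to identify
    gallery     : (N, 128) enrolled embeddings
    gallery_ids : N user IDs aligned with the gallery rows
    """
    d2 = np.sum((gallery - probe) ** 2, axis=1)   # squared Euclidean distances
    best = int(np.argmin(d2))
    if d2[best] > reject_threshold:               # assumed open-set rejection rule
        return None
    return gallery_ids[best]
```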
In the embodiments of the present invention, the triplet model trained with triplet loss compensation, having accumulated abundant samples of different individuals and multiple positive and negative samples of the same individual, can identify and distinguish between similar palm print images, and can realize the identification and differentiation of similar palm prints of different individuals as shown in Fig. 5A and Fig. 5B. In addition, the feature images used by the triplet model require very little memory, so the model has very broad application scenarios and can, for example, be applied on a mobile phone.
As shown in Fig. 6, the palm print recognition device of one embodiment of the present invention comprises:
a hand image acquisition unit 601, configured to obtain a hand image to be recognized;
a palm print region extraction unit 602, configured to extract the palm print region image corresponding to the hand image;
a prediction unit 603, configured to predict, based on a palm print neural network model, the target user ID corresponding to the palm print region image, wherein the palm print neural network model is trained with triplet samples corresponding to user IDs, and the triplet samples include multiple mutually different training palm print region images under the same user ID, serving as positive palm print samples, and training palm print region images of other users not corresponding to that user ID, serving as negative palm print samples.
In a specific application scenario, as shown in Fig. 7, the device further includes a training unit 604, which is configured to embed the different palm print region images under multiple user IDs into a Euclidean space, where in the Euclidean space the palm print region image of a single individual is close to the other palm print region images of that individual and far from the palm print images of other individuals; to screen the palm print region images under the same user ID whose Euclidean distance is below a first set threshold as the positive palm print sample set; to screen the palm print region images under different user IDs whose Euclidean distance is greater than a second set threshold as the negative palm print sample set; and to train the palm print neural network model according to the positive palm print sample set and the negative palm print sample set.
Preferably, the screening operation for the triplet samples is carried out in a batched manner, and the batched screening operation generates corresponding micro-batch palm print samples, where a micro-batch palm print sample includes positive palm print samples and negative palm print samples labelled with user IDs; the training unit 604 is further configured to perform batched training on the palm print neural network model according to the micro-batch palm print samples.
Preferably, the palm print region images embedded into the Euclidean space satisfy the following condition: ||f(x_i^a) - f(x_i^p)||_2^2 + α < ||f(x_i^a) - f(x_i^n)||_2^2, where f(x) denotes the embedding used to measure Euclidean distances, x_i^a denotes a palm print region image of the first user ID to be trained, x_i^p denotes a different palm print region image under the selected first user ID, and x_i^n denotes a selected palm print region image different from the first user ID.
The training unit 604 is further configured to perform a screening operation including the following batch processing: determining the palm print region images corresponding to the batch-processing quantity as a micro-batch image subset; computing the argmin and argmax corresponding to the micro-batch image subset, namely argmin over x_i^n of ||f(x_i^a) - f(x_i^n)||_2^2 and argmax over x_i^p of ||f(x_i^a) - f(x_i^p)||_2^2; and screening out the corresponding micro-batch palm print samples according to the argmin and argmax.
The palm print region extraction unit 602 is further configured to determine the palm print key points in the hand image based on a key point detection technique; segment preliminary palm print lines from the hand image based on the palm print key points; resample the positions and directions of the preliminary palm print lines, and extract and reinforce the angle information and directions at the crossing points of the preliminary palm print lines; and generate the palm print region image from the reinforced preliminary palm print lines.
The hand image acquisition unit 601 is further configured to receive a user operation and generate a corresponding palm print recognition request, and to start the terminal camera module according to the palm print recognition request to capture the hand image to be recognized.
It should be noted that, for other corresponding descriptions of the functional units involved in the palm print recognition device provided by the embodiments of the present invention, reference may be made to the corresponding descriptions for Figs. 1 to 5B, which are not repeated here.
Based on the method shown in Figs. 1 to 5B, correspondingly, an embodiment of the present invention further provides a storage device on which a computer program is stored; when the program is executed by a processor, the above palm print recognition method shown in Figs. 1 to 5B is realized.
Based on the embodiments of the method shown in Figs. 1 to 5B and of the virtual devices shown in Figs. 6 and 7, in order to achieve the above purpose, as shown in Fig. 8, an embodiment of the present invention further provides a physical apparatus 80 of the palm print recognition device, the physical apparatus 80 including a storage device 801 and a processor 802; the storage device 801 is configured to store a computer program; and the processor 802 is configured to execute the computer program to realize the above palm print recognition method shown in Figs. 1 to 5B.
By applying the technical solution of the present invention, palm print recognition technology is combined with neural network technology, so that the target user matching a detected palm print can be identified quickly and accurately. In addition, because the palm print neural network model is trained with triplet samples containing positive and negative palm print samples corresponding to user IDs, the neural network model is compensated through a triplet loss to recognize palm prints, and secondary fine-tuning is performed using the positive and negative samples, so that the palm print neural network becomes more capable of recognizing palm prints of different individuals and of the same individual, can distinguish between palm print images that differ only slightly, and has very broad application scenarios.
Through the description of the above embodiments, those skilled in the art can clearly understand that the present application can be implemented by hardware, or by software plus a necessary general hardware platform. Based on this understanding, the technical solution of the present application can be embodied in the form of a software product, which can be stored in a non-volatile storage medium (such as a CD-ROM, USB flash drive, or removable hard disk) and includes several instructions for enabling a computer device (which may be a personal computer, a server, a network device, etc.) to execute the methods described in the implementation scenarios of the present application.
Those skilled in the art can understand that the accompanying drawings are only schematic diagrams of preferred implementation scenarios, and that the modules or processes in the drawings are not necessarily required for implementing the present application.
Those skilled in the art can understand that the modules in a device of an implementation scenario can be distributed in the device of the implementation scenario as described, or can be changed accordingly and located in one or more devices different from that implementation scenario. The modules of the above implementation scenarios can be combined into one module, or further split into multiple sub-modules.
The above serial numbers of the present application are for description only and do not represent the superiority or inferiority of the implementation scenarios.
What is disclosed above are only several specific implementation scenarios of the present application; however, the present application is not limited thereto, and any variation conceivable to those skilled in the art shall fall within the protection scope of the present application.

Claims (10)

1. A palm print recognition method, characterized by comprising:
obtaining a hand image to be recognized;
extracting a palm print region image corresponding to the hand image;
predicting, based on a palm print neural network model, a target user ID corresponding to the palm print region image, wherein the palm print neural network model is trained with triplet samples corresponding to user IDs, and the triplet samples include multiple mutually different training palm print region images under a same user ID, serving as positive palm print samples, and training palm print region images of other users not corresponding to the user ID, serving as negative palm print samples.
2. The method according to claim 1, wherein the method further includes a training process for the palm print neural network model, the training process for the palm print neural network model comprising:
embedding different palm print region images under multiple user IDs into a Euclidean space;
screening palm print region images under a same user ID whose Euclidean distance is below a first set threshold as a positive palm print sample set;
screening palm print region images under different user IDs whose Euclidean distance is greater than a second set threshold as a negative palm print sample set;
training the palm print neural network model according to the positive palm print sample set and the negative palm print sample set.
3. The method according to claim 2, wherein the screening operation for the triplet samples is carried out in a batched manner, and the batched screening operation generates corresponding micro-batch palm print samples, the micro-batch palm print samples including positive palm print samples and negative palm print samples labelled with user IDs, and wherein said training the palm print neural network model according to the positive palm print sample set and the negative palm print sample set includes:
performing batched training on the palm print neural network model according to the micro-batch palm print samples.
4. The method according to claim 3, wherein the different palm print region images embedded into the Euclidean space satisfy the following condition:
||f(x_i^a) - f(x_i^p)||_2^2 + α < ||f(x_i^a) - f(x_i^n)||_2^2, wherein f(x) denotes the embedding used to measure Euclidean distances, x_i^a denotes a palm print region image of a first user ID to be trained, x_i^p denotes a different palm print region image under the selected first user ID, and x_i^n denotes a selected palm print region image different from the first user ID.
5. The method according to claim 4, wherein the batched screening operation includes:
determining palm print region images corresponding to a batch-processing quantity as a micro-batch image subset;
computing the argmin and argmax corresponding to the micro-batch image subset, wherein the argmin is argmin over x_i^n of ||f(x_i^a) - f(x_i^n)||_2^2 and the argmax is argmax over x_i^p of ||f(x_i^a) - f(x_i^p)||_2^2;
screening out the corresponding micro-batch palm print samples according to the argmin and the argmax.
6. The method according to claim 1, wherein said extracting the palm print region image corresponding to the hand image includes:
determining palm print key points in the hand image based on a key point detection technique;
segmenting preliminary palm print lines from the hand image based on the palm print key points;
resampling positions and directions of the preliminary palm print lines, and extracting and reinforcing angle information and directions at crossing points of the preliminary palm print lines;
generating the palm print region image from the reinforced preliminary palm print lines.
7. The method according to claim 1, wherein said obtaining the hand image to be recognized includes:
receiving a user operation, and generating a corresponding palm print recognition request;
starting a terminal camera module according to the palm print recognition request to capture the hand image to be recognized.
8. A palm print recognition device, characterized by comprising:
a hand image acquisition unit, configured to obtain a hand image to be recognized;
a palm print region extraction unit, configured to extract a palm print region image corresponding to the hand image;
a prediction unit, configured to predict, based on a palm print neural network model, a target user ID corresponding to the palm print region image, wherein the palm print neural network model is trained with triplet samples corresponding to user IDs, and the triplet samples include multiple mutually different training palm print region images under a same user ID, serving as positive palm print samples, and training palm print region images of other users not corresponding to the user ID, serving as negative palm print samples.
9. A computer device, characterized by comprising a memory and a processor, the memory storing a computer program, wherein the processor realizes the steps of the method of any one of claims 1 to 7 when executing the computer program.
10. A computer storage medium, characterized in that a computer program is stored thereon, wherein the computer program realizes the steps of the method of any one of claims 1 to 7 when executed by a processor.
CN201910390272.XA 2019-05-10 2019-05-10 Palm grain identification method and device Pending CN110263632A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910390272.XA CN110263632A (en) 2019-05-10 2019-05-10 Palm grain identification method and device

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201910390272.XA CN110263632A (en) 2019-05-10 2019-05-10 Palm grain identification method and device

Publications (1)

Publication Number Publication Date
CN110263632A true CN110263632A (en) 2019-09-20

Family

ID=67913046

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910390272.XA Pending CN110263632A (en) 2019-05-10 2019-05-10 Palm grain identification method and device

Country Status (1)

Country Link
CN (1) CN110263632A (en)

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113515988A (en) * 2020-07-09 2021-10-19 腾讯科技(深圳)有限公司 Palm print recognition method, feature extraction model training method, device and medium
WO2022007559A1 (en) * 2020-07-09 2022-01-13 腾讯科技(深圳)有限公司 Palm print recognition method, feature extraction model training method, device and medium
CN113515988B (en) * 2020-07-09 2023-05-23 腾讯科技(深圳)有限公司 Palm print recognition method, feature extraction model training method, device and medium
CN112132099A (en) * 2020-09-30 2020-12-25 腾讯科技(深圳)有限公司 Identity recognition method, palm print key point detection model training method and device

Similar Documents

Publication Publication Date Title
Oh et al. Adversarial image perturbation for privacy protection a game theory perspective
US10521643B2 (en) Systems and methods for performing fingerprint based user authentication using imagery captured using mobile devices
US8731249B2 (en) Face recognition using face tracker classifier data
US8081844B2 (en) Detecting orientation of digital images using face detection information
US8391645B2 (en) Detecting orientation of digital images using face detection information
US20160253359A1 (en) Efficient image matching for large sets of images
CN108376242A (en) For the characteristics of SSTA persistence descriptor of video
CN105956059A (en) Emotion recognition-based information recommendation method and apparatus
CN106056064A (en) Face recognition method and face recognition device
WO2007038612A2 (en) Apparatus and method for processing user-specified search image points
CN105678778B (en) A kind of image matching method and device
CN110263632A (en) Palm grain identification method and device
Li et al. Face anti-spoofing with deep neural network distillation
CN105069444B (en) A kind of gesture identifying device
CN109034002A (en) Entity book detection method and device
CN104765440B (en) Hand detection method and equipment
CN105407069B (en) Living body authentication method, apparatus, client device and server
CN111860346A (en) Dynamic gesture recognition method and device, electronic equipment and storage medium
US20120169860A1 (en) Method for detection of a body part gesture to initiate a web application
Su et al. Efficient and accurate face alignment by global regression and cascaded local refinement
CN109711287A (en) Face acquisition method and Related product
González et al. Towards refining ID cards presentation attack detection systems using face quality index
CN116361830A (en) Face recognition method, device and storage medium for secure encryption
JP4522229B2 (en) Image processing method and apparatus
Wang et al. Adversarial attack on fake-faces detectors under white and black box scenarios

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination