CN112200159A - Non-contact palm vein identification method based on improved residual error network - Google Patents

Non-contact palm vein identification method based on improved residual error network

Info

Publication number
CN112200159A
CN112200159A (application CN202011379940.8A; granted publication CN112200159B)
Authority
CN
China
Prior art keywords
residual error
palm vein
roi
error network
palm
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202011379940.8A
Other languages
Chinese (zh)
Other versions
CN112200159B (en)
Inventor
赵国栋
朱晓芳
李学双
张烜
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Holy Point Century Technology Co ltd
Original Assignee
Sichuan Shengdian Century Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Sichuan Shengdian Century Technology Co ltd
Priority to CN202011379940.8A
Publication of CN112200159A
Application granted
Publication of CN112200159B
Current legal status: Active
Anticipated expiration

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10 Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/04 Architecture, e.g. interconnection topology
    • G06N3/045 Combinations of networks
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/20 Image preprocessing
    • G06V10/25 Determination of region of interest [ROI] or a volume of interest [VOI]
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/40 Extraction of image or video features
    • G06V10/44 Local feature extraction by analysis of parts of the pattern, e.g. by detecting edges, contours, loops, corners, strokes or intersections; Connectivity analysis, e.g. of connected components
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10 Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/14 Vascular patterns

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • General Health & Medical Sciences (AREA)
  • Artificial Intelligence (AREA)
  • Computational Linguistics (AREA)
  • Data Mining & Analysis (AREA)
  • Evolutionary Computation (AREA)
  • Biomedical Technology (AREA)
  • Molecular Biology (AREA)
  • Computing Systems (AREA)
  • General Engineering & Computer Science (AREA)
  • Biophysics (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Human Computer Interaction (AREA)
  • Health & Medical Sciences (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Image Analysis (AREA)
  • Measurement Of The Respiration, Hearing Ability, Form, And Blood Characteristics Of Living Organisms (AREA)

Abstract

The invention relates to a non-contact palm vein identification method based on an improved residual error network, which comprises the following steps: 1) acquiring two infrared images of the same palm; 2) locating the ROI region of the palm; 3) palm vein registration; 4) palm vein verification; 5) palm vein verification judgment: an identification threshold T is set and the distance between the feature vector and the registration template is calculated; if the distance is less than the threshold T, palm vein authentication succeeds, otherwise it fails. The invention provides an improved residual error network structure that extracts texture features of the input samples at different scales during training, so that the trained model can extract multi-scale texture information, the extracted feature vectors express the input sample information more completely, and scaling and translation of the input samples are handled to a certain extent.

Description

Non-contact palm vein identification method based on improved residual error network
Technical Field
The invention belongs to the technical field of biometric feature recognition and deep learning, and particularly relates to a non-contact palm vein recognition method based on an improved residual error network.
Background
The palm vein is a biometric trait that is difficult to forge: it carries rich, unique and stable identity information, offers distinct advantages in terms of security, and has great market potential. Identity can readily be verified with contact-type devices using biometric traits such as the palm vein; however, contact-type devices depend on the user's habit of placing the palm on the device, so first-time users often have a poor experience. When contact-type devices are used in public places, the risk of disease transmission through the shared surface increases, which easily causes user reluctance. Replacing traditional contact acquisition with non-contact biometric identification is therefore becoming a major trend.
Existing palm vein identification methods use traditional texture, SIFT and multi-directional filtering features for identity recognition, for example the palm vein identification method fusing texture features and scale-invariant features disclosed in Chinese patent CN201710222874.5. However, vein images acquired in a non-contact manner suffer from large-angle rotation, translation and scaling, so the recognition rate is not ideal and recognition across many users' palms cannot be achieved. Other palm vein recognition methods use a shallow convolutional neural network to extract palm vein features for identity recognition, but the extracted features lack discriminative power, so the generalization ability of the model is poor and the recognition rate is low.
Disclosure of Invention
The technical problem to be solved by the invention is to provide a non-contact palm vein identification method based on an improved residual error network, addressing the unsatisfactory recognition rate caused by poor generalization under the large-angle rotation, translation and scaling that occur in vein images acquired in a non-contact manner.
In order to solve the above technical problems, the invention provides the following technical solution:
the invention relates to a non-contact palm vein identification method based on an improved residual error network, which comprises the following steps:
1) acquiring two infrared images of the same palm with a non-contact device, denoted pic1 and pic2;
2) locating the ROI region of the palm: inputting the two infrared images pic1 and pic2 into a trained ROI detection deep learning model to obtain the corresponding ROI position information, and cropping the ROI images of the two infrared images according to that position information;
3) palm vein registration: performing data enhancement on the ROI image of infrared image pic1, inputting it into the trained improved residual error network palm vein recognition model for feature extraction to obtain a feature vector, taking the feature vector as the registration template and storing it in the template library;
4) palm vein verification: performing data enhancement on the ROI image of infrared image pic2 and inputting it into the trained improved residual error network palm vein recognition model for feature extraction to obtain a feature vector;
5) palm vein verification judgment: setting an identification threshold T and calculating the distance between the feature vector and the registration template; if the distance is less than the threshold T, palm vein authentication succeeds, otherwise it fails;
in steps 3) and 4), the improved residual error network palm vein recognition model refers to a palm vein recognition model in which the structure of the residual module is modified and the loss function of the residual network is modified;
the residual module structure is modified by adding convolution mappings at 2 additional scales to the ResNet50 network structure, i.e. adding extraction of palm vein texture and edge information at different scales, while reducing the number of residual units, yielding the improved residual module structure;
the loss function of the residual network is modified by adding, on the basis of the ArcFace loss function, a constraint term on the intra-class center so that the intra-class distance becomes smaller, and by increasing a multiplying coefficient on the angle, which regulates the angle according to the input of the residual unit and the inter-class distance, yielding the improved loss function.
Preferably, the specific steps of locating the ROI region of the palm in step 2) include:
2.1) manually labeling the ROI regions of an image library of palm images to obtain labeling information;
2.2) inputting the image library and the corresponding ROI labeling information into a MobileNet_SSD network for training, and, when the training loss converges, selecting the model with the highest ROI detection rate as the ROI detection deep learning model;
2.3) inputting the two infrared images pic1 and pic2 into the trained ROI detection deep learning model respectively, and outputting the ROI position information detected in each image.
Preferably, in steps 3) and 4), the ROI image data enhancement consists of denoising the ROI image data with Gaussian filtering, followed by histogram equalization enhancement, normalization, scaling and rotation.
Preferably, the calculation formula of the improved residual module structure is:

H(x) = a·F(x)_1 + b·F(x)_2 + c·F(x)_3 + x

where H(x) is the output of the residual unit; F(x)_1, F(x)_2 and F(x)_3 are the outputs of the 3 convolution modules; x is the input of the residual unit, i.e. an identity mapping from input to output; and a, b and c are the weights of the outputs of the 3 convolution modules.
Preferably, when the residual module structure is modified, the number of residual units is reduced from the original 16 to 6.
Preferably, the calculation formulas of the modified residual error network include:

L = −(1/N)·Σ_{i=1..N} log( e^{s·cos(A·θ_(y_i,i) + m)} / ( e^{s·cos(A·θ_(y_i,i) + m)} + Σ_{j≠y_i} e^{s·cos θ_(j,i)} ) ) + (λ/2)·Σ_{i=1..N} ‖x_i − c_(y_i)‖²

Δc_j^t = Σ_{i=1..N} δ(y_i = j)·(c_j − x_i) / ( 1 + Σ_{i=1..N} δ(y_i = j) )

c_j^(t+1) = c_j^t − α·Δc_j^t

where L denotes the improved loss function; x_i denotes the feature vector of the i-th sample; y_i denotes the label of the class to which the i-th sample belongs; i denotes the i-th sample and j the j-th class, with 0 < i ≤ N, j = 1 or 2, and i, j integers; N is the total number of training samples; s is the product of the network weight and the output modulus; θ_(y_i,i) is the angle associated with class y_i and sample i; A is the multiplying coefficient of θ_(y_i,i); m is the parameter for decreasing the intra-class distance and increasing the inter-class distance; λ is the penalty degree of the penalty term; c_j is the center of the j-th class; t is the iteration number; α is the learning rate of the intra-class center update; and Δc_j^t is the variation of the j-th class center after the t-th iteration.
Preferably, the training method of the palm vein recognition model based on the improved residual error network is as follows: data enhancement is performed on the prepared palm vein ROI image data, which is then input into the improved residual error network for training; after the network converges, the trained palm vein recognition model based on the improved residual error network is obtained. In steps 3) and 4), the enhanced ROI image of infrared image pic1 and the enhanced ROI image of infrared image pic2 are each input into the trained palm vein recognition model based on the improved residual error network, and the output vector of the fully connected layer is obtained; this output vector is the extracted feature vector.
Preferably, the formula for calculating the distance between the feature vector and the registration template in step 5) is:

diff = √( Σ_{i=1..n} ( features1_i − tmpl1_i )² )

where diff is the distance between the feature vector and the registration template, features1 is the feature vector, tmpl1 is the registration template corresponding to features1 in the template library, i denotes the i-th dimension of the feature vector and the registration template, i is an integer with 0 < i ≤ 512, and n is the total dimensionality of the feature vector and the registration template.
Compared with the prior art, the technical solution provided by the invention has the following beneficial effects:
1. the invention provides an improved residual error network structure that extracts texture features of the input samples at different scales during training, so that the trained model can extract multi-scale texture information, the extracted feature vectors express the input sample information more completely, and scaling and translation of the input samples are handled to a certain extent;
2. the invention provides an improved loss function based on ArcFace, which adds a constraint term on the intra-class center to reduce the intra-class distance and increases a multiplying coefficient on the angle, regulating the angle according to the input of the residual unit and the inter-class distance; during feature extraction this makes samples of the same class more similar and samples of different classes more distinct, and it alleviates rotation of same-class input samples to a certain extent;
3. the non-contact palm vein recognition method based on the improved residual error network adopts both the improved residual module structure and the improved residual network loss function on top of the residual network; the trained model, used for non-contact palm vein recognition, achieves a higher recognition rate and robustness and adapts to multi-pose palm vein verification.
Drawings
Fig. 1 is a flow chart of a non-contact palm vein identification method based on an improved residual error network;
FIG. 2 shows left and right palm infrared images acquired in the present invention;
FIG. 3 is a flowchart of ROI detection deep learning model training and detection;
FIG. 4 is a palm vein ROI image after data enhancement according to the present invention;
FIG. 5 is a schematic diagram of an improved residual convolutional network in the method of the present invention;
FIG. 6 shows a modified residual error unit in the method of the present invention;
fig. 7 is a flow chart of training and recognition of a palm vein recognition model based on an improved residual error network.
Detailed Description
For a further understanding of the present invention, it is described in detail below with reference to examples, which are provided for illustration and are not intended to limit the scope of the invention.
Embodiment 1
Referring to fig. 1, the embodiment relates to a non-contact palm vein identification method based on an improved residual error network, which includes the following steps:
1) Two infrared images of the same palm, denoted pic1 and pic2, are acquired with a non-contact device, as shown in fig. 2; both images are 1280 pixels × 720 pixels;
2) locating the ROI region of the palm: the two infrared images pic1 and pic2 are each input into a trained ROI detection deep learning model (denoted model1) to obtain the corresponding ROI position information, and the ROI images of the two infrared images are cropped according to that position information. Referring to fig. 3, this specifically includes the following steps:
2.1) 10 images of each of 5000 palms are acquired with a non-contact device, and the ROI regions of this palm image library are manually labeled with the labelImage tool, yielding labeling information, i.e. xml files with the position information for the 5000 × 10 images;
2.2) the image library of 5000 × 10 palm images and the corresponding ROI labeling information (the xml files) are input into a MobileNet_SSD network for training; reasonable training hyperparameters are set and training is performed with the TensorFlow framework; when the training loss converges, the model with the highest ROI detection rate is selected as the ROI detection deep learning model;
2.3) the two infrared images pic1 and pic2 are each input into the trained ROI detection deep learning model, which outputs the ROI position information detected in each image.
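For illustration only, a minimal Python sketch of this detection-and-crop step is given below. The detect_palm_roi callable stands in for the trained MobileNet_SSD detector (model1) and its box format is an assumption; the patent only states that the model outputs ROI position information used for cropping.

```python
import numpy as np
from typing import Callable, Tuple

Box = Tuple[int, int, int, int]  # assumed format: (x_min, y_min, x_max, y_max) in pixels

def crop_palm_roi(infrared_img: np.ndarray,
                  detect_palm_roi: Callable[[np.ndarray], Box]) -> np.ndarray:
    """Run a trained ROI detector on an infrared palm image and crop the detected ROI.

    detect_palm_roi is assumed to wrap the trained MobileNet_SSD model (model1)
    and to return a single bounding box for the palm ROI.
    """
    x_min, y_min, x_max, y_max = detect_palm_roi(infrared_img)
    # clip the box to the image bounds before slicing (images are 1280 x 720 in this embodiment)
    h, w = infrared_img.shape[:2]
    x_min, x_max = max(0, x_min), min(w, x_max)
    y_min, y_max = max(0, y_min), min(h, y_max)
    return infrared_img[y_min:y_max, x_min:x_max].copy()

if __name__ == "__main__":
    # dummy detector returning a fixed central box, only to show the call pattern
    dummy_detector = lambda img: (440, 160, 840, 560)
    pic1 = np.zeros((720, 1280), dtype=np.uint8)  # stand-in for a captured infrared image
    roi1 = crop_palm_roi(pic1, dummy_detector)
    print(roi1.shape)  # (400, 400)
```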
3) Palm vein registration: data enhancement is performed on the ROI image of infrared image pic1; the ROI image data is denoised with Gaussian filtering and then processed with histogram equalization enhancement, normalization, scaling and rotation. The ROI images before and after processing are shown in fig. 4, where (a) is the original palm ROI image, (b) is image (a) after histogram equalization and size normalization, and (c) is image (b) after scaling and center rotation. The processed ROI image of infrared image pic1 is input into the trained improved residual error network palm vein recognition model for feature extraction to obtain a feature vector, which is taken as the registration template tmpl1 and stored in the template library.
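As an illustration, a minimal OpenCV/NumPy sketch of this preprocessing chain (Gaussian denoising, histogram equalization, size normalization, scaling and center rotation) follows; the kernel size, output size and intensity normalization are assumptions, since the embodiment does not fix them at this point.

```python
import cv2
import numpy as np

def enhance_roi(roi_gray: np.ndarray, out_size=(224, 224), scale=1.0, angle=0.0) -> np.ndarray:
    """Denoise, equalize, normalize, scale and rotate an 8-bit grayscale palm vein ROI."""
    # 1) Gaussian filtering to suppress sensor noise
    denoised = cv2.GaussianBlur(roi_gray, (5, 5), 0)
    # 2) histogram equalization to enhance the vein texture contrast
    equalized = cv2.equalizeHist(denoised)
    # 3) size normalization to a fixed network input size
    resized = cv2.resize(equalized, out_size, interpolation=cv2.INTER_LINEAR)
    # 4) scaling and center rotation (also used as augmentation during training)
    h, w = resized.shape
    M = cv2.getRotationMatrix2D((w / 2, h / 2), angle, scale)
    transformed = cv2.warpAffine(resized, M, (w, h), borderMode=cv2.BORDER_REPLICATE)
    # 5) intensity normalization to [0, 1] before feeding the network
    return transformed.astype(np.float32) / 255.0

if __name__ == "__main__":
    roi = np.random.randint(0, 256, (300, 300), dtype=np.uint8)
    print(enhance_roi(roi, angle=15.0, scale=1.05).shape)  # (224, 224)
```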
4) Palm vein verification: data enhancement is performed on the ROI image of infrared image pic2, i.e. the ROI image data is denoised with Gaussian filtering and then processed with histogram equalization enhancement, normalization, scaling and rotation; the result is input into the trained improved residual error network palm vein recognition model (denoted model2) for feature extraction, yielding the feature vector features1.
In the above steps 3) and 4), the improved residual error network palm vein recognition model refers to a palm vein recognition model in which the structure of the residual module is modified and the loss function of the residual network is modified. The residual module structure is modified by adding convolution mappings at 2 additional scales to the ResNet50 network structure (the structure of the improved residual convolutional network is shown in fig. 5), i.e. adding extraction of palm vein texture and edge information at different scales, while reducing the number of residual units from 16 to 6; this yields the improved residual module structure. The loss function of the residual network is modified by adding, on the basis of the ArcFace loss function, a constraint term on the intra-class center so that the intra-class distance becomes smaller, and by increasing a multiplying coefficient on the angle, which regulates the angle according to the input of the residual unit and the inter-class distance; this yields the improved loss function.
The calculation formula of the improved residual module structure is:

H(x) = a·F(x)_1 + b·F(x)_2 + c·F(x)_3 + x

where H(x) is the output of the residual unit; F(x)_1, F(x)_2 and F(x)_3 are the outputs of the 3 convolution modules; x is the input of the residual unit, i.e. an identity mapping from input to output; and a, b and c are the weights of the outputs of the 3 convolution modules. In this embodiment a = 1, b = 1 and c = 1, the convolution kernel sizes used are 3 × 3, 5 × 5 and 7 × 7 respectively, and the specific residual structure is shown in fig. 6.
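For illustration, a minimal PyTorch sketch of such a three-branch residual unit is given below. The patent text only specifies the kernel sizes (3 × 3, 5 × 5, 7 × 7), the branch weights a = b = c = 1 and the identity shortcut; the channel counts, normalization layers and activation placement are assumptions.

```python
import torch
import torch.nn as nn

class MultiScaleResidualUnit(nn.Module):
    """Residual unit with three parallel convolution branches of different kernel sizes.

    Implements H(x) = a*F1(x) + b*F2(x) + c*F3(x) + x with a = b = c = 1 as in this
    embodiment; BatchNorm/ReLU placement and channel widths are illustrative assumptions.
    """

    def __init__(self, channels: int, a: float = 1.0, b: float = 1.0, c: float = 1.0):
        super().__init__()
        self.a, self.b, self.c = a, b, c

        def branch(kernel_size: int) -> nn.Sequential:
            # "same" padding keeps the spatial size so the identity shortcut can be added directly
            return nn.Sequential(
                nn.Conv2d(channels, channels, kernel_size, padding=kernel_size // 2, bias=False),
                nn.BatchNorm2d(channels),
                nn.ReLU(inplace=True),
            )

        self.branch3, self.branch5, self.branch7 = branch(3), branch(5), branch(7)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # weighted sum of the multi-scale branches plus the identity mapping x
        return self.a * self.branch3(x) + self.b * self.branch5(x) + self.c * self.branch7(x) + x

if __name__ == "__main__":
    unit = MultiScaleResidualUnit(channels=64)
    print(unit(torch.randn(1, 64, 56, 56)).shape)  # torch.Size([1, 64, 56, 56])
```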
The calculation formula of the modified residual error network comprises:
Figure 489016DEST_PATH_IMAGE002
Figure 407293DEST_PATH_IMAGE003
Figure 590144DEST_PATH_IMAGE004
in the formula (I), the compound is shown in the specification,
Figure 808636DEST_PATH_IMAGE005
a loss function after the improvement is represented,
Figure 684319DEST_PATH_IMAGE006
a feature vector representing the ith sample,
Figure 89892DEST_PATH_IMAGE007
a label indicating a category to which the ith sample belongs, i indicates the ith sample,jdenotes the jth class, 0<i is not more than N, j =1 or 2, i, j are integers;Nrepresents the total number of training samples and the total number of training samples,sis the product of the network weight and the output modulus,
Figure 732226DEST_PATH_IMAGE008
namely, it is
Figure 555957DEST_PATH_IMAGE007
And i;Ais that
Figure 320651DEST_PATH_IMAGE008
Multiplying factor of (c); m is a parameter for decreasing the intra-class and increasing the inter-class distance;
Figure 26570DEST_PATH_IMAGE009
the penalty degree of the penalty item is represented,
Figure 800491DEST_PATH_IMAGE010
is shown as
Figure 744307DEST_PATH_IMAGE011
The center of the class is the center of the class,tthe number of iterations is indicated and,
Figure 679902DEST_PATH_IMAGE012
represents the learning rate of intra-class hub updates;
Figure 732172DEST_PATH_IMAGE013
representing the variation of the jth class after the t-th iteration.
In this embodiment, N = 6000 and m = 0.3. [The remaining parameter values (λ, α) appear only as formula images in the original document.]
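For illustration, a minimal PyTorch sketch of a loss of this general shape is given below: an ArcFace-style additive-angular-margin term with an angle multiplier A, plus a center-loss-style intra-class constraint weighted by λ whose class centers are updated with learning rate α. The exact angle regulation and center-update rule used in the patent are not fully specified in the text, so this is an assumption rather than the patented formula; the default values of s, λ, α and A are likewise placeholders.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class ArcMarginWithCenterLoss(nn.Module):
    """ArcFace-style margin softmax plus an intra-class center constraint (illustrative sketch)."""

    def __init__(self, feat_dim=512, num_classes=6000, s=64.0, m=0.3, A=1.0,
                 lam=0.01, alpha=0.5):
        super().__init__()
        self.s, self.m, self.A, self.lam, self.alpha = s, m, A, lam, alpha
        self.weight = nn.Parameter(torch.empty(num_classes, feat_dim))
        nn.init.xavier_uniform_(self.weight)
        # class centers for the intra-class constraint; updated manually, not by the optimizer
        self.register_buffer("centers", torch.zeros(num_classes, feat_dim))

    def forward(self, features: torch.Tensor, labels: torch.Tensor) -> torch.Tensor:
        # cosine of the angle between each feature and each class weight
        cosine = F.linear(F.normalize(features), F.normalize(self.weight))
        theta = torch.acos(cosine.clamp(-1.0 + 1e-7, 1.0 - 1e-7))
        # additive angular margin m with multiplier A applied only to the target-class angle
        target = torch.cos(self.A * theta + self.m)
        one_hot = F.one_hot(labels, num_classes=cosine.size(1)).bool()
        logits = self.s * torch.where(one_hot, target, cosine)
        arc_loss = F.cross_entropy(logits, labels)
        # center-loss-style constraint pulling each feature toward its class center
        center_loss = 0.5 * (features - self.centers[labels]).pow(2).sum(dim=1).mean()
        return arc_loss + self.lam * center_loss

    @torch.no_grad()
    def update_centers(self, features: torch.Tensor, labels: torch.Tensor) -> None:
        # c_j <- c_j - alpha * delta_c_j, averaged over the samples of class j in the batch
        for j in labels.unique():
            mask = labels == j
            delta = (self.centers[j] - features[mask]).sum(dim=0) / (1 + mask.sum())
            self.centers[j] -= self.alpha * delta
```

In training, update_centers would typically be called after each optimizer step, with λ playing the role of the penalty degree and α the intra-class center learning rate in the formulas above.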
the training method of the palm vein recognition model based on the improved residual error network comprises the following steps: and performing data enhancement on a plurality of prepared palm vein ROI image data, inputting the data into an improved residual error network for training, and obtaining a trained palm vein recognition model based on the improved residual error network after the network converges.
The specific training method of the improved residual error network palm vein recognition model is shown in fig. 7, and comprises the following steps:
1. acquiring 6000 palm x 20 infrared palm images by using a non-contact device;
2. inputting the 6000 palm × 20 frames into an ROI detection deep learning model (recorded as model 1) respectively for detection, and cutting according to the output position information to obtain palm vein ROI images;
3. the ROI image is subjected to data enhancement, and the data enhancement specifically comprises the following steps: gaussian denoising, histogram equalization enhancement, mean normalization, scale normalization, random scaling of plus or minus 10 pixels, and random rotation within the angle range of [ -45 °, +45 ° ]; after enhancement, 6000 palm x 100 palm vein ROI image data are obtained;
4. inputting 6000 palm 100 images into the improved residual error identification network, setting reasonable training hyper-parameters, and then training based on a Pythrch frame;
5. and (5) until the training loss converges, selecting a model with the highest recognition rate, namely obtaining a trained improved residual error network palm vein recognition model (recorded as model 2) with palm vein recognition capability.
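Purely to show how the pieces above fit together, a compact training-loop sketch follows. It reuses the MultiScaleResidualUnit and ArcMarginWithCenterLoss sketch classes defined earlier in this embodiment (both illustrative assumptions, not the patented implementation); the toy backbone, optimizer settings and synthetic data are likewise assumptions.

```python
import torch
import torch.nn as nn
from torch.utils.data import DataLoader, TensorDataset

# MultiScaleResidualUnit and ArcMarginWithCenterLoss refer to the illustrative
# sketch classes shown earlier in this embodiment.

class TinyPalmVeinNet(nn.Module):
    """Toy backbone: a stem, a few multi-scale residual units and a 512-d embedding head."""

    def __init__(self, embed_dim=512, channels=64, num_units=6):
        super().__init__()
        self.stem = nn.Sequential(
            nn.Conv2d(1, channels, 7, stride=2, padding=3, bias=False),
            nn.BatchNorm2d(channels), nn.ReLU(inplace=True),
            nn.MaxPool2d(3, stride=2, padding=1),
        )
        self.blocks = nn.Sequential(*[MultiScaleResidualUnit(channels) for _ in range(num_units)])
        self.head = nn.Sequential(nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(channels, embed_dim))

    def forward(self, x):
        return self.head(self.blocks(self.stem(x)))

def train(model, criterion, loader, epochs=1, lr=1e-3, device="cpu"):
    model.to(device)
    criterion.to(device)
    opt = torch.optim.Adam(list(model.parameters()) + list(criterion.parameters()), lr=lr)
    for epoch in range(epochs):
        for images, labels in loader:
            images, labels = images.to(device), labels.to(device)
            feats = model(images)
            loss = criterion(feats, labels)
            opt.zero_grad()
            loss.backward()
            opt.step()
            criterion.update_centers(feats.detach(), labels)  # class-center update after the step
        print(f"epoch {epoch}: loss {loss.item():.4f}")

if __name__ == "__main__":
    # tiny synthetic stand-in for the enhanced 6000-palm x 100-image ROI training set
    data = TensorDataset(torch.randn(64, 1, 112, 112), torch.randint(0, 8, (64,)))
    net = TinyPalmVeinNet()
    crit = ArcMarginWithCenterLoss(feat_dim=512, num_classes=8, m=0.3)
    train(net, crit, DataLoader(data, batch_size=16, shuffle=True))
```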
The enhanced ROI image of infrared image pic1 and the enhanced ROI image of infrared image pic2 are each input into the trained palm vein recognition model based on the improved residual error network, and the output vector of the fully connected layer is obtained; this output vector is the extracted feature vector.
5) Palm vein verification judgment: an identification threshold T is set and the distance between the feature vector and the registration template is calculated; if the distance is less than the threshold T, palm vein authentication succeeds, otherwise it fails. The formula for calculating the distance between the feature vector and the registration template is:
diff = √( Σ_{i=1..n} ( features1_i − tmpl1_i )² )

where diff is the distance between the feature vector and the registration template, features1 is the feature vector, tmpl1 is the registration template corresponding to features1 in the template library, i denotes the i-th dimension of the feature vector and the registration template, i is an integer with 0 < i ≤ 512, and n is the total dimensionality of the feature vector and the registration template.
In this embodiment T = 0.89, and diff = 0.55 for pic1 and pic2; since diff < T, palm vein verification succeeds, i.e. pic1 and pic2 come from the same palm.
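As an illustration, the verification decision of step 5) can be sketched as follows. The Euclidean form of the distance follows the reconstructed formula above and is an assumption, as is the L2 normalization of the 512-dimensional feature vectors.

```python
import numpy as np

def palm_vein_distance(features1: np.ndarray, tmpl1: np.ndarray) -> float:
    """Distance between a 512-d feature vector and a stored registration template (assumed Euclidean)."""
    return float(np.sqrt(np.sum((features1 - tmpl1) ** 2)))

def verify(features1: np.ndarray, tmpl1: np.ndarray, T: float = 0.89) -> bool:
    """Authentication succeeds when the distance is below the identification threshold T."""
    return palm_vein_distance(features1, tmpl1) < T

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    feat = rng.normal(size=512).astype(np.float32)
    feat /= np.linalg.norm(feat)   # L2-normalized feature vector
    tmpl = feat + rng.normal(scale=0.02, size=512).astype(np.float32)
    tmpl /= np.linalg.norm(tmpl)   # slightly perturbed copy acting as the stored template
    print(verify(feat, tmpl))      # True: the distance stays well below T = 0.89
```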
Comparative Example 1
A palm vein recognition model (denoted model3) was trained using a ResNet50 residual network with ArcFace as the loss function, with the same palm vein training data, environment and data enhancement as in Embodiment 1.
Experimental examples
Experiment 1: using a non-contact device, normal infrared palm vein images of 1000 people were collected at a distance of [90, 120] mm from the camera; each person provided 10 infrared palm images of the left palm and 10 of the right palm in a normal posture. Identification verification was then performed with the palm vein identification method of Embodiment 1 and with the palm vein recognition model model3 of Comparative Example 1. Verification method: each palm class is registered with its first infrared palm vein image, the remaining 9 images of that palm are verified against it, and the pass rate is counted. The specific statistical results are shown in Table 1.
Experiment 2: based on the left and right palm registration templates of the 1000 people registered in Experiment 1, infrared palm images randomly rotated within the angle range [-45°, +45°] were collected at a distance of [90, 120] mm from the camera, 9 images per palm. Identification verification was performed with the palm vein identification method of Embodiment 1 and with the palm vein recognition model model3 of Comparative Example 1; the statistical results are shown in Table 1.
Table 1: palm vein verification passing rate statistical table
Figure 629535DEST_PATH_IMAGE019
Experimental conclusion: Experiment 1 shows that, compared with the unmodified residual recognition network model, the palm vein identification method provided by this application achieves a higher verification success rate on palm vein images captured with non-contact equipment. The result of Experiment 2 shows that the method also achieves a better pass rate when verifying rotated palm veins. Comparing the results of Experiments 1 and 2, the identification method provided by this application maintains a high pass rate for multi-pose palm vein verification. In summary, the non-contact palm vein identification method based on the improved residual error network has a high recognition rate and robustness for both normal and multi-pose palm vein verification.
The present invention has been described in detail with reference to the embodiments, but the description covers only preferred embodiments and should not be construed as limiting the scope of the invention. All equivalent changes and modifications made within the scope of the present invention shall fall within its scope of protection.

Claims (8)

1. A non-contact palm vein identification method based on an improved residual error network, characterized in that it comprises the following steps:
1) acquiring two infrared images of the same palm with a non-contact device, denoted pic1 and pic2;
2) locating the ROI region of the palm: inputting the two infrared images pic1 and pic2 into a trained ROI detection deep learning model to obtain the corresponding ROI position information, and cropping the ROI images of the two infrared images according to that position information;
3) palm vein registration: performing data enhancement on the ROI image of infrared image pic1, inputting it into the trained improved residual error network palm vein recognition model for feature extraction to obtain a feature vector, taking the feature vector as the registration template and storing it in the template library;
4) palm vein verification: performing data enhancement on the ROI image of infrared image pic2 and inputting it into the trained improved residual error network palm vein recognition model for feature extraction to obtain a feature vector;
5) palm vein verification judgment: setting an identification threshold T and calculating the distance between the feature vector and the registration template; if the distance is less than the threshold T, palm vein authentication succeeds, otherwise it fails;
in steps 3) and 4), the improved residual error network palm vein recognition model refers to a palm vein recognition model in which the structure of the residual module is modified and the loss function of the residual network is modified;
the residual module structure is modified by adding convolution mappings at 2 additional scales to the ResNet50 network structure, i.e. adding extraction of palm vein texture and edge information at different scales, while reducing the number of residual units, yielding the improved residual module structure;
the loss function of the residual network is modified by adding, on the basis of the ArcFace loss function, a constraint term on the intra-class center so that the intra-class distance becomes smaller, and by increasing a multiplying coefficient on the angle, which regulates the angle according to the input of the residual unit and the inter-class distance, yielding the improved loss function.
2. The non-contact palm vein recognition method based on the improved residual error network as claimed in claim 1, wherein the specific steps of locating the ROI region of the palm in step 2) include:
2.1) manually labeling the ROI regions of an image library of palm images to obtain labeling information;
2.2) inputting the image library and the corresponding ROI labeling information into a MobileNet_SSD network for training, and, when the training loss converges, selecting the model with the highest ROI detection rate as the ROI detection deep learning model;
2.3) inputting the two infrared images pic1 and pic2 into the trained ROI detection deep learning model respectively, and outputting the ROI position information detected in each image.
3. The non-contact palm vein recognition method based on the improved residual error network as claimed in claim 1, wherein, in steps 3) and 4), the ROI image data enhancement consists of denoising the ROI image data with Gaussian filtering, followed by histogram equalization enhancement, normalization, scaling and rotation.
4. The non-contact palm vein recognition method based on the improved residual error network as claimed in claim 1, wherein the calculation formula of the improved residual module structure is:
H(x) = a·F(x)_1 + b·F(x)_2 + c·F(x)_3 + x

where H(x) is the output of the residual unit; F(x)_1, F(x)_2 and F(x)_3 are the outputs of the 3 convolution modules; x is the input of the residual unit, i.e. an identity mapping from input to output; and a, b and c are the weights of the outputs of the 3 convolution modules.
5. The non-contact palm vein recognition method based on the improved residual error network as claimed in claim 4, wherein, when the residual module structure is modified, the number of residual units is reduced from 16 to 6.
6. The non-contact palm vein recognition method based on the improved residual error network as claimed in claim 1, wherein the calculation formulas of the modified residual error network include:
L = −(1/N)·Σ_{i=1..N} log( e^{s·cos(A·θ_(y_i,i) + m)} / ( e^{s·cos(A·θ_(y_i,i) + m)} + Σ_{j≠y_i} e^{s·cos θ_(j,i)} ) ) + (λ/2)·Σ_{i=1..N} ‖x_i − c_(y_i)‖²

Δc_j^t = Σ_{i=1..N} δ(y_i = j)·(c_j − x_i) / ( 1 + Σ_{i=1..N} δ(y_i = j) )

c_j^(t+1) = c_j^t − α·Δc_j^t

where L denotes the improved loss function; x_i denotes the feature vector of the i-th sample; y_i denotes the label of the class to which the i-th sample belongs; i denotes the i-th sample and j the j-th class, with 0 < i ≤ N, j = 1 or 2, and i, j integers; N is the total number of training samples; s is the product of the network weight and the output modulus; θ_(y_i,i) is the angle associated with class y_i and sample i; A is the multiplying coefficient of θ_(y_i,i); m is the parameter for decreasing the intra-class distance and increasing the inter-class distance; λ is the penalty degree of the penalty term; c_j is the center of the j-th class; t is the iteration number; α is the learning rate of the intra-class center update; and Δc_j^t is the variation of the j-th class center after the t-th iteration.
7. The non-contact palm vein recognition method based on the improved residual error network as claimed in claim 1, wherein the training method of the palm vein recognition model based on the improved residual error network is as follows: data enhancement is performed on the prepared palm vein ROI image data, which is then input into the improved residual error network for training; after the network converges, the trained palm vein recognition model based on the improved residual error network is obtained; in steps 3) and 4), the enhanced ROI image of infrared image pic1 and the enhanced ROI image of infrared image pic2 are each input into the trained palm vein recognition model based on the improved residual error network, and the output vector of the fully connected layer is obtained; this output vector is the extracted feature vector.
8. The non-contact palm vein recognition method based on the improved residual error network as claimed in claim 1, wherein the formula for calculating the distance between the feature vector and the registration template in step 5) is:
diff = √( Σ_{i=1..n} ( features1_i − tmpl1_i )² )

where diff is the distance between the feature vector and the registration template, features1 is the feature vector, tmpl1 is the registration template corresponding to features1 in the template library, i denotes the i-th dimension of the feature vector and the registration template, i is an integer with 0 < i ≤ 512, and n is the total dimensionality of the feature vector and the registration template.
CN202011379940.8A 2020-12-01 2020-12-01 Non-contact palm vein identification method based on improved residual error network Active CN112200159B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202011379940.8A CN112200159B (en) 2020-12-01 2020-12-01 Non-contact palm vein identification method based on improved residual error network

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202011379940.8A CN112200159B (en) 2020-12-01 2020-12-01 Non-contact palm vein identification method based on improved residual error network

Publications (2)

Publication Number Publication Date
CN112200159A true CN112200159A (en) 2021-01-08
CN112200159B (en) 2021-02-19

Family

ID=74034364

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202011379940.8A Active CN112200159B (en) 2020-12-01 2020-12-01 Non-contact palm vein identification method based on improved residual error network

Country Status (1)

Country Link
CN (1) CN112200159B (en)

Cited By (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112801066A (en) * 2021-04-12 2021-05-14 北京圣点云信息技术有限公司 Identity recognition method and device based on multi-posture facial veins
CN113673343A (en) * 2021-07-19 2021-11-19 西安交通大学 Open set palm print recognition system and method based on weighted element metric learning
CN114120381A (en) * 2021-11-29 2022-03-01 广州新科佳都科技有限公司 Palm vein feature extraction method and device, electronic device and medium
CN114140424A (en) * 2021-11-29 2022-03-04 佳都科技集团股份有限公司 Palm vein data enhancement method and device, electronic equipment and medium
CN116363712A (en) * 2023-03-21 2023-06-30 中国矿业大学 Palmprint palm vein recognition method based on modal informativity evaluation strategy

Citations (14)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105389539A (en) * 2015-10-15 2016-03-09 电子科技大学 Three-dimensional gesture estimation method and three-dimensional gesture estimation system based on depth data
CN107145829A (en) * 2017-04-07 2017-09-08 电子科技大学 A kind of vena metacarpea recognition methods for merging textural characteristics and scale invariant feature
CN108875705A (en) * 2018-07-12 2018-11-23 广州麦仑信息科技有限公司 A kind of vena metacarpea feature extracting method based on Capsule
CN108985231A (en) * 2018-07-12 2018-12-11 广州麦仑信息科技有限公司 A kind of vena metacarpea feature extracting method based on multiple dimensioned convolution kernel
CN109241995A (en) * 2018-08-01 2019-01-18 中国计量大学 A kind of image-recognizing method based on modified ArcFace loss function
CN110147732A (en) * 2019-04-16 2019-08-20 平安科技(深圳)有限公司 Refer to vein identification method, device, computer equipment and storage medium
CN110197099A (en) * 2018-02-26 2019-09-03 腾讯科技(深圳)有限公司 The method and apparatus of across age recognition of face and its model training
CN111274924A (en) * 2020-01-17 2020-06-12 厦门中控智慧信息技术有限公司 Palm vein detection model modeling method, palm vein detection method and palm vein detection device
CN111325687A (en) * 2020-02-14 2020-06-23 上海工程技术大学 Smooth filtering evidence obtaining method based on end-to-end deep network
CN111400535A (en) * 2020-03-11 2020-07-10 广东宜教通教育有限公司 Lightweight face recognition method, system, computer device and storage medium
CN111639557A (en) * 2020-05-15 2020-09-08 圣点世纪科技股份有限公司 Intelligent registration feedback method for finger vein image
CN111639558A (en) * 2020-05-15 2020-09-08 圣点世纪科技股份有限公司 Finger vein identity verification method based on ArcFace Loss and improved residual error network
US20200302184A1 (en) * 2019-03-21 2020-09-24 Samsung Electronics Co., Ltd. Electronic device and controlling method thereof
CN111931758A (en) * 2020-10-19 2020-11-13 北京圣点云信息技术有限公司 Face recognition method and device combining facial veins

Patent Citations (14)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105389539A (en) * 2015-10-15 2016-03-09 电子科技大学 Three-dimensional gesture estimation method and three-dimensional gesture estimation system based on depth data
CN107145829A (en) * 2017-04-07 2017-09-08 电子科技大学 A kind of vena metacarpea recognition methods for merging textural characteristics and scale invariant feature
CN110197099A (en) * 2018-02-26 2019-09-03 腾讯科技(深圳)有限公司 The method and apparatus of across age recognition of face and its model training
CN108985231A (en) * 2018-07-12 2018-12-11 广州麦仑信息科技有限公司 A kind of vena metacarpea feature extracting method based on multiple dimensioned convolution kernel
CN108875705A (en) * 2018-07-12 2018-11-23 广州麦仑信息科技有限公司 A kind of vena metacarpea feature extracting method based on Capsule
CN109241995A (en) * 2018-08-01 2019-01-18 中国计量大学 A kind of image-recognizing method based on modified ArcFace loss function
US20200302184A1 (en) * 2019-03-21 2020-09-24 Samsung Electronics Co., Ltd. Electronic device and controlling method thereof
CN110147732A (en) * 2019-04-16 2019-08-20 平安科技(深圳)有限公司 Refer to vein identification method, device, computer equipment and storage medium
CN111274924A (en) * 2020-01-17 2020-06-12 厦门中控智慧信息技术有限公司 Palm vein detection model modeling method, palm vein detection method and palm vein detection device
CN111325687A (en) * 2020-02-14 2020-06-23 上海工程技术大学 Smooth filtering evidence obtaining method based on end-to-end deep network
CN111400535A (en) * 2020-03-11 2020-07-10 广东宜教通教育有限公司 Lightweight face recognition method, system, computer device and storage medium
CN111639557A (en) * 2020-05-15 2020-09-08 圣点世纪科技股份有限公司 Intelligent registration feedback method for finger vein image
CN111639558A (en) * 2020-05-15 2020-09-08 圣点世纪科技股份有限公司 Finger vein identity verification method based on ArcFace Loss and improved residual error network
CN111931758A (en) * 2020-10-19 2020-11-13 北京圣点云信息技术有限公司 Face recognition method and device combining facial veins

Non-Patent Citations (6)

* Cited by examiner, † Cited by third party
Title
CHEN ZHENZHOU 等: "Face recognition based on improved residual neural network", 《2019 CHINESE CONTROL AND DECISION CONFERENCE》 *
JIANKANG DENG 等: "ArcFace: Additive Angular Margin Loss for Deep Face Recognition", 《2019 IEEE/CVF CONFERENCE ON COMPUTER VISION AND PATTERN RECOGNITION》 *
MINGXING TAN: "MixConv: Mixed Depthwise Convolutional Kernels", 《ARXIV.ORG》 *
ZHONGKAI MA 等: "A Lightweight Real-Time Semantic Segmentation Network for Equipment Images in Space Capsule", 《2020 INTERNATIONAL WORKSHOP ON ELECTRONIC COMMUNICATION AND ARTIFICIAL INTELLIGENCE》 *
张娜 等 (Zhang Na et al.): "Finger vein recognition method based on deep residual network and discrete hashing", 《Journal of Zhejiang Sci-Tech University (Natural Sciences Edition)》 *
覃勇杰 (Qin Yongjie): "Research on specific target detection technology based on an airborne system", 《China Masters' Theses Full-text Database, Information Science and Technology》 *

Cited By (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112801066A (en) * 2021-04-12 2021-05-14 北京圣点云信息技术有限公司 Identity recognition method and device based on multi-posture facial veins
CN112801066B (en) * 2021-04-12 2022-05-17 北京圣点云信息技术有限公司 Identity recognition method and device based on multi-posture facial veins
CN113673343A (en) * 2021-07-19 2021-11-19 西安交通大学 Open set palm print recognition system and method based on weighted element metric learning
CN113673343B (en) * 2021-07-19 2023-09-19 西安交通大学 Open set palmprint recognition system and method based on weighting element measurement learning
CN114120381A (en) * 2021-11-29 2022-03-01 广州新科佳都科技有限公司 Palm vein feature extraction method and device, electronic device and medium
CN114140424A (en) * 2021-11-29 2022-03-04 佳都科技集团股份有限公司 Palm vein data enhancement method and device, electronic equipment and medium
CN116363712A (en) * 2023-03-21 2023-06-30 中国矿业大学 Palmprint palm vein recognition method based on modal informativity evaluation strategy
CN116363712B (en) * 2023-03-21 2023-10-31 中国矿业大学 Palmprint palm vein recognition method based on modal informativity evaluation strategy

Also Published As

Publication number Publication date
CN112200159B (en) 2021-02-19

Similar Documents

Publication Publication Date Title
CN112200159B (en) Non-contact palm vein identification method based on improved residual error network
US7596247B2 (en) Method and apparatus for object recognition using probability models
Ahmad et al. Non-stationary feature fusion of face and palmprint multimodal biometrics
Michael et al. A contactless biometric system using palm print and palm vein features
CN107729820B (en) Finger vein identification method based on multi-scale HOG
Ong et al. A single-sensor hand geometry and palmprint verification system
Vorugunti et al. Osvnet: Convolutional siamese network for writer independent online signature verification
US20200265211A1 (en) Fingerprint distortion rectification using deep convolutional neural networks
Dong et al. Finger vein verification based on a personalized best patches map
CN111325275B (en) Robust image classification method and device based on low-rank two-dimensional local identification map embedding
Yılmaz Offline signature verification with user-based and global classifiers of local features
CN111639562A (en) Intelligent positioning method for palm region of interest
Raghavendra et al. Robust palmprint verification using sparse representation of binarized statistical features: A comprehensive study
CN112183504B (en) Video registration method and device based on non-contact palm vein image
Doroz et al. Multidimensional nearest neighbors classification based system for incomplete lip print identification
Kour et al. Palmprint recognition system
Liu et al. Palm-dorsa vein recognition based on independent principle component analysis
CN113705647B (en) Dual semantic feature extraction method based on dynamic interval
Huang et al. Writer age estimation through handwriting
Angadi et al. Iris recognition: a symbolic data modeling approach using Savitzky-Golay filter energy features
George Development of efficient biometric recognition algorithms based on fingerprint and face
CN112270287A (en) Palm vein identification method based on rotation invariance
Fratric et al. Real-time model-based hand localization for unsupervised palmar image acquisition
Poonia et al. Palm-print recognition based on image quality and texture features with neural network
Marcialis et al. Large scale experiments on fingerprint liveness detection

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant
GR01 Patent grant
TR01 Transfer of patent right

Effective date of registration: 20210907

Address after: 030032 room 0906, floor 9, building C, qingkong innovation base, No. 529, South Central Street, Taiyuan Xuefu Park, comprehensive reform demonstration zone, Taiyuan, Shanxi Province

Patentee after: Holy Point Century Technology Co.,Ltd.

Address before: 9 / F, unit 1, building 2, no.41-5, Jinsha North 2nd Road, Jinniu District, Chengdu, Sichuan 610000

Patentee before: Sichuan ShengDian Century Technology Co.,Ltd.

TR01 Transfer of patent right
PE01 Entry into force of the registration of the contract for pledge of patent right
PE01 Entry into force of the registration of the contract for pledge of patent right

Denomination of invention: A non-contact palmar vein recognition method based on improved residual network

Effective date of registration: 20220607

Granted publication date: 20210219

Pledgee: Sub branch directly under Shanxi Branch of China Postal Savings Bank Co.,Ltd.

Pledgor: Holy Point Century Technology Co.,Ltd.

Registration number: Y2022140000022

PC01 Cancellation of the registration of the contract for pledge of patent right
PC01 Cancellation of the registration of the contract for pledge of patent right

Date of cancellation: 20230621

Granted publication date: 20210219

Pledgee: Sub branch directly under Shanxi Branch of China Postal Savings Bank Co.,Ltd.

Pledgor: Holy Point Century Technology Co.,Ltd.

Registration number: Y2022140000022

PE01 Entry into force of the registration of the contract for pledge of patent right
PE01 Entry into force of the registration of the contract for pledge of patent right

Denomination of invention: A non-contact palmar vein recognition method based on improved residual network

Effective date of registration: 20230626

Granted publication date: 20210219

Pledgee: Sub branch directly under Shanxi Branch of China Postal Savings Bank Co.,Ltd.

Pledgor: Holy Point Century Technology Co.,Ltd.

Registration number: Y2023980045525