CN115797987A - Finger vein identification method based on joint loss and convolutional neural network - Google Patents

Finger vein identification method based on joint loss and convolutional neural network
- Publication number: CN115797987A
- Application number: CN202211053355.8A
- Authority: CN (China)
- Legal status: Pending (the legal status is an assumption and is not a legal conclusion; Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed)
Abstract
The invention discloses a finger vein identification method based on joint loss and a convolutional neural network, comprising the following steps: collecting finger vein images and preprocessing them by extracting the region of interest of each finger vein image and normalizing the image size; constructing a deep convolutional neural network for finger vein recognition, with a feature extraction network built on the ResNet34 structure; model training, in which the deep convolutional neural network is trained with a joint loss function over a Euclidean metric space and a cosine metric space, so that the network model learns an effective feature representation of the finger vein; and determining an identification threshold, in which the optimal identification threshold is found by enumeration with the false acceptance rate and the false rejection rate as criteria, to realize identification of finger vein images. The method extracts and matches finger vein image features and can effectively identify finger vein identity.
Description
Technical Field
The invention relates to computer vision and biometric identification technology, and in particular to a method suitable for finger vein identity authentication.
Background
With the development of information technology, people's demand for information security keeps growing, and biometric identification is therefore widely applied. Biometric identification technology performs identity authentication using human physiological or behavioral features; the most common forms are fingerprint recognition, face recognition and iris recognition. Fingerprints and faces are external biological features that are convenient to use and fast to recognize, but they carry the risk of being stolen or imitated. The iris is an internal biological feature, so its security is higher, but the user experience during authentication is poor. The finger vein is an internal biological feature with the advantages of uniqueness, non-contact acquisition and high security, and therefore has great research value.
Finger vein recognition first collects vein images of the fingers through a sensor, preprocesses them, extracts finger vein features, and compares those features with the vein feature templates in a database to complete identity authentication. Existing finger vein recognition methods fall roughly into two types: traditional vein recognition methods and recognition methods based on deep learning.
Traditional vein identification methods rest on classical image processing: vein lines, geometric topological structure, local binary codes and the like serve as features that measure the differences between finger veins; the quantifiable attributes of these features are extracted with a preset model and used as a template, and in the identification stage the sample to be tested is compared against the template. Such methods depend on expert hand-crafted design, the generalization ability of the algorithm is limited, and recognition performance drops sharply under finger translation, finger rotation and similar conditions.
Deep-learning-based methods construct an artificial neural network and learn the feature representation of finger veins from large amounts of data; compared with traditional methods, they offer better generalization and higher recognition accuracy. However, such methods commonly face small training sample sizes and poorly designed loss functions, which limits the application of deep learning in the field of finger vein recognition.
Disclosure of Invention
Aiming at the defects of the prior art, namely the limited expressive power of finger vein feature extraction and the resulting limited applicability, the invention provides a finger vein identification method that gives the network a stronger feature representation capability.

The technical scheme adopted by the invention to solve these technical problems is the finger vein identification method set out in the steps and claims below.
In summary, owing to the adoption of this technical scheme, the invention has the following beneficial effects:

The invention provides a finger vein identification method based on joint loss and a convolutional neural network. Built on a deep convolutional neural network, the finger vein recognition network can express deeper features; the joint loss function over the Euclidean metric space and the cosine metric space makes sample features compact within classes and dispersed between classes, giving the network stronger feature expression capability; and the sample-pairing training method eases the difficulty that deep learning has with small amounts of training data, thereby improving the recognition effect.
Drawings
FIG. 1 is a schematic diagram of a data set partitioning protocol of the present invention;
FIG. 2 is a schematic diagram of the training process of the present invention;
FIG. 3 is a schematic representation of the joint loss of the present invention.
Detailed Description
Step 1, collecting a finger vein image, preprocessing the finger vein image to obtain a finger vein sample, labeling a classification label, and establishing a finger vein data set;
step 2, constructing and training a deep convolution neural network for finger vein recognition:
the deep convolutional neural network comprises a feature extraction network and a classification network; the characteristic extraction network is used for extracting vein characteristic vectors from the finger vein samples and inputting the vein characteristic vectors into the classification network; the classification network is used for receiving the vein feature vector and outputting a finger vein recognition result;
the feature extraction network is based on a joint loss optimization network during training; joint loss function L = L 1 +L 2 Wherein L is 1 To constrain the feature representation of the sample from Euclidean space, L 2 The feature representation of the constrained sample on the cosine space:
wherein alpha and beta are regulatory factors, respectively for the regulation of L 1 And L 2 The loss prevents the two from being too different, | · non-woven counting 2 2-norm representing vector, f (-) representing feature extraction function, x i (i e N) represents the ith input finger vein image, N represents the number of samples of one batch,respectively representing an anchor point sample a of an ith input finger vein image, a positive sample p which is the same as the anchor point sample and a negative sample n which is different from the anchor point sample, wherein beta represents the increasing interval of the inter-class distance and the intra-class distance; l is 2 In (1),representing the ith sample feature and network weightThe angle between the vectors, due to | | | x i || 2 Andthe feature vectorization is performed, the result is 1, simplification can be achieved, s represents a scaling factor, m represents an increasing interval between the weight and the feature, θ j Represents the weight w yj And the jth sample (i ≠ j) vector. The joint loss function enables the network to learn the characteristic representation of the finger vein from the Euclidean measurement space and the cosine measurement space, and the data scale of the original sample number is expanded from N to N through the training mode of sample pairing
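For illustration, a minimal pytorch sketch of this joint loss follows, assuming L2-normalized input features; the class name JointLoss and the default hyper-parameter values are assumptions for exposition, not the patent's reference implementation:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class JointLoss(nn.Module):
    # Illustrative sketch: default alpha/beta/gamma/s/m values are assumptions.
    def __init__(self, num_classes, feat_dim=512, alpha=1.0, beta=1.0,
                 gamma=0.3, s=30.0, m=0.5):
        super().__init__()
        self.alpha, self.beta, self.gamma, self.s, self.m = alpha, beta, gamma, s, m
        # Class-center weights for the cosine-space term.
        self.weight = nn.Parameter(torch.empty(num_classes, feat_dim))
        nn.init.xavier_uniform_(self.weight)

    def forward(self, f_a, f_p, f_n, labels):
        # L1: triplet loss in Euclidean space with margin gamma.
        d_ap = (f_a - f_p).pow(2).sum(dim=1)
        d_an = (f_a - f_n).pow(2).sum(dim=1)
        l1 = F.relu(d_ap - d_an + self.gamma).mean()
        # L2: additive angular margin loss in cosine space (scale s, margin m).
        cos = F.linear(F.normalize(f_a), F.normalize(self.weight))  # N x C cosines
        theta = torch.acos(cos.clamp(-1 + 1e-7, 1 - 1e-7))
        target = F.one_hot(labels, cos.size(1)).bool()
        logits = self.s * torch.where(target, torch.cos(theta + self.m), cos)
        l2 = F.cross_entropy(logits, labels)
        return self.alpha * l1 + self.beta * l2
```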
In the training stage, the classification network predicts the class of each sample, and the weights of the feature extraction network and the classification network are updated by back-propagating the error between the predicted value and the true value;
step 3, calculating an identification threshold value:
after the deep convolutional neural network training is finished, removing the classification network and reserving the feature extraction network; and extracting features on the finger vein data set by using a feature extraction network to represent, measuring the distance between the extracted features and the corresponding vein sample features in the data set by using cosine similarity, and comparing the distance with a given threshold value to judge whether the two features are similar. And determining the value of the identification threshold according to the similarity judgment result. Specifically, the false rejection rate FRR and the false acceptance rate FAR can be obtained through the similarity judgment result to evaluate the feature extraction network effect and determine the value of the identification threshold.
Step 4, outputting the prediction result for an image to be judged based on the finger vein feature extraction network:
for a registered finger vein image, acquiring feature representation of the registered finger vein based on a finger vein feature extraction network, and storing the feature representation as a registration template in a feature database;
and acquiring the feature representation of the finger vein to be judged through the feature extraction network for the inquired finger vein image to be judged, calculating the cosine similarity between the feature representation and a template in the feature database, comparing the cosine similarity with an identification threshold, judging whether the finger vein is registered or not, and finishing identity identification.
Specifically, in step 1, the image preprocessing includes region of interest ROI extraction and image size normalization of the finger vein image.
Further, image ROI extraction uses the sobel operator to obtain the finger boundary, fits the central axis of the finger by least squares, rotationally corrects the finger, locates the knuckles from the finger joint cavities, and crops the image region of interest at the knuckles.
Specifically, in step 1, the finger vein data set is divided into a training set, a verification set and a test set based on an open set protocol, and the test set is divided into a registration set and a query set.
Further, relative to an existing deep convolutional neural network, the deep convolutional neural network for finger vein recognition is improved as follows: the feature extraction network adopts the ResNet34 structure with the 7 × 7 convolution kernel replaced by small 3 × 3 convolution kernels, retaining more local detail information while reducing the number of network parameters; the BatchNorm in the original residual module is moved to before the convolution operation, which keeps the input distribution stable and lets the network learn from the original data distribution; the ReLU activation function is replaced with a Leaky ReLU function to avoid mean shift; a channel attention module is introduced at the end of the residual module so that the network attends to the more important information; and L2 regularization is applied to the final output features of the network, making cosine similarity convenient to compute.
The present invention will be described in further detail below with reference to embodiments and the accompanying drawings.
Firstly, step 1 preprocesses the acquired image, specifically as follows:
s1-1, inputting a finger vein image, carrying out convolution operation on the image by using a sobel operator, and coarsely filtering background information of the image to obtain gradients in an X direction and a Y direction, wherein the gradients are calculated as follows:
wherein A represents an original image, G x Indicating the gradient of the vein image in the X-direction, G y Diagram showing finger veinLike a gradient in the Y direction.
S1-2, because other noise remains in the image after the background information has been coarsely filtered, compute the connected components of the non-zero pixels with a union-find (disjoint-set) search, and set connected components containing fewer than 5 pixels to zero, obtaining a binary image of the finger boundary.
S1-3, from the upper and lower finger boundary coordinates $y_i^{up}$ and $y_i^{down}$ on the binary image at each abscissa $x_i$, calculate the coordinates of the central axis of the finger

$$y_i^{mid}=\frac{y_i^{up}+y_i^{down}}{2},\quad i\in[0,W-1]$$

where $W$ is the image width, and fit the finger central axis by least squares.
S1-4, calculate the slope of the fitted finger central axis; the angle corresponding to this slope is the finger rotation angle. Rotationally correct the finger vein image by the rotation angle $\theta$ to obtain the corrected image $G'$, the rotation angle being calculated as

$$\theta=\arctan\frac{y_2-y_1}{x_2-x_1}$$

wherein $y_1$, $y_2$ are the ordinates of any two points on the finger central axis and $x_1$, $x_2$ the corresponding abscissas.
S1-5, determine the ROI boundary of the finger vein image. Because the finger joint cavities appear as regions of higher brightness in the finger vein image, the pixel values along the finger central axis show two peaks; the left and right boundaries of the ROI are determined from the abscissas of these two peaks, and the upper and lower boundaries of the ROI from the upper and lower boundaries of the finger. Specifically, first obtain the pixel values $G'[x_i][y_i^{mid}]$ ($i\in[0,W-1]$, $W$ the image width) of the corrected image $G'$ along the central axis, and traverse them with a sliding window to obtain the abscissas $x_l$ and $x_r$ of the highest and second-highest pixel values, i.e. the left and right boundaries of the ROI. Then obtain the upper and lower finger boundaries $y_h^{up}$ and $y_h^{down}$ of the corrected image $G'$ as in step S1-2, where $h\in[0,H-1]$ and $H$ is the image height, and take from them the upper and lower boundaries of the vein-image ROI.
S1-6, crop the region of interest from the corrected image $G'$ according to the ROI boundary coordinates, and normalize the size of the cropped image to 300 × 100.
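For illustration, a minimal sketch of steps S1-1 to S1-6 with OpenCV follows; the gradient threshold, the use of the image centre row as the mid-line profile, the naive two-peak search and the function name extract_roi are assumptions for exposition rather than the patent's exact procedure:

```python
import cv2
import numpy as np

def extract_roi(img):
    """img: grayscale finger vein image (H x W); returns a 300 x 100 ROI."""
    # S1-1: Sobel gradients coarsely filter the background.
    gx = cv2.Sobel(img, cv2.CV_64F, 1, 0, ksize=3)
    gy = cv2.Sobel(img, cv2.CV_64F, 0, 1, ksize=3)
    edges = ((np.abs(gx) + np.abs(gy)) > 100).astype(np.uint8)  # assumed threshold
    # S1-2: zero out connected components with fewer than 5 pixels (noise).
    n, cc = cv2.connectedComponents(edges)
    for k in range(1, n):
        if (cc == k).sum() < 5:
            edges[cc == k] = 0
    # S1-3: mid-line points halfway between upper and lower finger boundaries.
    ys, xs = np.nonzero(edges)
    cols = np.unique(xs)
    mid = np.array([(ys[xs == x].min() + ys[xs == x].max()) / 2 for x in cols])
    # S1-4: least-squares line fit gives the rotation angle; de-rotate the image.
    slope, _ = np.polyfit(cols, mid, 1)
    angle = np.degrees(np.arctan(slope))
    h, w = img.shape
    M = cv2.getRotationMatrix2D((w / 2, h / 2), angle, 1.0)
    corrected = cv2.warpAffine(img, M, (w, h))
    # S1-5: the bright knuckle cavities give two intensity peaks along the
    # mid-line (approximated here by the centre row after correction).
    profile = corrected[h // 2, :].astype(float)
    xl, xr = np.sort(np.argsort(profile)[-2:])
    # S1-6: crop between the peaks and normalize the size to 300 x 100.
    return cv2.resize(corrected[:, xl:xr], (300, 100))
```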
S1-7, divide the finger vein data set under an open-set protocol; referring to FIG. 1, the samples of half of the finger vein classes form the training set, and the remaining samples are divided between a verification set and a test set (1:1).
In step 2, the deep convolutional neural network for finger vein recognition constructed by the invention comprises a feature extraction network and a classification network. In this embodiment, the network is built with the pytorch deep learning framework. The feature extraction network is based on the ResNet34 architecture: the 7 × 7 convolution kernel is replaced with small 3 × 3 convolution kernels to retain more local detail features; the residual module is modified by moving BatchNorm to before the convolution operation, which keeps the distribution of the input features stable and lets the network learn from the original data distribution; the ReLU activation function is replaced with a Leaky ReLU activation function to avoid mean shift during learning; a channel attention module SE is introduced at the end of the module so that the network pays more attention to information-rich features; and L2 regularization is applied to the final output features of the network for convenient computation of cosine similarity. The specific parameters are shown in the following table:
table 1 improved ResNet34 network architecture parameters
The classification network consists of two fully-connected layers: the first layer contains 512 neurons and uses the swish activation function; the second layer contains n neurons, where n is the number of training-sample classes, and outputs a 1 × n vector representing the probability distribution over the classes to which a sample may belong.
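For illustration, a minimal pytorch sketch of the modified residual module and this two-layer classification head follows; the channel width, the Leaky ReLU slope, the SE reduction ratio and the names SEBlock, PreNormResidualBlock and make_classifier are assumptions for exposition (swish corresponds to nn.SiLU in pytorch):

```python
import torch
import torch.nn as nn

class SEBlock(nn.Module):
    """Squeeze-and-excitation channel attention."""
    def __init__(self, channels, reduction=16):
        super().__init__()
        self.fc = nn.Sequential(
            nn.Linear(channels, channels // reduction), nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels), nn.Sigmoid())

    def forward(self, x):
        w = self.fc(x.mean(dim=(2, 3)))           # global average pool, then excite
        return x * w[:, :, None, None]            # re-weight the channels

class PreNormResidualBlock(nn.Module):
    """Residual block with BatchNorm moved before each 3 x 3 convolution,
    Leaky ReLU activations, and SE attention at the end of the block."""
    def __init__(self, channels):
        super().__init__()
        self.body = nn.Sequential(
            nn.BatchNorm2d(channels), nn.LeakyReLU(0.1, inplace=True),
            nn.Conv2d(channels, channels, 3, padding=1, bias=False),
            nn.BatchNorm2d(channels), nn.LeakyReLU(0.1, inplace=True),
            nn.Conv2d(channels, channels, 3, padding=1, bias=False))
        self.se = SEBlock(channels)

    def forward(self, x):
        return x + self.se(self.body(x))          # residual connection

def make_classifier(feat_dim, n_classes):
    """Two fully-connected layers: 512 neurons with swish, then n class logits."""
    return nn.Sequential(
        nn.Linear(feat_dim, 512), nn.SiLU(),
        nn.Linear(512, n_classes))
```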
In step 3, the deep convolutional neural network for finger vein recognition is trained on the training set as follows:
order toA triplet representing the input of a pair of finger vein images,the samples of the anchor points are represented,a positive sample representing the same class as the anchor sample,representing negative samples of a different class than the anchor samples. f (x) i ) Representative feature extraction network extraction sample x i Is shown.
The model training process is shown in FIG. 2. First, the finger vein images are augmented and preprocessed with the transforms of pytorch, including random rotation in $[-30°, 30°]$ and random contrast adjustment in $[0, 2]$. Then each triplet of three image samples $(x_i^a, x_i^p, x_i^n)$ is input to the feature extraction network to obtain a 3 × 512-dimensional feature tensor, and for each triplet of features the intra-class distance $\|f(x_i^a)-f(x_i^p)\|_2^2$ and the inter-class distance $\|f(x_i^a)-f(x_i^n)\|_2^2$ in Euclidean space are calculated. Finally, each finger vein feature representation $f(x_i)$ is input to the classification network to obtain the sample class probability distribution, and the cosine similarity $\cos(f(x), w)$ between the sample feature $f(x)$ and the last-layer weight $w$ of the classification network is calculated as:

$$\cos(f(x),w)=\frac{f(x)\cdot w}{\|f(x)\|_2\,\|w\|_2}$$

At the start of training, the parameters of the convolutional network are initialized with the Xavier initialization method and the training batch size is 64; in each training iteration the loss value of the convolutional network is calculated and the parameters are optimized with Adam based on the joint loss; the maximum number of training epochs is 150, and the weights with the smallest loss are saved.
In particular, the invention also provides a joint loss function, which enhances the feature representation capability of the network and improves the prediction effect. Referring to FIG. 3, the joint loss used in training the finger vein recognition network comprises two parts, $L = L_1 + L_2$, where $L_1$ constrains the feature representation of the samples in Euclidean space and $L_2$ constrains it in cosine space. $L_1$ and $L_2$ are:

$$L_1 = \frac{\alpha}{N}\sum_{i=1}^{N}\max\left(\left\|f(x_i^a)-f(x_i^p)\right\|_2^2-\left\|f(x_i^a)-f(x_i^n)\right\|_2^2+\gamma,\;0\right)$$

$$L_2 = -\frac{\beta}{N}\sum_{i=1}^{N}\log\frac{e^{s\cos(\theta_{y_i}+m)}}{e^{s\cos(\theta_{y_i}+m)}+\sum_{j\neq y_i}e^{s\cos\theta_j}}$$

wherein $\alpha$ and $\beta$ are adjustment factors that scale $L_1$ and $L_2$ so the two losses do not differ too greatly; $\|\cdot\|_2$ denotes the 2-norm of a vector; $f(\cdot)$ denotes the feature extraction function; $x_i$ ($i \in N$) denotes the $i$-th input finger vein image and $N$ the number of samples in a batch; $a$, $p$ and $n$ denote, respectively, the anchor sample, a positive sample of the same class as the anchor, and a negative sample of a different class; and $\gamma$ denotes the margin between the inter-class and intra-class distances. In $L_2$, $\theta_{y_i}$ denotes the angle between the $i$-th sample feature and the network weight vector $w_{y_i}$, $s$ denotes the scaling factor, $m$ the additive angular margin between the weight and the feature, and $\cos\theta_j$ the cosine of the angle between the sample feature and the network weight $w_{y_j}$ of the $j$-th class ($j \neq y_i$).
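With the loss above and the training settings just described, the procedure can be sketched as follows; the feature extraction model, the triplet data loader and the joint loss module are assumed to be defined elsewhere (for instance as in the earlier sketches), and the learning rate and checkpoint path are illustrative assumptions:

```python
import torch
import torch.nn as nn
from torchvision import transforms

# Augmentation applied inside the dataset: random rotation and contrast.
augment = transforms.Compose([
    transforms.RandomRotation(30),                # rotation in [-30°, 30°]
    transforms.ColorJitter(contrast=(0.0, 2.0)),  # contrast in [0, 2]
    transforms.ToTensor()])

def xavier_init(m):
    # Xavier initialization of the convolutional and linear layers.
    if isinstance(m, (nn.Conv2d, nn.Linear)):
        nn.init.xavier_uniform_(m.weight)

def train(model, loader, joint_loss, epochs=150, lr=1e-3):
    model.apply(xavier_init)
    optimizer = torch.optim.Adam(model.parameters(), lr=lr)
    best = float("inf")
    for epoch in range(epochs):
        for anchor, positive, negative, labels in loader:  # triplet batches of 64
            f_a, f_p, f_n = model(anchor), model(positive), model(negative)
            loss = joint_loss(f_a, f_p, f_n, labels)       # L = L1 + L2
            optimizer.zero_grad()
            loss.backward()
            optimizer.step()
            if loss.item() < best:                         # keep lowest-loss weights
                best = loss.item()
                torch.save(model.state_dict(), "best.pth")
```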
In step 4, the identification threshold is the decision boundary that separates similar from dissimilar finger vein image features; it is obtained through the following steps:

S4-1, pair the samples of the verification set divided in step 1 two by two, divide the pairs into same-class sample pairs and different-class sample pairs, and label them 1 and 0 respectively. (Whereas the training-stage triplets contain three samples each, the verification set uses pairwise pairing, and whether a pair belongs to the same class is judged from the similarity between the two samples.)
s4-2, after the convolutional network training is finished in the step 3, removing the classification network, reserving the feature extraction network, and loading the weight of the network by using the pytorech;
step S4-3, generating [ th ] by utilizing a trilateral library numpy l ,th r ]Threshold array { th) with 100 equal lengths in interval l ,th l+1 ,…,th r };
S4-4, input the verification-set sample pairs into the feature extraction network to obtain the two feature representation vectors $v_i$ and $v_j$ of each pair. Since the vectors are L2-regularized, their moduli are 1 and their inner product equals the cosine similarity. Compute the cosine similarity $\cos(v_i, v_j)$ of the feature representation vectors and traverse each threshold $th_i$ in the threshold array: if $\cos(v_i, v_j) > th_i$, output 1, meaning the network predicts that $v_i$ and $v_j$ belong to the same class; otherwise output 0, meaning they do not.

S4-5, tally the model predictions against the actual labels and evaluate the performance of the feature extraction network through the false rejection rate (FRR) and the false acceptance rate (FAR), calculated as:

$$FRR=\frac{FN}{TP+FN},\qquad FAR=\frac{FP}{FP+TN}$$

where, with respect to the data labels of S4-1, TP counts pairs whose actual label is 1 and whose model prediction is 1; FN counts pairs whose actual label is 1 and whose prediction is 0; FP counts pairs whose actual label is 0 and whose prediction is 1; and TN counts pairs whose actual label is 0 and whose prediction is 0.
S4-6, traverse all elements in the threshold array to obtain the corresponding FAR and FRR, and select the threshold at which FAR equals FRR as the optimal recognition threshold T.
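A numpy sketch of steps S4-3 to S4-6 follows; the interval bounds th_l and th_r and the function name best_threshold are assumptions:

```python
import numpy as np

def best_threshold(sims, labels, th_l=-1.0, th_r=1.0):
    """sims: cosine similarities of verification pairs; labels: 1 same, 0 different."""
    best_t, best_gap = th_l, np.inf
    for t in np.linspace(th_l, th_r, 100):     # 100 evenly spaced thresholds
        pred = sims > t
        frr = np.mean(pred[labels == 1] == 0)  # FN / (TP + FN)
        far = np.mean(pred[labels == 0] == 1)  # FP / (FP + TN)
        if abs(far - frr) < best_gap:          # closest point to FAR == FRR
            best_gap, best_t = abs(far - frr), t
    return best_t
```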
In step 5, the prediction result for an image to be judged is output based on the finger vein feature extraction network:

S5-1, divide the finger vein images of the test set into a registration set and a query set; for each registered finger vein image, obtain the feature representation $v_r$ of the finger vein through the finger vein feature extraction network and store it as a template in the feature database;

S5-2, after registration is completed, enter the query stage and take out the feature matrix M of the feature database;

S5-3, obtain the feature representation $v_u$ of a finger vein image to be judged in the query set through the feature extraction network and, via the python broadcasting mechanism, expand $v_u$ to the same dimensions as the feature matrix M, denoted V;

S5-4, compute the cosine similarity between the transpose $V^T$ of the feature representations to be judged and the feature matrix M, set the diagonal elements to 0, and compare the maximum similarity θ in the matrix with the optimal recognition threshold T to judge whether the sample to be judged is registered: if the maximum similarity θ is greater than the recognition threshold T, the sample is registered, and the corresponding information is looked up by the index at which it lies in the feature matrix.
While the invention has been described with reference to specific embodiments, any feature disclosed in this specification may be replaced by alternative features serving the same, equivalent or similar purpose, unless expressly stated otherwise; all of the disclosed features, or all of the method or process steps, may be combined in any combination, except mutually exclusive features and/or steps.
Claims (6)
1. A finger vein identification method based on joint loss and a convolutional neural network is characterized by comprising the following steps:
step 1, collecting a finger vein image, preprocessing the finger vein image to obtain a finger vein sample, labeling a classification label, and establishing a finger vein data set; the vein data in the vein data set are in a triple form and comprise an anchor point sample, a positive sample of the same category as the anchor point sample and a negative sample of different category from the anchor point sample;
step 2, constructing and training a deep convolution neural network for finger vein recognition:
the deep convolutional neural network comprises a feature extraction network and a classification network; the characteristic extraction network is used for extracting vein characteristic vectors from the finger vein samples and inputting the vein characteristic vectors into the classification network; the classification network is used for receiving the vein feature vector and outputting a finger vein identification result;
the feature extraction network is optimized with a joint loss during training; the joint loss function is $L = L_1 + L_2$, where $L_1$ constrains the feature representation of the samples in Euclidean space and $L_2$ constrains it in cosine space:

$$L_1 = \frac{\alpha}{N}\sum_{i=1}^{N}\max\left(\left\|f(x_i^a)-f(x_i^p)\right\|_2^2-\left\|f(x_i^a)-f(x_i^n)\right\|_2^2+\gamma,\;0\right)$$

$$L_2 = -\frac{\beta}{N}\sum_{i=1}^{N}\log\frac{e^{s\cos(\theta_{y_i}+m)}}{e^{s\cos(\theta_{y_i}+m)}+\sum_{j\neq y_i}e^{s\cos\theta_j}}$$

wherein $\alpha$ and $\beta$ are adjustment factors that scale $L_1$ and $L_2$ respectively so that the two losses do not differ too greatly; $\|\cdot\|_2$ denotes the 2-norm of a vector; $f(\cdot)$ denotes the feature extraction function; $x_i$ ($i \in N$) denotes the $i$-th input finger vein image and $N$ the number of samples in one batch; $x_i^a$, $x_i^p$ and $x_i^n$ denote, respectively, the anchor sample $a$ of the $i$-th input finger vein image, a positive sample $p$ of the same class as the anchor, and a negative sample $n$ of a different class; $\gamma$ denotes the margin between the inter-class and intra-class distances; in $L_2$, $\theta_{y_i}$ denotes the angle between the $i$-th sample feature and the network weight vector $w_{y_i}$, $s$ denotes the scaling factor, $m$ the additive angular margin between the weight and the feature, and $\theta_j$ the angle between the weight $w_{y_j}$ and the $j$-th sample vector ($i \neq j$), $i$ and $j$ being sample serial numbers;
in the training stage, the classification network predicts the class of each sample, and the weights of the feature extraction network and the classification network are updated by back-propagating the error between the predicted value and the true value;
step 3, calculating an identification threshold value:
after the deep convolutional neural network is trained, the classification network is removed and the feature extraction network retained; feature representations are extracted on the finger vein data set with the feature extraction network, the distance between an extracted feature and the corresponding vein sample feature in the data set is measured by cosine similarity and compared with a given threshold to judge whether the two features are similar; and the value of the identification threshold is determined from these similarity judgments: specifically, the false rejection rate FRR and the false acceptance rate FAR obtained from the similarity judgments are used to evaluate the feature extraction network and to determine the value of the identification threshold.
And 4, outputting a prediction result of the image to be distinguished based on the finger vein feature extraction network:
for a registered finger vein image, acquiring feature representation of the registered finger vein based on a finger vein feature extraction network, and storing the feature representation as a registration template in a feature database;
and acquiring the feature representation of the finger vein to be judged through the feature extraction network for the inquired finger vein image to be judged, calculating the cosine similarity between the feature representation and a template in the feature database, comparing the cosine similarity with an identification threshold, judging whether the finger vein is registered or not, and finishing identity identification.
2. The method of claim 1, wherein the image pre-processing comprises region of interest ROI extraction and image size normalization of the finger vein image.
3. The method as claimed in claim 2, wherein the region of interest ROI extraction of the finger vein image uses the sobel operator to obtain the finger boundary, fits the central axis of the finger by least squares to rotationally correct the finger, locates the knuckles from the finger joint cavities, and crops the image region of interest ROI at the knuckles.
4. The method as claimed in claim 3, wherein the specific method for obtaining the finger boundary by using the sobel operator is as follows:
firstly, coarsely filtering the background information of the finger vein image:

$$G_x=\begin{bmatrix}-1&0&+1\\-2&0&+2\\-1&0&+1\end{bmatrix}*A,\qquad G_y=\begin{bmatrix}-1&-2&-1\\0&0&0\\+1&+2&+1\end{bmatrix}*A$$

wherein $A$ denotes the original image, $G_x$ the gradient of the finger vein image in the X direction, and $G_y$ the gradient of the finger vein image in the Y direction;

then calculating the connected components of the non-zero pixels in the coarsely filtered finger vein image with a union-find (disjoint-set) structure, and setting connected components containing fewer than 5 pixels to zero, obtaining a binary image of the finger boundary.
5. The method of claim 4, wherein the least squares based fit of the axes of the finger to the rotational correction of the finger is performed by:
according to the upper and lower finger boundary coordinates $y^{up}$ and $y^{down}$ on the binary image, calculating the coordinates of the central axis of the finger $y^{mid}=(y^{up}+y^{down})/2$ and fitting the central axis of the finger by least squares;

calculating the finger rotation angle $\theta$ from the fitted finger central axis, and rotationally correcting the finger vein image by the finger rotation angle $\theta$ to obtain the corrected image $G'$, the finger rotation angle being

$$\theta=\arctan\frac{y_2-y_1}{x_2-x_1}$$

wherein $y_1$, $y_2$ are the ordinates of any two points on the finger central axis and $x_1$, $x_2$ the corresponding abscissas.
6. The method of claim 1, wherein the feature extraction network uses a modified ResNet34 structure: the 7 × 7 convolution kernel in ResNet34 is replaced by 3 × 3 convolution kernels; the batch normalization module BatchNorm in the residual module is moved to before the convolution operation; the ReLU activation function is replaced with a Leaky ReLU function; a channel attention module is introduced before the output of the residual module; and L2 regularization is applied to the final output features of the network.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202211053355.8A CN115797987A (en) | 2022-08-31 | 2022-08-31 | Finger vein identification method based on joint loss and convolutional neural network |
Publications (1)
Publication Number | Publication Date |
---|---|
CN115797987A true CN115797987A (en) | 2023-03-14 |
Family
ID=85431652
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202211053355.8A Pending CN115797987A (en) | 2022-08-31 | 2022-08-31 | Finger vein identification method based on joint loss and convolutional neural network |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN115797987A (en) |
Cited By (2)

Publication number | Priority date | Publication date | Assignee | Title
---|---|---|---|---
CN117496562A (en) * | 2024-01-02 | 2024-02-02 | 深圳大学 | Finger vein recognition method and device based on FV-MViT and related medium
CN117496562B (en) * | 2024-01-02 | 2024-03-29 | 深圳大学 | Finger vein recognition method and device based on FV-MViT and related medium
Legal Events

Date | Code | Title | Description
---|---|---|---
| PB01 | Publication | |
| SE01 | Entry into force of request for substantive examination | |