CN110555380A - Finger vein identification method based on Center Loss function - Google Patents
- Publication number
- CN110555380A (application CN201910694163.7A)
- Authority
- CN
- China
- Prior art keywords
- finger vein
- finger
- image
- loss function
- network model
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/21—Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
- G06F18/214—Generating training patterns; Bootstrap methods, e.g. bagging or boosting
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/24—Classification techniques
- G06F18/241—Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/20—Image preprocessing
- G06V10/25—Determination of region of interest [ROI] or a volume of interest [VOI]
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/40—Extraction of image or video features
- G06V10/44—Local feature extraction by analysis of parts of the pattern, e.g. by detecting edges, contours, loops, corners, strokes or intersections; Connectivity analysis, e.g. of connected components
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data
- G06V40/10—Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data
- G06V40/10—Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
- G06V40/14—Vascular patterns
Abstract
The invention discloses a finger vein recognition method based on the Center Loss function. The method comprises the following steps: collecting finger vein images; preprocessing each finger vein image, including rotation correction of the finger and ROI extraction; introducing the Center Loss function into a Resnet network and using it together with the Softmax Loss function as a joint supervision signal to extract finger vein features, so that the network learns discriminative features; and modifying network parameters to reduce the dimension of the feature vectors, which reduces computation and storage. The dimension-reduced feature vector is stored in a finger vein database as the registration template of the finger vein, and the finger vein image to be identified is retrieved in the finger vein database to obtain a matching result. In the embodiment of the invention, the neural network learns discriminative features, so unknown finger vein classes can be identified; the feature dimension is reduced, which shrinks the template storage space and speeds up matching.
Description
Technical Field
The invention relates to the fields of biological feature recognition technology, image recognition and deep learning, in particular to a finger vein recognition method based on a Center Loss function.
Background
The rapid development of information technology has brought great convenience to people's lives, but information security problems have become increasingly prominent. Human biological characteristics are difficult to copy and cannot be lost, so they offer high stability and security. In recent years, biometric identification has been widely used in identity authentication and information security. The main biological characteristics include fingerprints, faces, irises, and finger veins. Iris recognition has a high acquisition cost, its equipment is difficult to miniaturize, and the instrument shines directly into the user's eyes, so the user experience is poor and acquisition is cumbersome. Face recognition is easy to acquire, but it cannot distinguish identical twins, facial features are unstable, and the technique can be spoofed by makeup, plastic surgery and the like, which lowers accuracy. Fingerprint recognition works by touch and is demanding on the environment: a moist or worn finger may fail to be recognized, and fingerprint traces are easily left behind and can be copied, which reduces security.
Vein recognition is an emerging biometric technology. In biomedicine, near-infrared light in the 700 nm to 900 nm band can penetrate the skin of a finger and is absorbed by the hemoglobin in venous blood, so less near-infrared light passes through the vein regions than through the muscle, bone and other tissues of the finger, forming an image of the finger vein blood vessels. Medical research has shown that the vein pattern of each finger is unique, even between identical twins and between the different fingers of one person, and that each person's vein features remain stable after adulthood, so they can uniquely identify a person. Finger veins are captured in a non-contact way, acquisition is convenient, equipment cost is low, and user acceptance is high; since the veins are hidden inside the finger and recognition is based on a living body, there is little possibility of leakage or forgery.
Finger veins thus offer high security, high accuracy, uniqueness, non-contact acquisition, and small template size. In recent years, finger vein recognition has become a research hotspot in the biometrics field. A finger vein recognition system typically works through the following steps: ROI extraction from the finger vein image, image enhancement, feature extraction, and matching. ROI extraction removes background interference and noise from the finger vein image and keeps the region with clear vein features. The image enhancement algorithm strengthens the vein lines, improves the contrast of the veins against the background, and reduces the influence of noise. The feature extraction algorithm expresses the image as a feature vector, using either a traditional algorithm or a deep learning method. The matching algorithm measures the similarity of two feature vectors and judges whether the two finger veins belong to the same person.
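The four-stage pipeline just described can be sketched as a simple function composition. The helper implementations below are illustrative placeholders (a fixed crop, contrast stretching, a flattened feature), not the patent's algorithms; only the overall structure is taken from the text.

```python
import numpy as np

def extract_roi(image):
    """Placeholder ROI step: crop the central region of the image."""
    h, w = image.shape
    return image[h // 4: 3 * h // 4, w // 4: 3 * w // 4]

def enhance(roi):
    """Placeholder enhancement step: stretch contrast to [0, 1]."""
    lo, hi = roi.min(), roi.max()
    return (roi - lo) / max(hi - lo, 1e-8)

def extract_features(roi):
    """Placeholder feature step: flatten to a unit vector.
    A real system (and the patent) would use a CNN here."""
    v = roi.ravel()
    return v / (np.linalg.norm(v) + 1e-8)

def match(f1, f2, threshold=0.8):
    """Matching step: Euclidean distance between unit feature vectors."""
    return float(np.linalg.norm(f1 - f2)) < threshold

img = np.random.default_rng(0).random((64, 128))
feat = extract_features(enhance(extract_roi(img)))
```

A query is then accepted when its feature vector lies within the distance threshold of a registered template; `match(feat, feat)` is trivially true.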
The problems still to be solved are: the quality of images collected by finger vein acquisition devices is generally low, and the finger vein features extracted by traditional algorithms are unstable and prone to pseudo veins.
Disclosure of Invention
The invention aims to overcome the defects of the prior art, and provides a finger vein recognition method based on a Center Loss function.
The invention provides a finger vein identification method based on a Center Loss function, which comprises the following specific scheme:
A. Connecting a finger vein acquisition device to acquire a finger vein image;
B. Carrying out rotation correction on the finger vein image, determining a region of interest (ROI), and extracting an ROI image;
C. Adopting a Resnet network model to extract a feature vector from the ROI image, taking a joint supervision signal as the loss function, and optimizing the network model parameters to obtain a trained parameter file;
D. Loading the Resnet network model, reading the trained parameter file, inputting the ROI image obtained in step B into the Resnet network model to obtain the feature vector corresponding to each static finger vein image, and normalizing it into a unit feature vector. The purpose of the normalization is to limit the distance between any two feature vectors to a specific range: the maximum distance between two unit feature vectors is 2 and the minimum distance is 0;
E. Using the unit feature vector obtained in step D as the registration template of the finger vein, storing the registration template in the finger vein database, and retrieving and identifying the finger vein image to be identified based on the Euclidean distance.
Further, step A specifically comprises:
A1, connecting the finger vein collector to a client and installing the collector's driver on the client;
A2, collecting finger vein images with the finger vein collector according to the instructions of the client interface;
A3, taking the knuckle direction as the x-axis direction and the fingertip direction as the positive y-axis direction.
Further, step B specifically comprises:
B1, performing Gaussian denoising on the static finger vein image acquired in step A to remove noise interference;
B2, performing edge detection on the denoised finger vein image, using a Sobel operator to compute the gradient in the x-axis direction to obtain an edge-detection gray-level image, removing noise by binarization, and extracting the finger contour lines;
B3, thinning the finger contour lines with the Hilditch algorithm to obtain thinned contour lines;
B4, since the thinned image contains a number of interfering straight lines besides the contour lines, removing these interfering lines to obtain single-pixel finger contour lines;
B5, fitting a midline through the single-pixel finger contour lines, the included angle α between the midline and the vertical direction being the rotation-correction angle;
B6, rotating the single-pixel finger contour lines by the angle α, and taking the width between the vertical internal tangents of the contour lines as the maximum width W for segmenting the finger vein image;
B7, rotating the finger vein image by the angle α and segmenting it with the vertical internal tangents of the contour lines to obtain an internal-tangent segmentation image;
B8, taking the position of the peak in the per-column pixel gray-value distribution curve of the internal-tangent segmentation image as the position of the transverse tangent, determining the region of interest (ROI), and extracting the ROI image from the internal-tangent segmentation image.
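The rotation-correction angle of step B5 can be sketched as a least-squares line fit through the midline of the two contours. The synthetic contour data and function names below are illustrative; the patent does not specify the fitting method, so an ordinary `numpy.polyfit` line fit is assumed.

```python
import numpy as np

def correction_angle(left, right):
    """Fit the midline between the left and right contour columns
    (step B5) and return its angle to the vertical axis in degrees."""
    mid = (np.asarray(left, float) + np.asarray(right, float)) / 2.0
    rows = np.arange(len(mid))
    slope = np.polyfit(rows, mid, 1)[0]   # d(column)/d(row)
    return float(np.degrees(np.arctan(slope)))  # 0 means already vertical

# A synthetic finger tilted by a constant horizontal drift of 0.2 px/row.
rows = np.arange(100)
left = 20 + 0.2 * rows     # left contour column at each row
right = 60 + 0.2 * rows    # right contour column at each row
alpha = correction_angle(left, right)   # about 11.3 degrees
```

Rotating the image by −α then makes the fitted midline vertical, after which the vertical internal tangents of step B6 can be read off directly as column extremes.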
Further, the step C specifically includes:
C1, establishing a Resnet network model and initializing the network parameters; adding a fully connected layer before the final fully connected layer of the Resnet network model;
C2, fusing the Softmax Loss function and the Center Loss function as the loss function of the Resnet network model, the Center Loss function aggregating intra-class features and the Softmax Loss function separating inter-class features; using the feature vector output by the fully connected layer added in step C1 to update the Center Loss function, and introducing a hyper-parameter λ to balance the two loss functions;
C3, inputting the ROI image extracted in step B into the Resnet network model with the improved loss function; since the feature vector used for matching is the output of the penultimate fully connected layer, modifying the parameters of that layer to reduce its dimension, which shrinks the storage space of the feature vectors and the matching time, and obtaining a trained Resnet network model parameter file.
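In the standard center-loss formulation from the literature, which the joint supervision signal of step C2 appears to follow, the total loss with balance weight λ is

```latex
\mathcal{L} \;=\; \mathcal{L}_{S} + \lambda\,\mathcal{L}_{C}
\;=\; -\sum_{i=1}^{m} \log \frac{e^{W_{y_i}^{T} x_i + b_{y_i}}}{\sum_{j=1}^{n} e^{W_{j}^{T} x_i + b_{j}}}
\;+\; \frac{\lambda}{2} \sum_{i=1}^{m} \bigl\lVert x_i - c_{y_i} \bigr\rVert_2^{2}
```

where x_i is the feature vector of the i-th sample in the batch, y_i its class label, c_{y_i} the learned center of that class, and m and n the batch size and number of classes. The first term separates classes; the second pulls each feature toward its class center.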
Further, step E specifically comprises:
E1, collecting a finger vein image to be identified, extracting the ROI image to be identified according to step B, and inputting it into the Resnet network model of step D to obtain the unit feature vector to be matched;
E2, in the identification stage, sequentially computing the Euclidean distance between the unit feature vector to be matched extracted in step E1 and each registered unit feature vector in the finger vein database; if the Euclidean distance between the unit feature vector to be matched and a registered unit feature vector is smaller than the threshold, judging the match to be successful;
E3, if the unit feature vector to be matched is not stored in the finger vein database, entering the registration stage, writing the unit feature vector obtained in step E1 into the finger vein database, and repeating step E2.
Compared with the prior art, the invention has the following beneficial effects. First, the invention provides a method for correcting the finger vein image and extracting the ROI, which eliminates the interference caused by finger rotation and extracts the region with clear veins. Second, finger vein features are extracted with a convolutional neural network, using the Center Loss function and the Softmax Loss function as a joint supervision signal: the Softmax Loss function separates features between classes, and the Center Loss function aggregates features within a class, so the feature vectors the network learns are highly discriminative and valid judgments can be made even for unknown classes. Without reducing model accuracy, the parameters of the fully connected layer are modified to reduce the output dimension of the feature vector, which shrinks the storage space of the feature templates and the computation time during matching. The feature vectors are normalized into unit vectors, which limits the range of template-matching distances and simplifies threshold selection. The method is therefore a technical breakthrough over traditional finger vein identification methods.
Drawings
FIG. 1 is a diagram of the steps of the method;
FIG. 2 is a flowchart of vein correction and ROI extraction;
FIG. 3 is a schematic view of a vein correction;
FIG. 4 is a schematic diagram of segmentation using the vertical internal tangents of the finger contour;
FIG. 5 is a schematic diagram of segmentation using the transverse tangent of the finger vein;
FIG. 6 is a diagram of the network architecture;
FIG. 7 is a flowchart of vein matching.
Detailed Description
The invention is described in detail below with reference to the drawings and specific embodiments, but the invention is not limited thereto.
Referring to fig. 1, the method comprises the following implementation steps:
A. Connecting the finger vein acquisition equipment to acquire finger vein images
Connect the client to the finger vein acquisition equipment and install the driver the equipment requires on the client. Place the finger naturally at the position the acquisition equipment requires and wait for the equipment to capture the finger vein image.
B. Performing rotation correction and ROI extraction on the finger vein image to obtain the region of interest
As shown in the flowchart of finger vein correction and ROI extraction in fig. 2 and the correction schematic in fig. 3, the acquired finger vein image of fig. 3(a) is first Gaussian-denoised to remove noise interference, giving the Gaussian-filtered image of fig. 3(b). Edge detection is then performed: because the gradient changes sharply in the horizontal direction of the finger vein image, the Sobel operator is used to compute the gradient in the x-axis direction, yielding the edge-detection gray-level image of fig. 3(c); binarization removes noise and extracts the finger contour lines, giving fig. 3(d). The contour is then thinned: a skeleton is extracted from the binary image with the Hilditch algorithm, so all lines in the image become single pixels, as shown in fig. 3(e). Besides the contour lines, the thinned image contains a number of interfering straight lines whose influence must be eliminated so that only the finger contour lines remain. The vein-line regions between the finger contours have gentle gradient changes, so after gradient detection they become background (regions whose pixels are 0). Exploiting this property, each row of the vein image is scanned from the middle outward to the left and right; the first pixel encountered with value 255 is a contour pixel. Traversing all rows and keeping these pixels yields the finger contour lines with the interfering lines removed, as shown in fig. 3(f). Further, as shown in fig. 3(g), a midline is fitted through the single-pixel finger contour lines, the included angle α between the midline and the vertical direction is taken as the rotation-correction angle, and the denoised gray-level image is corrected by α, giving fig. 3(h).
The width W of the ROI region is obtained using the vertical internal tangents of the finger contour, as shown in fig. 4. First, the thinned image of fig. 4(a), with the unnecessary lines removed, is rotation-corrected by the angle α to obtain the vertical internal tangents of the contour in fig. 4(b); the distance between the two internal tangents is taken as the maximum width W of the ROI region, as shown in fig. 4(c). The ROI image is then extracted, as shown in fig. 5: the synovial fluid in the finger joint has a much lower density than the phalanges, so more infrared light penetrates the joint than other parts of the finger, and after infrared imaging the joint region appears brighter, as shown in fig. 5(a). The highlighted part of fig. 5(a) is cut with the vertical internal tangents of the contour to obtain fig. 5(b), and the position of the peak in the distribution curve of the per-column pixel gray-value sums is taken as the position h of the transverse tangent of the finger vein. Cutting fig. 5(b) at position h yields the ROI, fig. 5(d).
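Locating the transverse tangent from the gray-value peak can be sketched in a few lines. The function name and synthetic image are illustrative; the sums follow the patent's per-column description, so swap the axis if the finger is oriented the other way in your images.

```python
import numpy as np

def transverse_tangent_position(img):
    """Return the index h where the summed gray values peak,
    i.e. the bright knuckle band used as the transverse tangent
    (step B8 / fig. 5). Sums are per column, as described above."""
    sums = img.sum(axis=0)          # one gray-value sum per column
    return int(np.argmax(sums))

# Synthetic image: uniform background with a bright band at columns 38-42.
img = np.full((60, 100), 50.0)
img[:, 38:43] = 220.0               # the brighter knuckle region
h = transverse_tangent_position(img)
```

Cutting the internal-tangent segmentation image at `h` then removes the bright joint region and leaves the ROI with clear vein texture.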
C. Extracting finger vein features by adopting Resnet as a network model and joint supervision signals as loss functions
Simple neural network models cannot satisfy increasingly complex recognition tasks, which require deep networks that learn higher-level features. But as a plain network deepens, its accuracy on the training set eventually drops. This phenomenon is not caused by overfitting: as network depth increases, accuracy saturates and then degrades rapidly. To address this degradation problem, the Resnet network model introduces a residual structure. Resnet distinguishes two mappings: the identity mapping x (the curved shortcut in the figure) and the residual mapping F(x) = H(x) − x (the part outside the shortcut), so the desired original mapping is re-expressed as H(x) = F(x) + x. This can be implemented by a feed-forward neural network with shortcut connections, i.e. connections that skip one or more layers. These two mappings give the network a choice that resolves the drop in accuracy with depth: once the network has reached its best accuracy, further layers can set the residual mapping to 0, leaving only the identity mapping, so in theory the network stays in its optimal state and accuracy no longer falls as depth grows. Because the finger vein features extracted by traditional algorithms are unstable and prone to pseudo veins, a neural network is used to extract features from the image. In finger vein identification it is impractical to collect all possible samples in advance; a classical CNN classifier can only judge among existing classes and cannot identify a new class, so metric learning is adopted.
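The key identity H(x) = F(x) + x can be illustrated with a toy residual unit. This is a minimal NumPy sketch, not the Resnet18 block used by the patent: F is just two weight matrices with a ReLU in between, and shapes are kept equal so the shortcut needs no projection.

```python
import numpy as np

def residual_block(x, w1, w2):
    """Toy residual unit: returns H(x) = F(x) + x, where
    F(x) = ReLU(x @ w1) @ w2 is the residual mapping and the
    addition of x is the identity shortcut."""
    f = np.maximum(0, x @ w1) @ w2   # residual mapping F(x)
    return f + x                     # shortcut adds the identity

x = np.ones((1, 4))
# With zero weights the residual mapping vanishes and the block
# reduces to the identity -- the property the text relies on when
# it says extra layers can "set the residual mapping to 0".
out = residual_block(x, np.zeros((4, 4)), np.zeros((4, 4)))
```

This is why depth cannot hurt in principle: a deeper network can always fall back to copying its input through the shortcut.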
To reduce the intra-class distance of the feature vectors, the Center Loss function is introduced to enhance the discriminative power of the features learned by the neural network. The Center Loss function lets the network learn a feature-vector center for each class. The centers are updated in every batch during training, and the distance between each center and the feature vectors of its class is minimized, reducing the intra-class distance. The network is trained with Softmax Loss and Center Loss as a joint supervision signal: Softmax Loss separates the classes, Center Loss aggregates each class, and together they strengthen the discriminative power of the feature vectors. Resnet18 is used as the base network, a pretrained Resnet18 model initializes the network parameters, and the Center Loss and Softmax Loss functions jointly supervise training. To update the Center Loss function, a fully connected layer is added before the network's final fully connected layer, and the feature vector output by this added layer is used to update the Center Loss. The ROI finger vein image extracted in step B is input into the neural network, trained in the same way as a classification network, and the output of the network's penultimate fully connected layer is taken as the finger vein feature vector for distance measurement; the structure is shown in fig. 6.
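The Center Loss value and the per-batch center update described above can be sketched in NumPy. The function names, the update rate `alpha`, and the toy batch are illustrative assumptions; the patent does not give the update rule, so the moving-average update from the center-loss literature is assumed.

```python
import numpy as np

def center_loss(features, labels, centers):
    """L_C = 1/2 * sum_i ||x_i - c_{y_i}||^2 over the batch."""
    diffs = features - centers[labels]
    return 0.5 * float(np.sum(diffs ** 2))

def update_centers(features, labels, centers, alpha=0.5):
    """Move each class center toward the mean of that class's
    batch features at rate alpha (the per-batch update)."""
    new = centers.copy()
    for c in np.unique(labels):
        batch_mean = features[labels == c].mean(axis=0)
        new[c] += alpha * (batch_mean - new[c])
    return new

# Toy batch: two samples of class 0, centers initialized at zero.
feats = np.array([[1.0, 0.0], [0.0, 1.0]])
labels = np.array([0, 0])
centers = np.zeros((2, 2))
loss_before = center_loss(feats, labels, centers)
centers = update_centers(feats, labels, centers)
loss_after = center_loss(feats, labels, centers)
```

After one update the center has moved toward the batch mean and the intra-class term shrinks, which is exactly the aggregation effect the joint supervision relies on.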
D. Modifying the parameters of the penultimate fully connected layer to reduce the dimension, and normalizing the feature vector
To avoid the curse of dimensionality when extracting finger vein features from the trained ResNet model, the output feature vectors must be reduced in dimension. Since the feature vector used for matching is the output of the network's penultimate fully connected layer, the parameters of that layer are modified for dimension reduction, shrinking the storage space of the feature vectors and the matching time. Repeated experimental comparison showed that the model's accuracy is highest when the network's output dimension is set to 256. Finger veins are matched by Euclidean distance; to limit the distance between any two vectors to a specific range, the dimension-reduced feature vectors are normalized so that the feature vector of each image becomes a unit vector. The maximum distance between any two unit vectors is 2 and the minimum is 0, so the matching threshold can be chosen in the range [0, 2].
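The normalization step and the [0, 2] distance bound can be checked directly. This is a minimal sketch; the 256 dimension matches the text, while the random vectors are only illustrative.

```python
import numpy as np

def to_unit(v):
    """Normalize a feature vector to unit length (step D)."""
    return v / np.linalg.norm(v)

rng = np.random.default_rng(1)
a = to_unit(rng.normal(size=256))
b = to_unit(rng.normal(size=256))
d = float(np.linalg.norm(a - b))           # always within [0, 2]
d_max = float(np.linalg.norm(a - (-a)))    # opposite unit vectors: 2
```

Identical unit vectors give distance 0 and diametrically opposite ones give 2, so every matching threshold lives in [0, 2] regardless of the raw feature scale.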
E. Searching and identifying the feature vector after dimension reduction in a finger vein database based on Euclidean distance
As shown in fig. 7, retrieving and identifying an image in the finger vein database comprises acquisition, feature-vector extraction, registration into the database, and matching. First the finger vein image is acquired in step A, then processed by the algorithms of steps B–D to extract the unit feature vector. In the registration stage, this unit feature vector is written into the database. In the recognition stage, the registered vein feature vectors are taken out of the database in sequence, the Euclidean distance between each of them and the feature vector to be identified is computed, and the registered vector with the highest similarity (smallest distance) is output as the identification result.
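The recognition-stage search just described can be sketched as a linear scan over the database. The dictionary layout, names, and threshold value 0.8 are illustrative assumptions, not values from the patent.

```python
import numpy as np

def unit(v):
    """Normalize a feature vector to unit length."""
    return v / np.linalg.norm(v)

def identify(query, database, threshold=0.8):
    """Return the key of the closest registered template, or None
    if even the best match exceeds the Euclidean threshold."""
    best_key, best_dist = None, float("inf")
    for key, template in database.items():
        dist = float(np.linalg.norm(query - template))
        if dist < best_dist:
            best_key, best_dist = key, dist
    return best_key if best_dist < threshold else None

# Toy database of two registered unit feature vectors.
db = {"alice": unit(np.array([1.0, 0.0])),
      "bob": unit(np.array([0.0, 1.0]))}
who = identify(unit(np.array([0.9, 0.1])), db)
```

A query near a registered template returns its key; a query far from every template falls outside the threshold and triggers the registration stage of step E3 instead.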
Experimental results:
Experiment 1: the FV-USM data set was preprocessed with the rotation-correction and ROI-extraction method above and fed into the Resnet model for training, with Softmax Loss and Center Loss as the loss function; the accuracy was 97.74% and the AUC 99.71%.
Experiment 2: the unprocessed FV-USM data set was fed into the same network for training; the accuracy was 92.34% and the AUC 97.54%.
Experiment 3: the preprocessed data set was fed into the Resnet model for training with Softmax Loss alone; the accuracy was 97.28% and the AUC 99.58%.
Comparing experiment 1 with experiment 2, preprocessing the FV-USM data set improves accuracy by 5.40% and AUC by 1.17%, verifying the effectiveness of the rotation-correction and ROI-extraction algorithm. Comparing experiment 1 with experiment 3, introducing the Center Loss function improves accuracy on FV-USM by 0.46% and AUC by 0.13%, verifying that the Center Loss function improves the discriminability of the finger vein features.
Claims (5)
1. A finger vein identification method based on a Center Loss function is characterized by comprising the following steps:
A. Connecting a finger vein acquisition device to acquire a finger vein image;
B. Performing rotation correction on the finger vein image, determining a region of interest (ROI), and extracting an ROI image;
C. Adopting a Resnet network model for extracting a characteristic vector of the ROI image, taking a joint supervision signal as a loss function, and optimizing network model parameters to obtain a trained parameter file;
D. Loading the Resnet network model, reading the trained parameter file, inputting the ROI image obtained in step B into the Resnet network model to obtain the feature vector corresponding to each static finger vein image, and normalizing it into a unit feature vector; taking the unit feature vector as the registration template of the finger vein and storing it in the finger vein database to complete registration, the i-th stored unit feature vector corresponding to the i-th finger vein image;
E. Retrieving and identifying the finger vein image to be identified based on the Euclidean distance.
2. The method of claim 1, wherein step A specifically comprises:
A1, connecting the finger vein collector to a client and installing the collector's driver on the client;
A2, collecting finger vein images with the finger vein collector according to the instructions of the client interface;
A3, taking the knuckle direction as the x-axis direction and the fingertip direction as the positive y-axis direction.
3. The method of claim 1, wherein step B specifically comprises:
B1, performing Gaussian denoising on the static finger vein image acquired in step A to remove noise interference;
B2, performing edge detection on the denoised finger vein image, using a Sobel operator to compute the gradient in the x-axis direction to obtain an edge-detection gray-level image, removing noise by binarization, and extracting the finger contour lines;
B3, thinning the finger contour lines with the Hilditch algorithm to obtain thinned contour lines;
B4, removing the interfering straight lines from the thinned finger contour lines to obtain single-pixel finger contour lines;
B5, fitting a midline through the single-pixel finger contour lines, the included angle α between the midline and the vertical direction being the rotation-correction angle;
B6, rotating the single-pixel finger contour lines by the angle α, and taking the width between the vertical internal tangents of the contour lines as the maximum width W for segmenting the finger vein image;
B7, rotating the finger vein image by the angle α and segmenting it with the vertical internal tangents of the contour lines to obtain an internal-tangent segmentation image;
B8, taking the position of the peak in the per-column pixel gray-value distribution curve of the internal-tangent segmentation image as the position of the transverse tangent, determining the region of interest (ROI), and extracting the ROI image from the internal-tangent segmentation image.
4. The method of claim 1, wherein step C specifically comprises:
C1, establishing a Resnet network model and initializing the network parameters; adding a fully connected layer before the final fully connected layer of the Resnet network model;
C2, fusing the Softmax Loss function and the Center Loss function as the loss function of the Resnet network model; using the feature vector output by the fully connected layer added in step C1 to update the Center Loss function, and introducing a hyper-parameter λ to balance the two loss functions;
C3, inputting the ROI image extracted in step B into the Resnet network model with the improved loss function, and optimizing the network model parameters to obtain a trained parameter file.
5. The finger vein recognition method based on the Center Loss function according to claim 1, wherein the step E specifically comprises:
E1, collecting a finger vein image to be identified, extracting the ROI image to be identified according to step B, and inputting the ROI image into the Resnet network model of step D to obtain the unit feature vector to be matched;
E2, in the identification stage, sequentially calculating the Euclidean distance between the unit feature vector to be matched extracted in step E1 and each registered unit feature vector in the finger vein database; if the Euclidean distance between the unit feature vector to be matched and a registered unit feature vector is less than the threshold value, judging the matching to be successful;
and E3, if the unit feature vector to be matched is not stored in the finger vein database, optionally entering the registration stage: writing the unit feature vector obtained in step E1 into the finger vein database and repeating step E2.
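Steps E1-E3 reduce to nearest-neighbour matching by Euclidean distance over unit feature vectors (for unit vectors d² = 2(1 − cos θ), so the distance threshold is equivalent to a cosine-similarity threshold). A sketch with assumed names, where `database` maps a registered identity to its unit feature vector:

```python
import numpy as np

def match(query, database, threshold):
    # Step E2: sequentially compare the unit feature vector to be matched
    # against each registered unit feature vector by Euclidean distance
    best_id, best_dist = None, float("inf")
    for user_id, registered in database.items():
        dist = np.linalg.norm(query - registered)
        if dist < best_dist:
            best_id, best_dist = user_id, dist
    if best_dist < threshold:
        return best_id  # matching judged successful
    return None  # unmatched: the caller may enter the registration stage

def enroll(user_id, unit_vector, database):
    # Step E3: write the new unit feature vector into the finger vein database
    database[user_id] = unit_vector
```

The threshold value itself is a tuning parameter (typically set from an equal-error-rate analysis); nothing in the claim fixes a particular number.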
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201910694163.7A CN110555380A (en) | 2019-07-30 | 2019-07-30 | Finger vein identification method based on Center Loss function |
Publications (1)
Publication Number | Publication Date |
---|---|
CN110555380A true CN110555380A (en) | 2019-12-10 |
Family
ID=68736728
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201910694163.7A Pending CN110555380A (en) | 2019-07-30 | 2019-07-30 | Finger vein identification method based on Center Loss function |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN110555380A (en) |
Patent Citations (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN107609497A (en) * | 2017-08-31 | 2018-01-19 | 武汉世纪金桥安全技术有限公司 | The real-time video face identification method and system of view-based access control model tracking technique |
CN108009520A (en) * | 2017-12-21 | 2018-05-08 | 东南大学 | A kind of finger vein identification method and system based on convolution variation self-encoding encoder neutral net |
CN108446729A (en) * | 2018-03-13 | 2018-08-24 | 天津工业大学 | Egg embryo classification method based on convolutional neural networks |
CN109492556A (en) * | 2018-10-28 | 2019-03-19 | 北京化工大学 | Synthetic aperture radar target identification method towards the study of small sample residual error |
CN109815869A (en) * | 2019-01-16 | 2019-05-28 | 浙江理工大学 | A kind of finger vein identification method based on the full convolutional network of FCN |
CN109902732A (en) * | 2019-02-22 | 2019-06-18 | 哈尔滨工业大学(深圳) | Automobile automatic recognition method and relevant apparatus |
Non-Patent Citations (1)
Title |
---|
WANG JUNXI: "Research on Person Re-identification Based on Multi-task Joint Supervised Learning", China Master's Theses Full-text Database, Information Science and Technology * |
Cited By (11)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN111368747A (en) * | 2020-03-06 | 2020-07-03 | 上海掌腾信息科技有限公司 | System and method for realizing palm vein characteristic correction processing based on TOF technology |
CN111639558A (en) * | 2020-05-15 | 2020-09-08 | 圣点世纪科技股份有限公司 | Finger vein identity verification method based on ArcFace Loss and improved residual error network |
CN111639558B (en) * | 2020-05-15 | 2023-06-20 | 圣点世纪科技股份有限公司 | Finger vein authentication method based on ArcFace Loss and improved residual error network |
CN111950454A (en) * | 2020-08-12 | 2020-11-17 | 辽宁工程技术大学 | Finger vein identification method based on bidirectional feature extraction |
CN111950454B (en) * | 2020-08-12 | 2024-04-02 | 辽宁工程技术大学 | Finger vein recognition method based on bidirectional feature extraction |
CN112580590A (en) * | 2020-12-29 | 2021-03-30 | 杭州电子科技大学 | Finger vein identification method based on multi-semantic feature fusion network |
CN112580590B (en) * | 2020-12-29 | 2024-04-05 | 杭州电子科技大学 | Finger vein recognition method based on multi-semantic feature fusion network |
CN113298055A (en) * | 2021-07-27 | 2021-08-24 | 深兰盛视科技(苏州)有限公司 | Vein identification method, vein identification device, vein identification equipment and computer readable storage medium |
CN114863499A (en) * | 2022-06-30 | 2022-08-05 | 广州脉泽科技有限公司 | Finger vein and palm vein identification method based on federal learning |
CN114863499B (en) * | 2022-06-30 | 2022-12-13 | 广州脉泽科技有限公司 | Finger vein and palm vein identification method based on federal learning |
WO2024037053A1 (en) * | 2022-08-18 | 2024-02-22 | 荣耀终端有限公司 | Fingerprint recognition method and apparatus |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN110555380A (en) | Finger vein identification method based on Center Loss function | |
CN108009520B (en) | Finger vein identification method and system based on convolution variational self-encoder network | |
CN102542281B (en) | Non-contact biometric feature identification method and system | |
Raja | Fingerprint recognition using minutia score matching | |
CN100492400C (en) | Matching identification method by extracting characters of vein from finger | |
Joshi et al. | Latent fingerprint enhancement using generative adversarial networks | |
CN110543822A (en) | finger vein identification method based on convolutional neural network and supervised discrete hash algorithm | |
CN111126240B (en) | Three-channel feature fusion face recognition method | |
CN109190460B (en) | Hand-shaped arm vein fusion identification method based on cumulative matching and equal error rate | |
Dabouei et al. | ID preserving generative adversarial network for partial latent fingerprint reconstruction | |
CN109583279A (en) | A kind of fingerprint and refer to that vein combines recognizer | |
CN112597812A (en) | Finger vein identification method and system based on convolutional neural network and SIFT algorithm | |
CN114821682B (en) | Multi-sample mixed palm vein identification method based on deep learning algorithm | |
Kassem et al. | An enhanced ATM security system using multimodal biometric strategy | |
Benziane et al. | Dorsal hand vein identification based on binary particle swarm optimization | |
CN107122710B (en) | Finger vein feature extraction method based on scattering convolution network | |
CN115797987A (en) | Finger vein identification method based on joint loss and convolutional neural network | |
Abdulbaqi et al. | Biometrics detection and recognition based-on geometrical features extraction | |
CN114973307A (en) | Finger vein identification method and system for generating countermeasure and cosine ternary loss function | |
Rajbhoj et al. | An Improved binarization based algorithm using minutiae approach for Fingerprint Identification | |
Ren et al. | A linear hybrid classifier for fingerprint segmentation | |
Guo et al. | A novel algorithm of dorsal hand vein image segmentation by integrating matched filter and local binary fitting level set model | |
CN112801034A (en) | Finger vein recognition device | |
Zhu et al. | Palmprint recognition based on PFI and fuzzy logic | |
Spasova et al. | An Algorithm for Detecting the Location and Parameters of the Iris in the Human Eye |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
RJ01 | Rejection of invention patent application after publication |
Application publication date: 20191210 |