CN112560710A - Method for constructing a finger vein recognition system, and finger vein recognition system
- Publication number: CN112560710A (application CN202011508436.3A)
- Authority: CN (China)
- Prior art keywords: finger vein, finger, vein, training, venation
- Legal status: Granted (the legal status is an assumption and is not a legal conclusion; Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed)
Classifications
- G06V40/10: Recognition of biometric, human-related or animal-related patterns in image or video data; human or animal bodies; body parts, e.g. hands
- G06V40/14: Vascular patterns
- G06F18/214: Pattern recognition; generating training patterns; bootstrap methods, e.g. bagging or boosting
- G06F18/22: Pattern recognition; matching criteria, e.g. proximity measures
- G06N3/045: Neural networks; architecture; combinations of networks
- G06T7/11: Image analysis; region-based segmentation
- G06V10/56: Extraction of image or video features relating to colour
- G06T2207/20081: Indexing scheme for image analysis; training; learning
- G06T2207/20084: Indexing scheme for image analysis; artificial neural networks [ANN]
- G06T2207/30196: Indexing scheme for image analysis; human being; person
- Y02T10/40: Engine management systems (climate change mitigation technologies related to transportation)
Abstract
An embodiment of the invention provides a method for constructing a finger vein recognition system, and the finger vein recognition system itself. The method comprises the following steps: A1, constructing an initial finger vein recognition system comprising a preprocessing module, a feature extraction module pre-trained on a non-finger-vein data set, and a domain division algorithm module; A2, acquiring finger vein grayscale maps for a plurality of fingers, and training the preprocessing module to preprocess each grayscale map and extract the finger's venation map; A3, training the feature extraction module by transfer learning to extract a finger vein feature vector from the venation map, where the training data take the form of triplets generated from the venation maps; A4, training the domain division algorithm module to establish a distinct recognition region for each finger from its finger vein feature vectors, and to analyze the similarity between a target finger and the finger corresponding to each recognition region based on the feature vectors.
Description
Technical Field
The invention relates to the field of deep learning, in particular to biometric recognition using neural networks, and more particularly to a method for constructing a finger vein recognition system and to the finger vein recognition system itself.
Background
Finger vein recognition technology confirms a person's identity from the vein pattern of the finger. Deoxygenated hemoglobin in the veins absorbs near-infrared light, so when near-infrared light illuminates the finger, a small near-infrared-sensitive camera can photograph the finger and capture an image of the vein pattern.
At present, finger vein technology is widely applied in identity authentication equipment in public settings requiring authentication, such as bank ATMs, access control systems, computer logins, automobile locks, safe locks, and electronic payment.
Compared with other biometric technologies, finger vein recognition has clear advantages. Compared with fingerprint recognition, the finger veins are hidden inside the body, so the chance of copying or theft is small and users show little psychological resistance. Compared with face recognition, it suffers less interference from human physiology and the external environment. Compared with iris recognition, which is known for higher accuracy, finger vein recognition can achieve comparable accuracy at lower cost.
Most existing finger vein recognition algorithms and systems are based on traditional image processing: they analyze the texture, brightness, and geometric characteristics of an image, with the features to be extracted designed by hand. However, image samples acquired by finger vein devices suffer from overexposure, insufficient sharpness, and cluttered background noise, which break up the vessel pattern and make features hard to extract, so finger vein recognition algorithms with hand-crafted features are not robust. Deep learning methods can automatically extract latent discriminative features from images and, given enough training images, achieve very high recognition performance. In theory, therefore, a neural network trained on finger vein images should yield good recognition; in practice, however, the available finger vein image data sets are too small, so a finger vein recognition system trained directly on them does not reach high accuracy.
Disclosure of Invention
Therefore, the present invention aims to overcome the above drawbacks of the prior art and to provide a method for constructing a finger vein recognition system, and the finger vein recognition system itself.
The purpose of the invention is achieved by the following technical solution:
According to a first aspect of the invention, a method for constructing a finger vein recognition system comprises: A1, constructing an initial finger vein recognition system comprising a preprocessing module, a feature extraction module pre-trained on a non-finger-vein data set, and a domain division algorithm module; A2, acquiring finger vein grayscale maps for a plurality of fingers, and training the preprocessing module to preprocess each grayscale map and extract the finger's venation map; A3, training the feature extraction module by transfer learning to extract a finger vein feature vector from the venation map, where the training data take the form of triplets generated from the venation maps; A4, training the domain division algorithm module to establish a distinct recognition region for each finger from its finger vein feature vectors, and to analyze the similarity between a target finger and the finger corresponding to each recognition region based on the feature vectors.
In some embodiments of the present invention, step A2 comprises: A21, randomly selecting a portion of the finger vein grayscale maps of the plurality of fingers for labeling to obtain label files in json format, converting the label files into finger vein venation maps by script, and pairing each venation map, as a label, with its corresponding grayscale map to form the venation extraction training data, where a venation map is a binary or grayscale image depicting only the vein pattern; A22, training the preprocessing module on the venation extraction training data to extract a venation map from a grayscale map; A23, using the trained preprocessing module to extract the corresponding venation map from the grayscale maps of all fingers; and A24, randomly dividing the extracted venation maps into a training set, a verification set, and a test set for the feature extraction module according to a preset ratio.
In some embodiments of the present invention, step A3 comprises: A31, generating triplets online from the extracted venation maps, where each triplet comprises an anchor sample, a positive sample, and a negative sample; the positive sample and the anchor sample belong to the same finger, while the negative sample and the anchor sample belong to different fingers; and A32, training the feature extraction module under the guidance of the triplet loss function to output finger vein feature vectors for the anchor, positive, and negative venation maps in each triplet, such that the anchor-positive distance computed from the output feature vectors is smaller than the anchor-negative distance.
In some embodiments of the present invention, step A31 comprises: during the first round of training, randomly selecting the anchor, positive, and negative sample of each triplet; during subsequent training, randomly selecting the anchor and positive sample of each triplet, then, for each triplet, computing the similarity between the anchor and every candidate negative sample based on the latest finger vein feature vectors, and taking the candidate with the highest similarity as the triplet's negative sample.
In some embodiments of the present invention, the dimension of the finger vein feature vector extracted by the feature extraction module ranges from 64 to 512.
In some embodiments of the invention, step A4 comprises: A41, randomly selecting a preset number of finger vein grayscale maps of each enrolled person's finger to construct the domain division algorithm model, and reserving each finger's remaining grayscale maps to test the false rejection rate of the constructed model; A42, obtaining the preset number of finger vein feature vectors corresponding to the preset number of grayscale maps of each finger; A43, averaging the preset number of feature vectors dimension by dimension to obtain a mean vector, and taking the mean vector as the region center point of the finger's recognition region; A44, computing the Euclidean distance between the region center point and each of the preset number of feature vectors, taking the median of all these distances as the region radius of the finger's recognition region, and multiplying the region radius by an adjusting coefficient to obtain the actual radius; A45, constructing the domain division algorithm model from each finger's region center point, region radius, and adjusting coefficient, analyzing, based on the feature vectors, the similarity between each target finger corresponding to the reserved grayscale maps and the finger corresponding to each recognition region, the target finger passing verification if the similarity matches; and A46, testing the model's false rejection rate on the reserved grayscale maps of each enrolled person's fingers, testing its false acceptance rate on grayscale maps of fingers of persons not enrolled, and adjusting the coefficient according to the false rejection rate and the false acceptance rate until usage requirements are met.
According to a second aspect of the present invention, there is provided a finger vein recognition system constructed by the method of the first aspect, comprising: a preprocessing module for preprocessing the finger vein grayscale map of a target finger to extract the finger's venation map; a feature extraction module for extracting the target finger's vein feature vector from the venation map; and a domain division algorithm module for analyzing the similarity between the target finger and the finger corresponding to each recognition region based on the feature vector, the target finger passing verification if the similarity matches.
In some embodiments of the invention, in response to a request to add a new user, the finger vein recognition system establishes a corresponding recognition region as follows: acquiring a finger vein grayscale map of the new user's finger, and preprocessing it with the preprocessing module to extract the finger's venation map; extracting the finger's vein feature vector from the venation map with the feature extraction module; and establishing a corresponding recognition region for the new user's finger from the feature vector with the domain division algorithm module.
In some embodiments of the present invention, the preprocessing module employs a modified U-net network structure in which the downsampling part of the original U-net is replaced with a ResNet convolutional neural network.
According to a third aspect of the invention, an electronic device comprises: one or more processors; and a memory storing one or more executable instructions, wherein the one or more processors are configured to implement the steps of the method of the first aspect by executing the one or more executable instructions.
Compared with the prior art, the invention has the advantages that:
The invention pre-trains the feature extraction module on a non-finger-vein data set to overcome the shortage of finger vein training data required by deep learning algorithms. On this basis, a venation map is extracted from each finger vein grayscale map to reduce the interference of other factors present in the grayscale map; the venation maps are used to transfer-train the pre-trained feature extraction module to extract finger vein feature vectors; and the domain division algorithm module is trained to establish a distinct recognition region for each finger from its feature vectors and to analyze the similarity between a target finger and the finger corresponding to each recognition region, the target finger passing verification if the similarity matches. The method can therefore produce, quickly and efficiently, a finger vein recognition system with good recognition performance that realizes high-precision recognition of finger veins.
Drawings
Embodiments of the invention are further described below with reference to the accompanying drawings, in which:
FIG. 1 is a schematic flow diagram of a method for constructing a finger vein recognition system in accordance with an embodiment of the present invention;
FIG. 2 is a schematic finger vein grayscale map;
FIG. 3 is a schematic diagram of the finger vein venation map extracted by the preprocessing module according to an embodiment of the invention;
FIG. 4 is an exemplary triplet according to an embodiment of the present invention;
FIG. 5 is a schematic structural diagram of a feature extraction module trained in a method for constructing a finger vein recognition system according to an embodiment of the present invention;
FIG. 6 is a schematic diagram of a feature extraction module trained with triplets in a method for constructing a finger vein recognition system according to an embodiment of the present invention;
FIG. 7 is a diagram illustrating the domain division algorithm module in the method for constructing the finger vein recognition system according to the embodiment of the invention;
FIG. 8 is a schematic diagram of the operation of the feature extraction module of the finger vein recognition system according to the embodiment of the present invention;
fig. 9 is a schematic workflow diagram of a finger vein recognition system according to an embodiment of the present invention.
Detailed Description
In order to make the objects, technical solutions and advantages of the present invention more apparent, the present invention is further described in detail by embodiments with reference to the accompanying drawings. It should be understood that the specific embodiments described herein are merely illustrative of the invention and are not intended to limit the invention.
As mentioned in the Background section, the number of samples in available finger vein image data sets is not large enough, so a finger vein recognition system trained directly on them does not reach high recognition accuracy. The invention therefore pre-trains the feature extraction module on a non-finger-vein data set to overcome the shortage of finger vein training data required by deep learning algorithms. On this basis, a venation map is extracted from each finger vein grayscale map to reduce the interference of other factors present in the grayscale map; the venation maps are used to transfer-train the pre-trained feature extraction module to extract finger vein feature vectors; and the domain division algorithm module is trained to establish a distinct recognition region for each finger from its feature vectors and to analyze the similarity between a target finger and the finger corresponding to each recognition region, the target finger passing verification if the similarity matches. The method can therefore produce, quickly and efficiently, a finger vein recognition system with good recognition performance that realizes high-precision recognition of finger veins.
Before describing embodiments of the present invention in detail, some of the terms used therein will be explained as follows:
U-net network structure: U-net was originally an encoder-decoder (auto-encoder style) network designed for medical image segmentation; the literature classifies it as a fully convolutional network (FCN). It can be divided into two parts: an encoder (the downsampling path) and a decoder (the upsampling path). The encoder mainly performs convolution (conv) and max-pooling (maxpool) operations; the decoder mainly performs upsampling followed by convolution (conv) operations.
The invention provides a method for constructing a finger vein recognition system, comprising steps A1, A2, A3, and A4, with reference to FIG. 1. For a better understanding of the present invention, each step is described in detail below with reference to specific examples.
Step A1: construct an initial finger vein recognition system comprising a preprocessing module, a feature extraction module pre-trained on a non-finger-vein data set, and a domain division algorithm module.
According to one embodiment of the invention, the preprocessing module preprocesses the finger vein grayscale map acquired by the sensor to obtain a venation map from which feature vectors can be better extracted. The pre-trained feature extraction module is a feature extraction module that has been trained in advance on other image data sets (non-finger-vein data sets). Preferably, the non-finger-vein data set comprises at least one of the ImageNet, CIFAR, and COCO data sets. The domain division algorithm module establishes a distinct recognition region for each finger. This embodiment achieves at least the following beneficial technical effects: the amount of finger vein image training data currently available is insufficient, so a feature extraction module trained on it directly performs poorly; constructing the initial finger vein recognition system with a feature extraction module pre-trained on a non-finger-vein data set and then transfer-training that system improves the performance of the feature extraction module.
Step A2: acquire finger vein grayscale maps for a plurality of fingers, and train the preprocessing module to preprocess each grayscale map and extract the finger's venation map.
According to an embodiment of the present invention, the finger vein grayscale maps of the plurality of fingers can come from a public finger vein data set and/or a user-collected finger vein data set. Referring to FIG. 2, the acquired grayscale maps have low sharpness and are disturbed by many artifacts and shadows, so applying them directly to feature extraction gives poor results; the grayscale maps are therefore preprocessed before feature extraction. To extract finger vein feature vectors well, the preprocessing module of the invention could use the original U-net network structure, which divides into an encoder (the downsampling path, mainly convolution and max-pooling) and a decoder (the upsampling path, mainly upsampling followed by convolution). However, to extract the feature vectors even better, the inventors made various adjustments to the downsampling part of the original U-net and observed the effect on the final extracted venation map, arriving at a modified U-net network structure in which the downsampling part of the original U-net is replaced with a ResNet convolutional neural network. Verification shows that, compared with the original U-net, the modified U-net extracts venation maps better, trains more easily, segments more precisely, and requires a smaller data volume.
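For illustration only, the following PyTorch sketch shows one plausible way to wire a U-net whose downsampling path is a ResNet; the patent does not publish its exact layer configuration, so the choice of ResNet34, the skip-connection points, and the channel widths are all assumptions:

```python
import torch
import torch.nn as nn
from torchvision.models import resnet34

class ResNetUNet(nn.Module):
    """Hypothetical U-net variant with a ResNet encoder as the downsampling path."""
    def __init__(self):
        super().__init__()
        enc = resnet34(weights=None)
        # Grayscale input: swap the stock 3-channel stem conv for a 1-channel one.
        self.stem = nn.Sequential(
            nn.Conv2d(1, 64, kernel_size=7, stride=2, padding=3, bias=False),
            enc.bn1, enc.relu)
        self.pool = enc.maxpool
        self.enc1, self.enc2 = enc.layer1, enc.layer2   # 64-ch, 128-ch stages
        self.enc3, self.enc4 = enc.layer3, enc.layer4   # 256-ch, 512-ch stages
        # Decoder: upsample, concatenate the skip feature, then convolve.
        self.up4 = self._up(512, 256)
        self.up3 = self._up(256 + 256, 128)
        self.up2 = self._up(128 + 128, 64)
        self.up1 = self._up(64 + 64, 64)
        self.head = nn.Sequential(
            nn.Upsample(scale_factor=2, mode="bilinear", align_corners=False),
            nn.Conv2d(64 + 64, 1, kernel_size=1))       # 1-channel vein-mask logits

    @staticmethod
    def _up(in_ch, out_ch):
        return nn.Sequential(
            nn.Upsample(scale_factor=2, mode="bilinear", align_corners=False),
            nn.Conv2d(in_ch, out_ch, kernel_size=3, padding=1),
            nn.ReLU(inplace=True))

    def forward(self, x):                        # x: (B, 1, H, W), H and W divisible by 32
        s0 = self.stem(x)                        # 64 ch, 1/2 resolution
        s1 = self.enc1(self.pool(s0))            # 64 ch, 1/4
        s2 = self.enc2(s1)                       # 128 ch, 1/8
        s3 = self.enc3(s2)                       # 256 ch, 1/16
        s4 = self.enc4(s3)                       # 512 ch, 1/32
        d3 = self.up4(s4)                        # 256 ch, 1/16
        d2 = self.up3(torch.cat([d3, s3], 1))    # 128 ch, 1/8
        d1 = self.up2(torch.cat([d2, s2], 1))    # 64 ch, 1/4
        d0 = self.up1(torch.cat([d1, s1], 1))    # 64 ch, 1/2
        return self.head(torch.cat([d0, s0], 1)) # vein-mask logits at full resolution
```

Applying a sigmoid to the output logits and thresholding would yield the binary venation map.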
Preferably, step A2 comprises steps A21, A22, A23, and A24, which are as follows:
step A21: randomly obtaining a part of finger vein gray maps corresponding to a plurality of fingers to perform labeling operation to obtain label files in a json format, converting the label files into finger vein venation maps through scripts, and forming venation extraction training data by using the finger vein venation maps as labels of the corresponding finger vein gray maps, wherein the finger vein venation maps are binary images or gray images representing vein blood vessels venation. Labeling can be realized through Labelme software. Namely: randomly selecting partial finger vein images from an original finger vein dataset (corresponding to finger vein gray maps corresponding to a plurality of fingers), labeling the selected finger vein gray maps by using Labelme software to obtain label files in a json format, converting the label files into binary images or gray images (corresponding to the finger vein gray maps, also called mask maps of the finger veins) of only vein veins by using scripts, and taking the binary images or gray images as labels corresponding to the original finger vein images. The random selection can not only ensure the randomness of the data, but also avoid the interference of human factors. In addition, because the labeling operation is more time-consuming than the prediction by using a neural network, only part of the finger vein gray-scale images are randomly selected for labeling operation, and the corresponding finger vein venation images are obtained by using the part of the finger vein gray-scale images and the label training preprocessing module (neural network) thereof according to the finger vein gray-scale image prediction, so that the overall training time efficiency and the recognition efficiency of a subsequently trained finger vein recognition system are improved. In the training data of the preprocessing module, a finger vein gray-scale map is input data, and a corresponding finger vein venation map extracted according to Labelme software is used as a label of the input data or is called as a truth part. The training data of the preprocessing module can be divided into a training set, a verification set and a test set at random, wherein the training set and the verification set are used for training the preprocessing module to perform venation extraction according to the finger vein gray-scale image, and the test set is used for testing the accuracy of the preprocessing module trained by the training set and the verification set.
Preferably, the finger vein grayscale map is obtained by applying contrast-limited adaptive histogram equalization (CLAHE) to the raw grayscale image acquired by the sensor. This enhances image contrast so that a clearer venation map can be extracted.
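A minimal OpenCV sketch of this enhancement step; the clipLimit and tileGridSize values are illustrative defaults, not values stated in the patent:

```python
import cv2

def enhance_finger_vein(raw_gray_path):
    """Apply contrast-limited adaptive histogram equalization (CLAHE)
    to the raw finger vein grayscale image to sharpen the vein lines."""
    img = cv2.imread(raw_gray_path, cv2.IMREAD_GRAYSCALE)
    clahe = cv2.createCLAHE(clipLimit=2.0, tileGridSize=(8, 8))
    return clahe.apply(img)  # enhanced finger vein grayscale map
```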
Step A22: train the preprocessing module on the venation extraction training data to extract a venation map from a grayscale map. That is: using the portion of finger vein grayscale maps selected in step A21 as input and their corresponding labels as output, train the preprocessing module to learn how to extract the venation map from the grayscale map.
Step A23: use the trained preprocessing module to extract the corresponding venation map from the grayscale maps of all fingers. That is: input the grayscale maps of all fingers in the original finger vein data set into the preprocessing module and output the corresponding venation maps. Direct prediction with the trained neural network is faster than obtaining venation maps by labeling, and segmenting the vein lines out of the grayscale map to obtain the venation map, used as the input data of the feature extraction module, makes the extracted feature vectors more accurate.
Step A24: randomly divide the extracted venation maps into a training set, a verification set, and a test set for the feature extraction module according to a preset ratio. For example, the venation maps are randomly partitioned at a preset ratio of 7:2:1 into a training set, a verification set, and a test set for the subsequent training of the feature extraction module.
Experiments by the inventors show that the modified U-net network structure trained in this way can obtain a clear venation map efficiently and quickly, so that the feature vectors extracted by the subsequent feature extraction module represent the finger more accurately and yield a better recognition effect when later used for vein recognition.
This embodiment achieves at least the following beneficial technical effects. Using a ResNet convolutional neural network for downsampling has these advantages: the network is deeper, which improves segmentation precision; more skip connections can be added in the middle of the network, so multi-scale segmentation can better incorporate the image's background semantic information; ResNet converges quickly and reduces model data volume; and ResNet makes the model easier to train, preventing model degradation and vanishing gradients and helping the loss converge. The resulting preprocessing module extracts the venation map more clearly and accurately, improving venation extraction accuracy beyond what training the original U-net structure achieves, and effectively reduces the impact on recognition performance of overexposure, insufficient sharpness, background noise, and similar defects in image samples acquired by vein devices. FIG. 3 shows a venation map extracted by the trained modified U-net from the grayscale map shown in FIG. 2.
According to another embodiment of the present invention, the purpose of step A2 is to train the preprocessing module to segment the vein pattern in a finger vein image. An exemplary training procedure for the preprocessing module is: T1, construct the venation extraction training data set: randomly select a portion of finger vein images from the original data set, label them with the Labelme software to obtain json label files, convert the label files by script into binary images containing only the vein pattern, and use these as the labels of the corresponding original images; T2, train the modified U-NET network with the original finger vein images of the training data from step T1 as input, obtaining the trained preprocessing module; T3, input the original finger vein images into the trained preprocessing module to predict the venation maps (masks); T4, randomly divide the predicted venation maps into a training set, a verification set, and a test set for the subsequent training of the feature extraction module. This embodiment achieves at least the following beneficial technical effects: the traditional finger vein recognition pipeline preprocesses the acquired grayscale image and extracts venation by filtering, binarization, thinning, and line tracking, which is not robust and is time-consuming when image noise is high; the invention instead preprocesses the grayscale image with a deep learning model (the modified U-NET network). Near-infrared light shines through the finger, a CCD camera on the finger vein instrument collects the person's finger vein grayscale image, and the modified U-NET network extracts the venation map from it; this is highly robust, extracts venation better, requires no elaborate image preprocessing, and greatly shortens the time of the whole recognition process.
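A minimal training-loop sketch for step T2, assuming the ResNetUNet sketch above and a DataLoader yielding (grayscale, mask) tensor pairs; the binary cross-entropy loss, Adam optimizer, learning rate, and epoch count are assumptions, not values from the patent:

```python
import torch.nn as nn
import torch.optim as optim

def train_preprocessor(model, loader, epochs=50):
    """Train the modified U-net to predict a binary vein mask from a
    finger vein grayscale image (steps T1-T2)."""
    loss_fn = nn.BCEWithLogitsLoss()              # mask pixels in {0, 1}
    opt = optim.Adam(model.parameters(), lr=1e-4)
    model.train()
    for _ in range(epochs):
        for gray, mask in loader:                 # (B, 1, H, W) image / label pairs
            opt.zero_grad()
            loss = loss_fn(model(gray), mask)
            loss.backward()                       # back-propagate the segmentation error
            opt.step()
```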
Step A3: train the feature extraction module by transfer learning to extract a finger vein feature vector from the venation map, where the training data take the form of triplets generated from the venation maps. That is, training data in triplet form are generated from the extracted venation maps, and the feature extraction module is iteratively trained by transfer learning to extract the finger's vein feature vector from the venation map.
According to one embodiment of the invention, the feature extraction module comprises convolutional layers, pooling layers, and a fully connected layer. The convolutional layers use a ResNet convolutional neural network, for example ResNet50 or ResNet101. As another example, the feature extraction module is obtained by modifying Google's FaceNet model, replacing its original Inception-ResNet-v1 convolutional network with a ResNet50 convolutional network. When training the feature extraction module, the amount of finger vein image training data is insufficient: training a finger vein recognition model by deep learning requires a large finger vein image data set, but large amounts of finger vein data are hard to collect, and with too little data deep learning overfits, which manifests as a model that performs well on the training set but poorly on the test set and in actual use. The invention therefore uses a transfer learning algorithm: a non-finger-vein data set chosen from public data sets pre-trains the feature extraction module, yielding a pre-trained model (the aforementioned feature extraction module pre-trained on a non-finger-vein data set); a learning framework then transfer-trains this pre-trained model with the venation maps, gradually adapting its parameters into a high-quality model for the finger vein domain. The non-finger-vein data set comprises one or more of the ImageNet, CIFAR, and COCO data sets. Optimizing the feature extraction module by transfer learning both exploits deep learning's feature extraction capability and effectively avoids deep learning's tendency to overfit on small data. Training the feature extraction module uses the venation maps in the training, verification, and test sets obtained in step A2. Preferably, the training set trains the feature extraction module and determines its weight parameters; the verification set determines the module's network structure and tunes its hyper-parameters; the test set verifies the module's generalization ability. Step A3 comprises steps A31 and A32, as follows:
step A31: and generating triples by online generating samples based on the extracted finger vein choroid map, wherein each triplet comprises an anchor sample, a positive sample and a negative sample, the positive sample and the anchor sample belong to the same finger, and the negative sample and the anchor sample belong to different fingers. Namely: in training, verifying or testing, the samples in the training set, the verifying set or the testing set are grouped into the form of a triple according to the steps shown in the step a 31. Referring to fig. 4, an exemplary triplet is shown, where the positive and anchor samples are finger vein context maps extracted from different finger vein gray maps of the same finger, with a small difference between them; the negative sample and the anchor sample are finger vein vena cava maps extracted from finger vein gray maps of different fingers, and the difference between the two is large. Preferably, the step a31 includes: randomly selecting an anchor sample, a positive sample and a negative sample in each triplet during first training; and during subsequent training, randomly selecting an anchor sample and a positive sample of each triplet, calculating the similarity between all negative samples of the anchor sample in the triplet and the anchor sample based on the latest finger vein feature vector for any triplet, and taking the negative sample with the highest similarity as the negative sample of the triplet.
Step A32: under the guidance of the triplet loss function, train the feature extraction module to output finger vein feature vectors for the anchor, positive, and negative venation maps in each triplet, such that the anchor-positive distance computed from the output feature vectors is smaller than the anchor-negative distance. The dimension of the feature vector can be 64 to 512, or an even larger range; this parameter is changed by modifying the part of the neural network structure that determines the output dimension. A venation map passes through the module's convolutional layers, pooling layers, and fully connected layer, and a multi-dimensional finger vein feature vector is output. The convolutional layers preferably use the ResNet50 structure, the fully connected layer is preferably 128-dimensional, and the output feature vector is then 128-dimensional. After experimentally testing the recognition accuracy and speed of the finger vein recognition system, the inventors chose 128 dimensions as a compromise that achieves good recognition accuracy at high recognition speed. Compared with traditional hand-crafted feature extraction methods such as classical image processing and pattern recognition, this feature extraction module extracts features more comprehensively and is robust to varying image quality and environmental influences.
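A minimal sketch of the triplet loss over batches of embedding vectors; the margin of 0.2 follows common FaceNet-style practice and is an assumption, not a value stated in the patent:

```python
import torch.nn.functional as F

def triplet_loss(anchor, positive, negative, margin=0.2):
    """Push the anchor-positive distance below the anchor-negative
    distance by at least `margin`; inputs are (B, 128) embeddings."""
    d_ap = (anchor - positive).pow(2).sum(dim=1)  # squared L2 distances
    d_an = (anchor - negative).pow(2).sum(dim=1)
    return F.relu(d_ap - d_an + margin).mean()    # hinge on the margin
```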
According to an example of the present invention, the main idea of the feature extraction module (which may be called the FingerNet model) is to map venation maps into a multi-dimensional space in which their similarity is expressed by spatial (Euclidean) distance: venation maps of the same finger lie close together, while images of different fingers lie far apart, so finger vein images can be recognized through this spatial mapping. The FingerNet model trains a deep-neural-network image mapping under a triplet-based loss function, and the network directly outputs a 128-dimensional vector space; FIG. 5 shows the training flow of the FingerNet network structure. Three pictures form a triplet (Triplets): two pictures of the same finger and one picture of a different finger, each a venation map containing the finger vein features extracted by the modified U-net. A read-data function converts each venation grayscale map into an array, which enters the deep convolutional neural network ResNet50: first a convolution block (Conv1) with 64 convolution kernels of size 7x7, then, after normalization, the subsequent convolution blocks (Conv2, Conv3, Conv4, Conv5) with their series of convolution operations and normalizations, then global average pooling, then a fully connected layer of 128 neurons (the 128-dimensional fully connected layer), which outputs a 128-dimensional vector: the feature representation of the finger vein image. Finally, under the guidance of the triplet loss function (Triplet Loss), the parameters of each convolution block are updated by back-propagation. The principle is as follows: of the three 128-dimensional feature vectors, one of the two vectors of the same finger serves as the anchor (the anchor sample), the other as the positive example (the positive sample), and the vector of the different finger as the negative example (the negative sample); feeding them to the triplet loss function yields a loss value, through which the parameters of the feature extraction module are continually adjusted and optimized until a good feature extraction module is trained. In the figure, a dashed box denotes a convolution module, the number outside the dashed box gives how many such modules there are (3, 4, 6, 3 in sequence from top to bottom), and a parenthesized entry such as (a x a x b) denotes b convolution kernels of size a x a.
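A sketch of such a FingerNet-style extractor: an ImageNet-pretrained ResNet50 backbone whose classification head is replaced by a 128-dimensional embedding layer. The use of torchvision pretrained weights and the L2 normalization of the output are assumptions based on the description, not confirmed details of the patent:

```python
import torch.nn as nn
import torch.nn.functional as F
from torchvision.models import resnet50, ResNet50_Weights

class FingerNet(nn.Module):
    """ResNet50 backbone with a 128-d embedding head, to be
    transfer-trained on venation maps under the triplet loss."""
    def __init__(self, dim=128):
        super().__init__()
        backbone = resnet50(weights=ResNet50_Weights.DEFAULT)   # pre-trained on non-finger-vein data
        backbone.fc = nn.Linear(backbone.fc.in_features, dim)   # 128-d fully connected layer
        self.net = backbone

    def forward(self, x):
        # x: (B, 3, H, W); single-channel venation maps are assumed
        # to be replicated to three channels before input.
        return F.normalize(self.net(x), dim=1)  # unit-length embedding vector
```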
The end of the feature extraction module uses the triplet loss function rather than a conventional classification loss. Whereas a conventional loss function tends to map finger vein images sharing one type of feature into the same region of space, the triplet loss function tries to separate one user's finger vein images from the finger vein images of all other users. A triplet consists of three samples (anchor sample Anchor, positive sample Positive, and negative sample Negative) and is constrained through their distance relationship. As shown in FIG. 6, the objective of training is that, in as many triplets as possible, the spatial distance between the anchor sample and the positive sample is smaller than the spatial distance between the anchor sample and the negative sample.
In the invention, the feature extraction module is trained in batches: one epoch means that every venation map in the training set has been used once, and each epoch is divided into mini-batches (Mini-Batch, also called Batch in some literature). The number of triplets used in each mini-batch equals the total number of triplets in the training set divided by the number of mini-batches, and the parameters of the feature extraction module are updated once after each mini-batch. During each mini-batch, computing the triplet loss value (Triplet Loss) requires selecting reasonable triplets for training. A brute-force approach that searches all samples for the closest negative sample and the farthest positive sample before optimizing takes too long to search and, because of mislabeled samples, may also make training difficult to converge. Therefore, triplets are generated online: within each mini-batch, all anchor-positive pairs (Anchor-pos pairs) are formed first, and then a hard negative sample (Hard neg sample) is found for each anchor-positive pair. According to one example of the invention, the main procedure of generating triplets online is: K1, at the start of each mini-batch, sample venation maps from the training set (for example, fix how many fingers each batch samples and how many venation maps are sampled per finger, giving the venation maps to be sampled); K2, compute the finger vein feature vectors (embedding vectors) of the sampled venation maps with the feature extraction model, and, by computing the Euclidean distances between the feature vectors, find the hard negatives closest to each anchor sample to form the triplets for the coming mini-batch; K3, train on the obtained triplets, compute the triplet loss value, and back-propagate to optimize the feature extraction module.
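A simplified sketch of this online selection inside one mini-batch, following the K1-K3 outline: every anchor-positive pair is formed, then the hardest (closest) negative is chosen for it. Semi-hard filtering and tie-breaking are omitted for brevity:

```python
import torch

def mine_triplets(embeddings, labels):
    """embeddings: (N, 128) batch embeddings; labels: (N,) tensor of finger IDs.
    Returns (anchor, positive, negative) index triplets."""
    dist = torch.cdist(embeddings, embeddings)            # pairwise Euclidean distances
    triplets = []
    for a in range(len(labels)):
        for p in range(len(labels)):
            if p == a or labels[p] != labels[a]:
                continue                                  # positive must be the same finger
            neg_dist = dist[a].clone()
            neg_dist[labels == labels[a]] = float("inf")  # mask out same-finger samples
            triplets.append((a, p, int(neg_dist.argmin())))  # hardest negative
    return triplets
```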
Step A4: train the domain division algorithm module to establish a distinct recognition region for each finger from its finger vein feature vectors, and to analyze the similarity between a target finger and the finger corresponding to each recognition region based on the feature vectors.
According to one embodiment of the present invention, step A4 comprises: A41, randomly selecting a preset number of finger vein grayscale maps of each enrolled person's finger to construct the domain division algorithm model, and reserving each finger's remaining grayscale maps to test the false rejection rate of the constructed model; A42, obtaining the preset number of finger vein feature vectors corresponding to the preset number of grayscale maps of each finger; A43, averaging the preset number of feature vectors dimension by dimension to obtain a mean vector, and taking the mean vector as the region center point of the finger's recognition region; A44, computing the Euclidean distance between the region center point and each of the preset number of feature vectors, taking the median of all these distances as the region radius of the finger's recognition region, and multiplying the region radius by an adjusting coefficient to obtain the actual radius; A45, constructing the domain division algorithm model from each finger's region center point, region radius, and adjusting coefficient, analyzing, based on the feature vectors, the similarity between each target finger corresponding to the reserved grayscale maps and the finger corresponding to each recognition region, the target finger passing verification if the similarity matches; A46, testing the model's false rejection rate on the reserved grayscale maps of each enrolled person's fingers, testing its false acceptance rate on grayscale maps of fingers of persons not enrolled, and adjusting the coefficient according to the false rejection rate and the false acceptance rate until usage requirements are met. Here the false acceptance rate is the number of images of non-enrolled persons that pass recognition divided by the total number of non-enrolled-person images in the test, and the false rejection rate is the number of images of enrolled persons that fail recognition divided by the total number of enrolled-person images in the test. This embodiment achieves at least the following beneficial technical effect: the domain division algorithm module establishes a distinct recognition region for each finger, against which the venation maps corresponding to target fingers are compared, realizing high-precision recognition of finger veins.
According to an example of the present invention, the domain division algorithm module is constructed with the fingers' vein feature vectors as input. Each feature vector can be regarded as a point in a multi-dimensional space, so every venation map in the training set maps to a unique corresponding point in that space. Moreover, feature vectors extracted from different venation maps of the same finger lie close together in the multi-dimensional space, while those extracted from venation maps of different fingers lie far apart, which is what distinguishes different fingers. FIG. 7 is a schematic diagram of the domain division algorithm model. An exemplary construction procedure is: Q1, randomly select the first n-1 venation maps of each finger (n is about 6) to build the domain division algorithm model, and keep each finger's remaining pictures as verification data for the subsequent false rejection rate test; Q2, convert the venation maps of step Q1 into finger vein feature vectors carrying the finger vein image features; Q3, compute the mean of the first n-1 vectors, whose geometric meaning is the region center point O of the first n-1 points in the multi-dimensional space; compute the Euclidean distances between the region center point and the first n-1 vectors and take their median as the region radius R of the region where the finger lies, to be multiplied by an adjusting coefficient alpha; Q4, build the domain division algorithm model from the region center point O, region radius R, and adjusting coefficient alpha. The adjusting coefficient alpha can be tuned as required, so as to control the false acceptance rate and the false rejection rate simultaneously. When testing the domain division algorithm model, the verification data reserved in the image comparison step measure its false rejection rate, and a new data set that did not participate in building the model is added to test its false acceptance rate.
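A minimal NumPy sketch of steps Q1-Q4: the region center is the mean of the enrollment embeddings, the radius is the median distance to that center, and alpha scales the accept boundary. The default alpha=1.0 is a placeholder to be tuned against the false rejection and false acceptance rates:

```python
import numpy as np

class FingerRegion:
    """Per-finger recognition region in the embedding space."""
    def __init__(self, enroll_vectors, alpha=1.0):
        v = np.asarray(enroll_vectors)                 # (n-1, 128) enrollment embeddings
        self.center = v.mean(axis=0)                   # region center point O
        dists = np.linalg.norm(v - self.center, axis=1)
        self.radius = float(np.median(dists))          # region radius R
        self.alpha = alpha                             # adjusting coefficient

    def matches(self, query_vector):
        """A query finger matches if its embedding lies within the
        actual radius R * alpha of the region center."""
        d = np.linalg.norm(np.asarray(query_vector) - self.center)
        return d <= self.radius * self.alpha
```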
According to another aspect of the present invention, there is also provided a finger vein recognition system constructed by the above method, including: a preprocessing module for preprocessing the finger vein gray-scale map of a target finger to extract the finger vein venation map of the finger; a feature extraction module for extracting the finger vein feature vector of the target finger from the finger vein venation map; and a domain-division algorithm module for analyzing, based on the finger vein feature vector, the similarity between the target finger and the finger corresponding to each recognition region, the target finger passing verification if the similarities match. Preferably, the preprocessing module adopts a modified U-net network structure in which the downsampling part of the original U-net is replaced by a ResNet convolutional neural network. Preferably, in response to a request to add a new user, the finger vein recognition system establishes the corresponding recognition region as follows: acquiring a finger vein gray-scale map of the new user's finger, and preprocessing it with the preprocessing module to extract the finger vein venation map of the finger; extracting the finger vein feature vector of the finger from the venation map with the feature extraction module; and establishing a corresponding recognition region for the new user's finger with the domain-division algorithm module according to the finger vein feature vector. Referring to FIG. 8, when a user's identity is recognized, the finger vein venation map of the target finger is input into the feature extraction module, and a multidimensional finger vein feature vector is output through the module's convolution layers, pooling layers and fully connected layer.
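The patent specifies only that the downsampling part of the U-net is replaced by a ResNet; the concrete depth, channel widths and decoder below are assumptions. As a sketch, the following PyTorch module uses torchvision's ResNet-18 as the encoder and a plain U-net decoder with skip connections, taking a single-channel gray-scale map and emitting a single-channel venation map.

```python
import torch
import torch.nn as nn
from torchvision.models import resnet18

class UpBlock(nn.Module):
    """Upsample, concatenate the encoder skip feature, then convolve."""
    def __init__(self, in_ch, skip_ch, out_ch):
        super().__init__()
        self.up = nn.ConvTranspose2d(in_ch, out_ch, kernel_size=2, stride=2)
        self.conv = nn.Sequential(
            nn.Conv2d(out_ch + skip_ch, out_ch, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(out_ch, out_ch, 3, padding=1), nn.ReLU(inplace=True),
        )

    def forward(self, x, skip):
        return self.conv(torch.cat([self.up(x), skip], dim=1))

class ResNetUNet(nn.Module):
    """U-net whose downsampling path is a ResNet-18 encoder (illustrative sketch)."""
    def __init__(self):
        super().__init__()
        r = resnet18(weights=None)  # weights=None assumes torchvision >= 0.13
        # Accept a single-channel gray-scale map instead of RGB.
        r.conv1 = nn.Conv2d(1, 64, kernel_size=7, stride=2, padding=3, bias=False)
        self.stem = nn.Sequential(r.conv1, r.bn1, r.relu)    # H/2,  64 ch
        self.pool = r.maxpool                                # H/4
        self.enc1, self.enc2 = r.layer1, r.layer2            # H/4 64 ch, H/8 128 ch
        self.enc3, self.enc4 = r.layer3, r.layer4            # H/16 256 ch, H/32 512 ch
        self.up3 = UpBlock(512, 256, 256)
        self.up2 = UpBlock(256, 128, 128)
        self.up1 = UpBlock(128, 64, 64)
        self.up0 = UpBlock(64, 64, 32)
        self.head = nn.Sequential(
            nn.ConvTranspose2d(32, 16, kernel_size=2, stride=2),
            nn.Conv2d(16, 1, kernel_size=1),                 # 1-channel venation map
        )

    def forward(self, x):                   # x: (N, 1, H, W), H and W divisible by 32
        s0 = self.stem(x)
        e1 = self.enc1(self.pool(s0))
        e2 = self.enc2(e1)
        e3 = self.enc3(e2)
        e4 = self.enc4(e3)
        d = self.up3(e4, e3)
        d = self.up2(d, e2)
        d = self.up1(d, e1)
        d = self.up0(d, s0)
        return torch.sigmoid(self.head(d))   # per-pixel venation probability

# Example: ResNetUNet()(torch.randn(1, 1, 224, 224)) -> tensor of shape (1, 1, 224, 224)
```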
According to an embodiment of the present invention, referring to FIG. 9, the work flow of the finger vein recognition system is: M1, image acquisition: a sensor captures the finger vein image to obtain the original gray-scale image of the finger veins; namely, an LED above the finger emits near-infrared light, and a CCD camera below the finger photographs the vein image; M2, image enhancement and venation extraction: contrast-limited adaptive histogram equalization is applied to the original gray-scale image to enhance contrast and make the vein lines clearer, yielding the finger vein gray-scale map, from which the preprocessing module (the improved U-net network) extracts the finger vein venation map; M3, finger vein feature extraction: the feature extraction module derives the finger vein feature vector from the obtained venation map; M4, feature comparison and recognition: the obtained feature vector is compared with the original finger vein templates in the database (the finger vein feature vectors of the enrolled persons' fingers) and the similarity is computed; if they match, the finger is accepted, otherwise it is rejected.
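Step M2 names contrast-limited adaptive histogram equalization (CLAHE), for which OpenCV provides a standard implementation; the following sketch shows a typical call. The file names, clip limit and tile grid size are assumed values, not parameters stated in the patent.

```python
import cv2

# Load the original finger vein gray-scale image (placeholder file name).
raw = cv2.imread("finger_vein_raw.png", cv2.IMREAD_GRAYSCALE)

# CLAHE enhances local contrast so the vein lines stand out (assumed parameters).
clahe = cv2.createCLAHE(clipLimit=2.0, tileGridSize=(8, 8))
enhanced = clahe.apply(raw)

cv2.imwrite("finger_vein_enhanced.png", enhanced)  # input to the preprocessing module
```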
The invention also provides a finger vein recognition method, comprising: preprocessing the finger vein gray-scale map of a target finger to extract its finger vein venation map; extracting the finger vein feature vector of the target finger from the venation map; and analyzing, based on the finger vein feature vector, the similarity between the target finger and the finger corresponding to each recognition region, the target finger passing verification if the similarities match. It should be understood that although the similarity of the target finger is analyzed against every recognition region, the match is considered successful as soon as the target finger matches any one recognition region.
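To make the "any one region" rule concrete, a minimal sketch follows; regions is assumed to hold the (center, actual radius) pair of every enrolled finger, as in the earlier sketches.

```python
import numpy as np

def identify(feature_vector, regions):
    """Compare a probe vector against every recognition region.

    regions: dict mapping finger_id -> (center, actual_radius).
    The probe passes verification as soon as any one region matches.
    """
    v = np.asarray(feature_vector)
    for finger_id, (center, actual_radius) in regions.items():
        if np.linalg.norm(v - center) <= actual_radius:
            return finger_id   # verification passed
    return None                # no region matched: rejected
```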
It should be noted that, although the steps are described in a specific order, the steps are not necessarily performed in the specific order, and in fact, some of the steps may be performed concurrently or even in a changed order as long as the required functions are achieved.
The present invention may be a system, method and/or computer program product. The computer program product may include a computer-readable storage medium having computer-readable program instructions embodied therewith for causing a processor to implement various aspects of the present invention.
The computer readable storage medium may be a tangible device that retains and stores instructions for use by an instruction execution device. The computer readable storage medium may include, for example, but is not limited to, an electronic memory device, a magnetic memory device, an optical memory device, an electromagnetic memory device, a semiconductor memory device, or any suitable combination of the foregoing. More specific examples (a non-exhaustive list) of the computer readable storage medium would include the following: a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), a Static Random Access Memory (SRAM), a portable compact disc read-only memory (CD-ROM), a Digital Versatile Disc (DVD), a memory stick, a floppy disk, a mechanical coding device, such as punch cards or in-groove projection structures having instructions stored thereon, and any suitable combination of the foregoing.
Having described embodiments of the present invention, the foregoing description is intended to be exemplary, not exhaustive, and not limited to the embodiments disclosed. Many modifications and variations will be apparent to those of ordinary skill in the art without departing from the scope and spirit of the described embodiments. The terminology used herein is chosen in order to best explain the principles of the embodiments, the practical application, or improvements made to the technology in the marketplace, or to enable others of ordinary skill in the art to understand the embodiments disclosed herein.
Claims (11)
1. A method for constructing a finger vein recognition system, comprising:
A1, constructing an initial finger vein recognition system comprising a preprocessing module, a feature extraction module pre-trained on a non-finger-vein data set, and a domain-division algorithm module;
A2, acquiring finger vein gray-scale maps corresponding to a plurality of fingers, and training the preprocessing module to preprocess the finger vein gray-scale maps so as to extract the finger vein venation maps of the fingers;
A3, training the feature extraction module by transfer learning to extract the finger vein feature vector of a finger from the finger vein venation map, wherein the training data take the form of triplets generated from the finger vein venation maps;
A4, training the domain-division algorithm module to establish different recognition regions for different fingers according to the finger vein feature vectors, and to analyze the similarity between a target finger and the finger corresponding to each recognition region according to the finger vein feature vectors.
2. The method for constructing a finger vein recognition system according to claim 1, wherein the step A2 comprises:
A21, randomly selecting a part of the finger vein gray-scale maps corresponding to the plurality of fingers for labeling to obtain label files in json format, converting the label files into finger vein venation maps by scripts, and forming venation extraction training data by using the finger vein venation maps as labels of the corresponding finger vein gray-scale maps, wherein the finger vein venation maps are binary images or gray images representing the vein venation;
A22, training the preprocessing module on the venation extraction training data to extract the finger vein venation map from the finger vein gray-scale map;
A23, extracting the corresponding finger vein venation maps from the finger vein gray-scale maps of all fingers using the trained preprocessing module;
and A24, randomly dividing the extracted finger vein venation maps into a training set, a verification set and a test set for the feature extraction module according to a preset proportion.
3. The method for constructing a finger vein recognition system according to claim 1 or 2, wherein the step A3 comprises:
A31, generating triplets in an online sample generation manner based on the extracted finger vein venation maps, wherein each triplet comprises an anchor sample, a positive sample and a negative sample, the positive sample and the anchor sample belonging to the same finger and the negative sample and the anchor sample belonging to different fingers;
and A32, training the feature extraction module under the guidance of a triplet loss function to output the finger vein feature vectors of the anchor sample, the positive sample and the negative sample from the finger vein venation maps in the triplet, so that the distance between the anchor sample and the positive sample computed from the output finger vein feature vectors is smaller than the distance between the anchor sample and the negative sample.
4. The method for constructing a finger vein recognition system according to claim 3, wherein the step A31 comprises:
randomly selecting the anchor sample, the positive sample and the negative sample of each triplet during the first round of training;
and during subsequent training, randomly selecting the anchor sample and the positive sample of each triplet, and, for any triplet, calculating the similarity between the anchor sample and each candidate negative sample based on the latest finger vein feature vectors, taking the candidate with the highest similarity as the negative sample of the triplet.
5. The method for constructing a finger vein recognition system according to any one of claims 1 to 4, wherein the dimension of the finger vein feature vector extracted by the feature extraction module ranges from 64 to 512.
6. The method for constructing a finger vein recognition system according to any one of claims 1 to 4, wherein the step A4 comprises:
A41, randomly selecting a preset number of finger vein gray-scale maps of each finger of the enrolled persons to construct the domain-division algorithm model, and using the remaining finger vein gray-scale maps of each finger of the enrolled persons to test the false rejection rate of the constructed domain-division algorithm model;
A42, acquiring the preset number of finger vein feature vectors corresponding to the preset number of finger vein gray-scale maps of each finger;
A43, averaging the preset number of finger vein feature vectors dimension by dimension to obtain an average vector, and taking the average vector as the region center point of the recognition region of the corresponding finger;
A44, computing the Euclidean distance between the region center point and each of the preset number of finger vein feature vectors, taking the median of all the Euclidean distances as the region radius of the recognition region of the finger, and multiplying the region radius by an adjusting coefficient to obtain the actual radius;
A45, constructing the domain-division algorithm model according to the region center point, the region radius and the adjusting coefficient of each finger, analyzing, based on the finger vein feature vectors, the similarity between each target finger corresponding to the remaining finger vein gray-scale maps of each finger of the enrolled persons and the finger corresponding to each recognition region, the target finger passing verification if the similarities match;
A46, testing the false rejection rate of the domain-division algorithm model with the remaining finger vein gray-scale maps of each finger of the enrolled persons, testing the false acceptance rate with finger vein gray-scale maps of fingers of persons who have not been enrolled, and adjusting the adjusting coefficient according to the false rejection rate and the false acceptance rate to meet usage requirements.
7. A finger vein recognition system constructed using the method of any one of claims 1 to 6, comprising:
a preprocessing module for preprocessing the finger vein gray-scale map of a target finger to extract the finger vein venation map of the finger;
a feature extraction module for extracting the finger vein feature vector of the target finger from the finger vein venation map;
and a domain-division algorithm module for analyzing, based on the finger vein feature vector, the similarity between the target finger and the finger corresponding to each recognition region, the target finger passing verification if the similarities match.
8. The finger vein recognition system of claim 7, wherein, in response to a request to add a new user, the finger vein recognition system establishes the corresponding recognition region as follows:
acquiring a finger vein gray-scale map of the new user's finger, and preprocessing it with the preprocessing module to extract the finger vein venation map of the finger;
extracting the finger vein feature vector of the finger from the finger vein venation map of the new user's finger with the feature extraction module;
and establishing a corresponding recognition region for the new user's finger with the domain-division algorithm module according to the finger vein feature vector of the finger.
9. The system of claim 7 or 8, wherein the preprocessing module employs a modified U-net network structure in which the downsampling part of the original U-net network structure is replaced by a ResNet convolutional neural network.
10. A computer-readable storage medium having stored thereon a computer program which, when executed by a processor, performs the steps of the method of any one of claims 1 to 6.
11. An electronic device, comprising:
one or more processors; and
a memory for storing one or more executable instructions;
the one or more processors are configured to implement the steps of the method of any of claims 1-6 via execution of the one or more executable instructions.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202011508436.3A CN112560710B (en) | 2020-12-18 | 2020-12-18 | Method for constructing finger vein recognition system and finger vein recognition system |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202011508436.3A CN112560710B (en) | 2020-12-18 | 2020-12-18 | Method for constructing finger vein recognition system and finger vein recognition system |
Publications (2)
Publication Number | Publication Date |
---|---|
CN112560710A true CN112560710A (en) | 2021-03-26 |
CN112560710B CN112560710B (en) | 2024-03-01 |
Family
ID=75031787
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202011508436.3A Active CN112560710B (en) | 2020-12-18 | 2020-12-18 | Method for constructing finger vein recognition system and finger vein recognition system |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN112560710B (en) |
Patent Citations (12)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
WO2017059591A1 (en) * | 2015-10-10 | 2017-04-13 | 厦门中控生物识别信息技术有限公司 | Finger vein identification method and device |
CN106971174A (en) * | 2017-04-24 | 2017-07-21 | 华南理工大学 | A kind of CNN models, CNN training methods and the vein identification method based on CNN |
CN107392114A (en) * | 2017-06-29 | 2017-11-24 | 广州智慧城市发展研究院 | A kind of finger vein identification method and system based on neural network model |
CN107358185A (en) * | 2017-07-03 | 2017-11-17 | 上海奥宜电子科技有限公司 | Palm print and palm vein image-recognizing method and device based on topological analysis |
WO2020083407A1 (en) * | 2018-10-23 | 2020-04-30 | 华南理工大学 | Three-dimensional finger vein feature extraction method and matching method therefor |
KR102138660B1 (en) * | 2019-03-18 | 2020-07-28 | 이승진 | Combined authentication system using fingerprint and branched veins |
CN110008902A (en) * | 2019-04-04 | 2019-07-12 | 山东财经大学 | A kind of finger vein identification method and system merging essential characteristic and deformation characteristics |
CN110263659A (en) * | 2019-05-27 | 2019-09-20 | 南京航空航天大学 | A kind of finger vein identification method and system based on triple loss and lightweight network |
CN110532851A (en) * | 2019-07-04 | 2019-12-03 | 珠海格力电器股份有限公司 | Finger vein identification method and device, computer equipment and storage medium |
CN110390282A (en) * | 2019-07-12 | 2019-10-29 | 西安格威西联科技有限公司 | A kind of finger vein identification method and system based on the loss of cosine center |
CN110717372A (en) * | 2019-08-13 | 2020-01-21 | 平安科技(深圳)有限公司 | Identity verification method and device based on finger vein recognition |
CN111950406A (en) * | 2020-07-28 | 2020-11-17 | 深圳职业技术学院 | Finger vein identification method, device and storage medium |
Non-Patent Citations (1)
Title |
---|
陶志勇; 冯媛; 林森: "Research on finger vein and finger knuckle print score-level fusion recognition based on transfer learning" (基于迁移学习的指静脉与指关节纹分数级融合的识别研究), 计算机应用与软件 (Computer Applications and Software), no. 12 *
Cited By (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN113076927A (en) * | 2021-04-25 | 2021-07-06 | 华南理工大学 | Finger vein identification method and system based on multi-source domain migration |
CN113076927B (en) * | 2021-04-25 | 2023-02-14 | 华南理工大学 | Finger vein identification method and system based on multi-source domain migration |
CN113435249A (en) * | 2021-05-18 | 2021-09-24 | 中国地质大学(武汉) | Densenet-based convolutional neural network finger vein identification method |
CN117312976A (en) * | 2023-10-12 | 2023-12-29 | 国家电网有限公司华东分部 | Internet of things equipment fingerprint identification system and method based on small sample learning |
Also Published As
Publication number | Publication date |
---|---|
CN112560710B (en) | 2024-03-01 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
Yuan et al. | Deep residual network with adaptive learning framework for fingerprint liveness detection | |
US11347973B2 (en) | Generative model training and image generation apparatus and method | |
CN106599854B (en) | Automatic facial expression recognition method based on multi-feature fusion | |
Kadam et al. | Detection and localization of multiple image splicing using MobileNet V1 | |
CN111444881A (en) | Fake face video detection method and device | |
CN112560710B (en) | Method for constructing finger vein recognition system and finger vein recognition system | |
CN113989890A (en) | Face expression recognition method based on multi-channel fusion and lightweight neural network | |
WO2020254857A1 (en) | Fast and robust friction ridge impression minutiae extraction using feed-forward convolutional neural network | |
WO2020190480A1 (en) | Classifying an input data set within a data category using multiple data recognition tools | |
CN109145704B (en) | Face portrait recognition method based on face attributes | |
CN112329662B (en) | Multi-view saliency estimation method based on unsupervised learning | |
CN111680755A (en) | Medical image recognition model construction method, medical image recognition device, medical image recognition medium and medical image recognition terminal | |
Ma et al. | Retinal vessel segmentation by deep residual learning with wide activation | |
CN112818774A (en) | Living body detection method and device | |
CN112818915A (en) | Depth counterfeit video detection method and system based on 3DMM soft biological characteristics | |
Sujana et al. | An effective CNN based feature extraction approach for iris recognition system | |
Diarra et al. | Study of deep learning methods for fingerprint recognition | |
Depuru et al. | Hybrid CNNLBP using facial emotion recognition based on deep learning approach | |
Toliupa et al. | Procedure for adapting a neural network to eye iris recognition | |
Omarov et al. | Machine learning based pattern recognition and classification framework development | |
CN112926574A (en) | Image recognition method, image recognition device and image recognition system | |
Tunc et al. | Age group and gender classification using convolutional neural networks with a fuzzy logic-based filter method for noise reduction | |
Sharma et al. | Solving image processing critical problems using machine learning | |
CN114565964B (en) | Emotion recognition model generation method, emotion recognition device, emotion recognition medium and emotion recognition equipment | |
Huang | Multimodal biometrics fusion algorithm using deep reinforcement learning |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | |
SE01 | Entry into force of request for substantive examination | |
GR01 | Patent grant | |