CN106991368A - Finger vein verification identity authentication method based on deep convolutional neural network - Google Patents
Finger vein verification identity authentication method based on deep convolutional neural network Info
- Publication number
- CN106991368A · CN201710089988.7A · CN201710089988A
- Authority
- CN
- China
- Prior art keywords
- finger
- vein
- neural networks
- convolutional neural
- pattern template
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data
- G06V40/10—Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/21—Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
- G06F18/214—Generating training patterns; Bootstrap methods, e.g. bagging or boosting
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/21—Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
- G06F18/217—Validation; Performance evaluation; Active pattern learning techniques
- G06F18/2193—Validation; Performance evaluation; Active pattern learning techniques based on specific statistical tests
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/08—Learning methods
- G06N3/084—Backpropagation, e.g. using gradient descent
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/20—Image preprocessing
- G06V10/32—Normalisation of the pattern dimensions
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data
- G06V40/10—Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
- G06V40/14—Vascular patterns
Abstract
The invention discloses a finger vein verification identity authentication method based on a deep convolutional neural network. The method is: 1) collect or select multiple sample images, i.e. finger vein near-infrared images, wherein each finger corresponds to at least two sample images; 2) for each near-infrared image, generate one finger vein feature template; 3) train a deep convolutional neural network with the finger vein feature templates, obtaining a deep convolutional neural network that maps a pair of finger vein feature templates of the same finger to a similarity; 4) use the trained deep convolutional neural network to compute the similarity between the finger vein feature template of the finger vein near-infrared image to be verified and the finger vein feature template of each sample image, and judge from the similarity whether both are finger vein near-infrared images of the same finger. The present invention substantially improves the accuracy and speed of recognition.
Description
Technical field
The invention belongs to the identity authentication field of information security, and in particular relates to a finger vein verification identity authentication method based on a deep convolutional neural network.
Background art
Finger vein verification has attracted wide attention as an emerging biometric verification method because of its many advantages in security and convenience. A captured finger vein is difficult to forge or impersonate, so finger vein verification offers better security than the widely used fingerprint verification. Meanwhile, finger vein images are acquired in a non-invasive, contactless manner, which is convenient for users.
In the past two decades, many well-known finger vein verification methods have been proposed. In 2004, the well-known Japanese researcher Naoto Miura et al. proposed a finger vein verification method based on a repeated line tracking feature extraction scheme; in 2007, Naoto Miura et al. further improved the finger vein feature extraction algorithm using maximum curvature points. In 2010, Huang et al. proposed a new finger vein verification method based on the wide line detector and pattern normalization. At present, the method of Huang et al. is widely used in commercial finger vein verification systems because of its high accuracy. Since 2010, biometric verification technology has developed rapidly and more finger vein verification methods have been proposed; some of them focus mainly on improving feature extraction, including spectral minutiae representations and improved spectral minutiae representations.
However, finger vein verification still faces many challenges in practice. The most important issue remains the accuracy of identity verification. In the first and second finger vein recognition competitions, the present inventors observed that when a finger vein verification algorithm was evaluated, it could obtain a relatively low error rate on a database collected in the laboratory, yet could not achieve an acceptable error rate on a dataset collected from practical applications.
Content of the invention
In view of the technical problems in the prior art, the object of the present invention is to propose a finger vein verification identity authentication method based on a deep convolutional neural network.
The present invention designs a deep convolutional neural network fitted to the training set size and image size of finger vein features, and performs finger vein feature matching on this deep convolutional neural network to obtain better results. The DCNN training strategy with hard sample mining of the present invention improves the accuracy of the method and accelerates the whole training process.
The technical scheme of the present invention is:
A finger vein verification identity authentication method based on a deep convolutional neural network, the steps of which are:
1) collect or select multiple sample images, i.e. finger vein near-infrared images, wherein each finger corresponds to at least two sample images;
2) for each near-infrared image, generate one finger vein feature template;
3) train a deep convolutional neural network with the finger vein feature templates, obtaining a deep convolutional neural network that maps a pair of finger vein feature templates of the same finger to a similarity;
4) use the trained deep convolutional neural network to compute the similarity between the finger vein feature template of the finger vein near-infrared image to be verified and the finger vein feature template of each sample image, and judge from the similarity whether both are finger vein near-infrared images of the same finger.
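The enrollment-and-verification flow of steps 1)-4) can be sketched as a small pipeline. The code below is illustrative only: `normalize`, `wide_line_detect` and `dcnn_similarity` are hypothetical stand-ins for the patent's normalization step, wide line detector and trained deep convolutional neural network, not implementations of them.

```python
# Sketch of the acquire -> enroll -> match flow of the claimed method.
# All callables passed in are hypothetical placeholders.

def enroll(nir_image, normalize, wide_line_detect):
    # Normalize the near-infrared image, then extract a vein feature template.
    return wide_line_detect(normalize(nir_image))

def verify(probe_template, enrolled_templates, dcnn_similarity, threshold=0.5):
    # Compare the probe template against each enrolled template and accept
    # the best match only if its DCNN similarity exceeds the threshold.
    best_id, best_sim = None, -1.0
    for finger_id, tpl in enrolled_templates.items():
        sim = dcnn_similarity(probe_template, tpl)
        if sim > best_sim:
            best_id, best_sim = finger_id, sim
    return best_id if best_sim > threshold else None
```

The decision threshold and the dictionary-shaped template store are assumptions made for the sketch; the patent only specifies that the pair is judged same-finger or not from the similarity.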
Further, the deep convolutional neural network consists of 26 layers of different types: layer 1 is the input layer, layer 26 is the output layer, and the output of layer h serves as the input of layer h+1. Layers 1 and 3 are convolutional layers with 64 filters each; layers 6 and 8 are convolutional layers with 128 filters each; layers 11, 13, 15, 18, 20 and 22 are convolutional layers with 256 filters each; layers 5, 10, 17 and 24 are pooling layers; layers 2, 4, 7, 9, 12, 14, 16, 19, 21 and 23 are activation layers; layers 25 and 26 are fully connected layers.
Further, the convolutional layers compute linear products and sums over the input data to obtain its convolution features; the pooling layers reduce the dimensionality of the convolution features by partitioning them into several disjoint n x n regions and representing each region by its maximum value; the activation layers use the rectified linear activation function.
Further, the two finger vein feature templates of the same finger are each resized to 128x128 pixels; they are then combined into a 2-channel image of 128x128 pixels and input to the first layer of the deep convolutional neural network.
Further, the rectified linear activation function is the ReLU function, which computes f(x) = max(0, x) for an input x: f(x) = 0 when x < 0, and f(x) = x when x > 0.
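The ReLU activation and the max pooling described above are standard operations; a minimal NumPy sketch (not the patent's own implementation) of both is:

```python
import numpy as np

def relu(x):
    # Rectified linear activation: f(x) = max(0, x), applied element-wise.
    return np.maximum(0.0, x)

def max_pool_2x2(feat):
    # Partition a 2-D feature map into disjoint 2x2 regions and keep the
    # maximum of each region, halving both spatial dimensions.
    h, w = feat.shape
    trimmed = feat[:h - h % 2, :w - w % 2]
    return trimmed.reshape(h // 2, 2, w // 2, 2).max(axis=(1, 3))
```

This matches the text's description of the pooling layers (disjoint regions represented by their maxima) for the n = 2 case the embodiment uses.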
Further, the method of training the deep convolutional neural network with the finger vein feature templates is:
11) select N pairs of finger vein feature templates as the training set; input all pixel values x of the N feature template pairs and the weight vector θ of the deep convolutional neural network into the network for training;
12) compute the loss value of the loss function J(θ) for the weight vector θ;
13) according to the loss value of J(θ) and the gradient descent algorithm, randomly select one sample (xi, yi) from the training set each time for learning, and compute the gradient corresponding to each parameter θi; then step against the gradient direction with an adjusted learning rate, updating the weight vector θ until it reaches the minimum of the loss function; here θi is the i-th dimension of the weight vector θ, xi is a feature template pair, and yi is the classification label of xi.
Further, the gradient descent algorithm is: first compute the momentum term vt (the previous momentum term vt-1 multiplied by the momentum hyperparameter γ, plus the gradient multiplied by the learning rate); then update the weight vector according to θ := θ - vt; where α is the learning rate and γ is the momentum hyperparameter.
Further, the gradient descent algorithm is: first compute the loss value J(θ) according to the loss function with an added L2 regularization term; then compute the gradient and update the weight vector θ; where n is the number of samples and m is the dimension of the weight vector θ.
Further, the method of generating a finger vein feature template is: first normalize each finger vein near-infrared image, then extract the points on the vein lines from the normalized image with the normalized wide line detection algorithm as the finger vein feature template.
Further, the method of extracting the points on the vein lines from the normalized image with the normalized wide line detection algorithm as the finger vein feature template is: let F be the finger vein near-infrared image and V the finger vein feature template; the pixel values of the background part of V are set to 0, and the pixel values of the vein region part are set to 255. Then, for each point (x0, y0) in F, compute the circular neighbourhood N(x0,y0) of the points within a distance of r pixels from (x0, y0), where (x, y) denotes a point in the neighbourhood. Then compute the pixel value of every point V(x0,y0) of V according to the corresponding formula, with t a set threshold, while marking the lines and background in V. Finally, extract all points on the lines as the finger vein feature template.
Compared with the prior art, the positive effects of the present invention are:
The present invention follows the evaluation protocol specified by the ICB-2016 finger vein recognition competition (hereinafter FVRC2016) and evaluates the proposed method experimentally. The results show that the proposed DCNN-based method achieves better performance than commercial-grade finger vein verification systems. It can thus be seen that, given a large-scale training set, the deep convolutional neural network is effective in the finger vein image matching process. The invention also demonstrates that the DCNN training strategy with hard sample mining improves the accuracy of the method and accelerates the whole training process; experiments show that the hard-sample-mining training strategy makes the whole training process nearly twice as fast.
Brief description of the drawings
Fig. 1 is a flow chart of the method of the present invention.
Fig. 2 illustrates the wide line feature extraction algorithm.
Fig. 3 shows the equal error rates (EER) obtained by the method of the present invention on the DS1, DS2 and DS3 test sets for different training set sizes.
Embodiment
The present invention is described in detail below with reference to the accompanying drawings.
The flow of the finger vein verification identity authentication method based on a deep convolutional neural network of the present invention is shown in Fig. 1; its steps are:
1) Acquisition: use a finger vein image acquisition device to obtain a near-infrared image of the finger vein.
2) Enrollment: normalize each collected finger vein original image, then apply the feature extraction of the normalized wide line detection algorithm to generate a finger vein feature template.
3) Matching: first use the method based on the deep convolutional neural network to compute the similarity between the above feature template and the feature template of the corresponding finger of an enrolled person, and judge from the similarity whether both come from the same finger.
Further, in step 1), the finger vein original images used in the present invention come from the finger vein dataset of Peking University, which has accumulated about 700,000 finger vein images of more than 300,000 fingers. These images were collected with finger vein acquisition devices in indoor environments, and 5 finger vein images were collected for each finger. The dataset is divided into two groups, called the training set and the validation set. The validation set contains 1,500 finger vein images of 300 fingers; the vein images of the remaining fingers serve as the training set. The three datasets used by the FVRC2016 competition (DS1, DS2, DS3) are the test sets used in the evaluation experiments. Table 1 lists the relevant information of these three datasets.
Table 1. Relevant information of the DS1, DS2 and DS3 test sets
Further, in step 2), the following enrollment method is used, following B. Huang, Y. Dai, R. Li, D. Tang, and W. Li, "Finger-vein authentication based on wide line detector and pattern normalization," in Pattern Recognition (ICPR), 2010 20th International Conference on, pages 1269-1272, IEEE, 2010. Its steps are:
21) Normalization: this step preprocesses the collected finger vein image. First the image size is changed: the finger vein image is resized to a quarter of its original size, i.e. from 384x512 pixels to 96x128 pixels. Then conventional processing such as sinusoidal enhancement, finger contour detection, regression-equation computation, simple denoising and geometric transformation is applied.
22) Wide line detection: this step is optional. The method uses the wide line detector to extract the points on the vein lines from the normalized image as the finger vein feature template. Its algorithm is described as follows.
As shown in Fig. 2, let F be the original finger vein image and V the finger vein feature template; here F and V are 8-bit bitmaps of 96x128 pixels. In V, pixel value 0 denotes the background part and pixel value 255 the vein region part.
Step 1: for each point (x0, y0) in F, compute by formula (1) the circular neighbourhood N(x0,y0) of the points within a distance of r pixels, where (x, y) is a point in the neighbourhood.
Step 2: compute V(x0,y0) in V by formulas (2)-(4), while marking the lines and background in V.
Here the radius is set to r = 5, the threshold to t = 1, and g = 41.
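Formulas (1)-(4) are not reproduced in this text, so the sketch below reconstructs the usual wide line detector logic from the description: for each pixel, count the circular-neighbourhood pixels whose intensity exceeds the centre by no more than the threshold t (the "mass" of the line support region), and mark the centre as a vein point when that mass falls below g. The patent's exact formulas may differ in detail.

```python
import numpy as np

def wide_line_detect(img, r=5, t=1, g=41):
    # Reconstructed wide-line-detector sketch: a dark, line-like structure
    # keeps its circular line-support-region mass small, so mass < g marks
    # a vein point (255); everything else stays background (0).
    h, w = img.shape
    out = np.zeros((h, w), dtype=np.uint8)
    ys, xs = np.mgrid[-r:r + 1, -r:r + 1]
    disk = (ys ** 2 + xs ** 2) <= r ** 2          # circular neighbourhood mask
    for y0 in range(r, h - r):
        for x0 in range(r, w - r):
            nb = img[y0 - r:y0 + r + 1, x0 - r:x0 + r + 1]
            mass = np.count_nonzero(disk & (nb <= img[y0, x0] + t))
            if mass < g:
                out[y0, x0] = 255
    return out
```

With r = 5 the disk contains 81 pixels, so g = 41 roughly asks whether more than half the neighbourhood is brighter than the centre, which fits the parameter values quoted above.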
Further, in step 3), the matching task is completed by an independently developed deep convolutional neural network architecture; the network maps a pair of feature templates to a similarity. The network consists of 26 layers of different types. Table 2 shows the details of these layers, where Conv is a convolutional layer performing linear products and sums; ReLU is an activation layer using the rectified linear activation function; Max Pool is a maximum pooling layer performing non-linear down-sampling; FC is a fully connected layer. Layer 1 is the input layer and layer 26 the output layer; each intermediate layer h is tightly connected to layer h+1, i.e. the output of layer h serves as the input of layer h+1.
Table 2. Deep convolutional neural network architecture
The two feature templates of the same finger created in the enrollment method are two 1-channel images. The present invention first resizes the two templates to 128x128 pixels each, then combines them into a 2-channel image of 128x128 pixels and sends them directly to the first hidden convolutional layer of the learning network.
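Assembling the 2-channel input can be sketched as follows. Note one simplification: the patent resizes the templates to 128x128, whereas this stand-in merely pads or crops so the sketch stays dependency-free.

```python
import numpy as np

def make_pair_input(tpl_a, tpl_b, size=128):
    # Stack two single-channel templates into the 2 x 128 x 128 array the
    # first network layer expects. Padding/cropping is a stand-in for the
    # resizing step described in the patent.
    def fit(t):
        out = np.zeros((size, size), dtype=np.float32)
        h, w = min(t.shape[0], size), min(t.shape[1], size)
        out[:h, :w] = t[:h, :w]
        return out
    return np.stack([fit(tpl_a), fit(tpl_b)], axis=0)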
Further, the learning strategy of the developed deep neural network is:
31) Optimization: the present invention trains the deep neural network that performs the matching task in a supervised manner.
311) Each layer of the network is described as follows:
For a finger vein feature template pair of 2x128x128 pixels, convolutional layers 1 and 3 define 64 filters of 3x3 pixels with a stride of 1 and a padding of 1; convolutional layers 6 and 8 define 128 such filters; convolutional layers 11, 13, 15, 18, 20 and 22 define 256 such filters. For the 64 convolution features, each 3x3 filter is convolved with the feature template to obtain 64 feature maps of 128x128 pixels each; after the convolution operations up to the 3rd convolutional layer have extracted the internal features of the input image, each feature map still has a size of 128x128 pixels. Pooling layers 5, 10, 17 and 24 reduce the dimensionality of the convolution features with the max-sampling method: the convolution features are partitioned into several disjoint n x n regions and each region is represented by its maximum value; the pooling filter size is 2x2 pixels with a stride of 2 and a padding of 0. For example, after the 64 feature maps of the 3rd convolutional layer are down-sampled by the 5th pooling layer, 64 feature maps of 64x64 pixels are obtained. The other layers work analogously.
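The spatial sizes quoted above follow the standard output-size rule for convolution and pooling; a quick check:

```python
def out_size(n, k, stride, pad):
    # Standard spatial-size rule for convolution/pooling layers:
    # output = floor((input + 2*pad - kernel) / stride) + 1
    return (n + 2 * pad - k) // stride + 1

# A 3x3 convolution with stride 1 and padding 1 preserves the 128-pixel
# side, while 2x2 max pooling with stride 2 halves it, as the text states.
conv_side = out_size(128, 3, 1, 1)
pool_side = out_size(conv_side, 2, 2, 0)
```

So conv_side is 128 and pool_side is 64, consistent with the 128x128 feature maps becoming 64x64 after the 5th pooling layer.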
Activation layers 2, 4, 7, 9, 12, 14, 16, 19, 21 and 23 use the ReLU activation function, which computes f(x) = max(0, x) for an input x: f(x) = 0 when x < 0, and f(x) = x when x > 0. This accelerates convergence.
Through the stages above, the feature maps are reduced step by step until a single-pixel feature map is classified. To prevent over-fitting of the model, layer 25 uses during training the Dropout technique of Srivastava N, Hinton G, Krizhevsky A, et al., "Dropout: a simple way to prevent neural networks from overfitting," Journal of Machine Learning Research, 2014, 15(1): 1929-1958. Layer 25 is fully connected to the last output layer, which outputs the final classification result. The closer the result is to 1, the more likely it is that the pair of feature templates comes from the same finger; the closer the result is to 0, the more likely it is that the pair comes from two different fingers.
The last layer (layer 26 in Table 2) of the deep neural network of the present invention is a logistic regression classifier, which solves a two-class classification problem with the logistic regression algorithm. In the logistic regression model, the training set consists of n labelled feature template pairs {(x1, y1), (x2, y2), ..., (xn, yn)}, where the input feature template pair is xi ∈ R^(2x128x128) and the classification label is yi ∈ {0, 1}. In the training stage, hθ(x) is the prediction function determined by the architecture of Table 2 and represents the probability that the result is 1; g is the activation function. The parameter θ is the weight vector of the network, and the vector dimension m is the number of parameters. In the initial state, the weight parameter θ is randomly initialized. The input value x consists of the pixel values of the feature templates. Given the input value x and the parameter θ, formula (5) gives the probabilities of y = 1 and y = 0, which sum to 1.
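Formula (5) is the standard logistic model. Assuming hθ(x) is a sigmoid applied to the network's scalar output (the usual construction, not stated explicitly in the patent), a sketch is:

```python
import math

def sigmoid(z):
    # Logistic activation g(z) = 1 / (1 + e^(-z)).
    return 1.0 / (1.0 + math.exp(-z))

def class_probabilities(score):
    # Formula (5): P(y=1|x;theta) = h_theta(x), P(y=0|x;theta) = 1 - h_theta(x);
    # the two probabilities sum to 1 by construction.
    p1 = sigmoid(score)
    return {1: p1, 0: 1.0 - p1}
```

A score of 0 gives the undecided case P(y=1) = 0.5; large positive scores push the same-finger probability towards 1.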
312) The learning and training process of the network:
Step 1: compute the actual output of the network from the input value x and the parameter θ. The present invention inputs all pixel values x of the N feature template pairs together with the parameter θ into the network for training; each batch trains n samples (N is the total number of finger vein image samples).
P(y=1 | x; θ) = hθ(x)
P(y=0 | x; θ) = 1 - hθ(x)    (5)
Step 2: compute the loss value of the loss function J(θ) of the training model parameter θ according to formula (6).
Step 3: according to the computed value of the loss function J(θ), apply the stochastic gradient descent algorithm (SGD) shown in formula (7): each time, randomly select n samples (xi, yi) from the training set as one batch for learning and compute the gradient corresponding to each parameter θi (θi is the i-th dimension of the weight vector θ); then step against the gradient direction with the learning rate α, updating the training model parameter θ towards the minimum (convergence) of the loss function. In the initial state, the initial learning rate α is set to 0.01 and n to 64.
Step 4: after training is finished, in the verification stage, compute the value of y according to formula (5) from an input value x with unknown label and the known weight vector θ. yi is the 0-or-1 label value corresponding to the finger vein feature template pair xi: yi = 0 means that the feature template pair xi does not come from the same finger, and yi = 1 means that the input value xi comes from the same finger.
313) For two phenomena that occur when training the deep neural network, the solutions of the present invention are as follows:
Phenomenon 1: SGD can oscillate around local extrema, which slows down convergence.
The present invention uses the gradient descent algorithm with added momentum shown in formulas (8) and (9). First the momentum is computed according to formula (8): the momentum term vt is the previous momentum term vt-1 multiplied by a hyperparameter γ, plus the gradient value multiplied by the learning rate. The momentum term gradually increases in directions where the gradient keeps pointing the same way and gradually decreases in directions where the gradient changes, yielding faster convergence and reduced oscillation. Then the parameter θ is computed according to formula (9).
θ := θ - vt    (9)
Here the initial learning rate α is 0.01 and the momentum hyperparameter γ is 0.9.
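Formulas (8)-(9), as described in words above (momentum term scaled by γ plus learning-rate-scaled gradient, then subtracted from θ), can be sketched as:

```python
def momentum_step(theta, grad, v, lr=0.01, gamma=0.9):
    # Formula (8): v_t = gamma * v_{t-1} + lr * grad
    # Formula (9): theta := theta - v_t
    # The momentum term grows while gradients keep pointing the same way,
    # damping the oscillation of plain SGD around local extrema.
    v_new = [gamma * vi + lr * gi for vi, gi in zip(v, grad)]
    theta_new = [t - vn for t, vn in zip(theta, v_new)]
    return theta_new, v_new
```

The defaults lr = 0.01 and gamma = 0.9 are the values the patent states for α and γ.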
Phenomenon 2: as model training proceeds, the number of feature templates of the model increases, so the complexity of the model increases. At first the training error on the training set gradually decreases, but once the complexity of the model reaches a certain degree, the error on the validation set increases as the model complexity increases further: over-fitting has occurred.
The present invention uses the gradient descent algorithm with an added L2 regularization term. Formula (10) is the loss function with the L2 norm added, where the L2 weight decay λ is set to 0.0005. Formula (11) is the regularized gradient descent algorithm: first the loss is computed according to formula (10), then the gradient is computed and the feature vector θ is updated according to formula (11). Here n is the number of samples in one training batch, m is the dimension of the feature vector θ, and θj denotes the j-th dimension of θ.
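Formula (11) is not reproduced in this text; under the standard L2 form J(θ) + (λ/2n)Σθj² that the description suggests, each weight decays in proportion to its own value on top of the data gradient. A sketch under that assumption:

```python
def l2_regularized_step(theta, grads, lr=0.01, lam=0.0005, n=64):
    # Assumed form of formula (11): the gradient of (lam/2n) * theta_j^2
    # adds (lam/n) * theta_j to each data gradient, penalising large
    # weights to curb over-fitting.
    return [t - lr * (g + (lam / n) * t) for t, g in zip(theta, grads)]
```

With a zero data gradient the update reduces to pure weight decay, which is the over-fitting remedy the text describes; lam = 0.0005 and n = 64 match the patent's stated λ and batch size.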
32) Hard sample mining: to improve the performance of the network performing the matching task, the present invention uses a hard sample mining strategy in the learning and training stage of the deep neural network. Each iteration generates a group of 256 randomly selected template pairs, containing 128 negative pairs and 128 positive pairs. After forward propagation through the network, their loss values are computed; only the 32 hardest-to-classify negative pairs and the 32 hardest-to-classify positive pairs are retained, and only these hardest-to-classify template pairs update the weight matrix by backpropagation.
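The hard sample mining step can be sketched as a selection over one batch's losses; "hardest" is taken here to mean highest loss, which matches the description:

```python
def mine_hard_pairs(losses, labels, keep=32):
    # From one batch of scored template pairs, keep only the `keep`
    # highest-loss positive pairs and the `keep` highest-loss negative
    # pairs; only these indices would be backpropagated.
    ranked = sorted(range(len(losses)), key=lambda i: losses[i], reverse=True)
    pos = [i for i in ranked if labels[i] == 1][:keep]
    neg = [i for i in ranked if labels[i] == 0][:keep]
    return pos + neg
```

Applied to the patent's setting, losses would hold the 256 forward-pass loss values of one iteration (128 positive, 128 negative pairs) and keep would be 32 per class.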
Results of the evaluation experiments
To make the evaluation experiments fair and reliable, the present invention uses the datasets of the FVRC2016 competition. The datasets are large, and their finger vein images were acquired under various conditions by several commercial-grade verification systems. All images are 8-bit BMP files with 256 grey levels and a resolution of 384x512 pixels. The three datasets used by the FVRC2016 competition (DS1, DS2, DS3) are the test sets used in the evaluation experiments of the present invention.
Table 3 lists the methods used in the evaluation experiments. "N+DCNN", "WLD+DCNN-HM" and "N+DCNN-HM" are methods of the present invention; "N+RTM" and "WLD+RTM" serve as the baselines for comparison.
Table 3. Description of the evaluated methods
The methods listed in the first four rows of Table 4 come from Y. Ye, L. Ni, H. Zheng, S. Liu, Y. Zhu, D. Zhang, W. Xiang, and W. Li, "FVRC2016: the 2nd finger vein recognition competition," in Biometrics (ICB), 2016 International Conference on, pages 1-6, IEEE, 2016, which describes the FVRC2016 competition. As can be seen from Table 4, the DCNN-based methods reach the top performance on all test sets compared with the other methods. The EERs of the method "N+DCNN-HM" of the present invention on the DS1 and DS2 test sets are 0.42% and 1.41% respectively, the best results compared with the other methods. The EER of the "WLD+DCNN-HM" method of the present invention on the DS3 test set is 2.13%, the best result compared with the other methods.
Table 4. Equal error rates (EER) obtained by the evaluated methods on the DS1, DS2 and DS3 datasets
The results in Table 5 show that the enrollment and matching execution times of the methods of the invention are within an acceptable range.
Table 5. Execution times of the evaluated methods
Table 6 shows the pairwise comparison results of the evaluated methods, where bold numbers mark the better result of each pair of methods:
(a)“N+RTM”vs“WLD+RTM”
(b)“WLD+RTM”vs“WLD+DCNN-HM”
(c)“N+DCNN”vs“N+DCNN-HM”
(d)“WLD+DCNN-HM”vs“N+DCNN-HM”
The comparison result of Table 6a shows that on all test sets the "WLD+RTM" method outperforms the "N+RTM" method; this indicates that in finger vein verification the wide line detector is able to extract useful information from the original image.
The comparison result of Table 6b shows that on all test sets the "WLD+DCNN-HM" method of the present invention outperforms the "WLD+RTM" method; this indicates that the DCNN-based matching method works better than the robust template matching (RTM) commonly used in commercial systems.
The comparison result of Table 6c shows that on all test sets the "N+DCNN-HM" method of the present invention outperforms the "N+DCNN" method; this indicates that the hard-sample-mining training strategy is effective. The results of Table 7 show that the hard-sample-mining training strategy makes the whole training process nearly twice as fast.
The comparison result of Table 6d shows that on all test sets the accuracy of the "WLD+DCNN-HM" method of the present invention is very close to that of the "N+DCNN-HM" method; this indicates that the DCNN-based matching method does not depend on the feature extraction of the enrollment stage.
Table 7. Training times of the DCNN-based methods
In other experiments performed by the present invention, training sets of different sizes were used to study how the size of the training data influences the accuracy on the test sets. For training, 100,000, 200,000 and 300,000 finger classes were randomly selected from the training set; the size of the validation set was the same as in the evaluation before. The "N+DCNN-HM" method of the present invention was trained on these training sets, then verified on the DS1, DS2 and DS3 test sets, and its equal error rate (EER) was computed. Fig. 3 shows the experimental results.
As can be seen from Fig. 3, as the training set grows from 100,000 through 200,000 to 300,000 finger classes, the accuracy of the models of the "N+DCNN-HM" method of the present invention tested on the DS1, DS2 and DS3 test sets also improves. When the number of classes exceeds 200,000, the accuracy on the DS1, DS2 and DS3 test sets levels off.
The evaluation results show that the accuracy of the method of the invention reaches the current highest level in the world.
Claims (10)
1. A finger vein verification identity authentication method based on a deep convolutional neural network, the steps of which are:
1) collecting or selecting a plurality of sample images, i.e. finger vein near-infrared images; wherein each finger corresponds to at least two sample
images;
2) for each near-infrared image, generating a finger vein feature template;
3) training a deep convolutional neural network with the finger vein feature templates, obtaining a deep convolutional neural
network that maps a pair of finger vein feature templates of the same finger to a similarity;
4) using the trained deep convolutional neural network to compute the similarity between the finger vein feature template of the
finger vein near-infrared image to be verified and the finger vein feature template of each sample image, and judging from the
similarity whether the two are finger vein near-infrared images of the same finger.
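At verification time, the four steps of claim 1 reduce to scoring the probe template against each enrolled template of the claimed finger and thresholding the best score. A minimal sketch of that decision logic, in which `toy_similarity` merely stands in for the trained deep convolutional neural network of step 3) and the threshold value is illustrative:

```python
def verify(probe_template, enrolled_templates, similarity, threshold=0.5):
    """Step 4): accept when the probe matches some enrolled template well enough."""
    scores = [similarity(probe_template, t) for t in enrolled_templates]
    best = max(scores)
    return best >= threshold, best

def toy_similarity(a, b):
    """Fraction of identical template pixels; the patent's method uses the
    trained DCNN of step 3) here instead."""
    return sum(x == y for x, y in zip(a, b)) / len(a)

# Two enrolled templates for the claimed finger (step 1) requires at least
# two sample images per finger); templates flattened to toy pixel lists.
enrolled = [[255, 0, 255, 0], [255, 0, 0, 0]]
accepted, score = verify([255, 0, 255, 0], enrolled, toy_similarity)
```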
2. The method of claim 1, characterised in that the deep convolutional neural network consists of 26 layers of different types;
layer 1 is the input layer, layer 26 is the output layer, and the output of layer h serves as the input of layer h+1; wherein layers 1 and 3 are
convolutional layers with 64 filters each, layers 6 and 8 are convolutional layers with 128 filters each, and layers 11, 13, 15, 18, 20 and 22 are
convolutional layers with 256 filters each; layers 5, 10, 17 and 24 are pooling layers; layers 2, 4, 7, 9, 12, 14, 16, 19, 21 and 23 are activation
layers; and layers 25 and 26 are fully connected layers.
3. The method of claim 2, characterised in that the convolutional layer computes linear products over the input data to obtain
the convolutional features of the input data; the pooling layer reduces the dimensionality of the convolutional features: the features are divided
into several disjoint n×n regions, and the reduced features are represented by the maximum feature of each region; the activation layer is a rectified
linear activation function.
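The pooling described in claim 3 is standard max pooling: the feature map is split into disjoint n×n regions and each region is replaced by its maximum. A minimal sketch with n = 2:

```python
def max_pool(feature, n=2):
    """Split the feature map into disjoint n x n regions and keep the
    maximum of each region (assumes dimensions divisible by n)."""
    h, w = len(feature), len(feature[0])
    return [[max(feature[y + dy][x + dx] for dy in range(n) for dx in range(n))
             for x in range(0, w, n)]
            for y in range(0, h, n)]

pooled = max_pool([[1, 2, 5, 6],
                   [3, 4, 7, 8],
                   [0, 1, 9, 2],
                   [1, 0, 3, 4]])
```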
4. The method of claim 2, characterised in that the two finger vein feature templates of the same finger are each resized
to 128×128 pixels and then combined into a 2-channel 128×128 image, which is input to the first layer of the deep
convolutional neural network.
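The input preparation of claim 4 can be sketched as follows; nearest-neighbour resizing is used here as a stand-in, since the claim does not specify the interpolation method:

```python
def resize_nearest(img, size=128):
    """Nearest-neighbour resize of a 2-D list image to size x size."""
    h, w = len(img), len(img[0])
    return [[img[y * h // size][x * w // size] for x in range(size)]
            for y in range(size)]

def make_network_input(template_a, template_b):
    """Claim 4: resize both same-finger templates to 128x128 and stack
    them as the 2-channel input of the network's first layer."""
    return [resize_nearest(template_a), resize_nearest(template_b)]

a = [[0] * 240 for _ in range(100)]      # stand-in templates of differing sizes
b = [[255] * 200 for _ in range(90)]
x = make_network_input(a, b)
```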
5. The method of claim 3, characterised in that the rectified linear activation function is the ReLU function, which
computes f(x) = max(0, x) for the input x: f(x) = 0 when x < 0, and f(x) = x when x > 0.
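The ReLU of claim 5, as a one-line function:

```python
def relu(x):
    """f(x) = max(0, x): 0 for x < 0, x for x >= 0."""
    return max(0.0, x)

values = [relu(v) for v in (-2.0, -0.5, 0.0, 1.5)]
```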
6. The method of claim 2, characterised in that the method of training the deep convolutional neural network with the finger
vein feature templates is:
11) selecting N pairs of finger vein feature templates as the training set; inputting all the pixel values x of the N pairs of finger vein feature
templates and the weight vector θ of the deep convolutional neural network, and training the deep convolutional neural network;
12) computing the loss value of the loss function J(θ) for the weight vector θ;
13) according to the loss value of J(θ), following the gradient descent algorithm: each time randomly selecting one sample
(x^(i), y^(i)) from the training set for learning and computing the gradient corresponding to each parameter θ_i; then adjusting the learning rate and
moving in the direction opposite to the gradient, updating the weight vector θ until the minimum of the loss function is reached; wherein θ_i is the
i-th dimension of the weight vector θ, x^(i) is a feature template pair, and y^(i) is the classification label of x^(i).
7. The method of claim 6, characterised in that the gradient descent algorithm is: first compute the momentum v_t using the
formula v_t = γ·v_{t-1} + α·∇_θ J(θ), then compute the weight vector according to the formula θ := θ − v_t; wherein α is the learning
rate and γ is the momentum hyperparameter.
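The momentum update of claim 7, v_t = γ·v_{t-1} + α·∇_θ J(θ) followed by θ := θ − v_t, can be sketched on a toy quadratic loss (the loss and hyperparameter values are illustrative):

```python
def momentum_descent(grad, theta, lr=0.1, gamma=0.9, steps=200):
    """Claim 7: v_t = gamma * v_{t-1} + lr * grad(theta); theta := theta - v_t."""
    v = 0.0
    for _ in range(steps):
        v = gamma * v + lr * grad(theta)  # momentum accumulates decayed gradients
        theta = theta - v
    return theta

# Toy loss J(theta) = (theta - 3)^2 with gradient 2*(theta - 3); minimum at 3.
theta = momentum_descent(lambda t: 2.0 * (t - 3.0), theta=0.0)
```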
8. The method of claim 6, characterised in that the gradient descent algorithm is: first compute the loss value J(θ) according to the
loss function formula J(θ) = −(1/n)·Σ_{i=1..n} [y^(i)·log h_θ(x^(i)) + (1 − y^(i))·log(1 − h_θ(x^(i)))] + (λ/(2n))·Σ_{j=1..m} θ_j²; then
compute the gradient and update the weight vector θ; wherein n is the number of samples and m is the dimension of the vector θ.
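The exact loss formula of claim 8 is not legible here; a common choice consistent with the stated symbols (n samples, an m-dimensional weight vector) is a cross-entropy loss with an L2 penalty, sketched below under that assumption (`lam` is an assumed regularisation weight, and the predictions stand in for the network's same-finger probabilities):

```python
import math

def loss(preds, labels, theta, lam=0.01):
    """Cross-entropy over the n samples plus an L2 penalty over the
    m dimensions of theta (lam is an assumed regularisation weight)."""
    n, m = len(preds), len(theta)
    ce = -sum(y * math.log(p) + (1 - y) * math.log(1 - p)
              for p, y in zip(preds, labels)) / n
    reg = lam / (2 * n) * sum(theta[j] ** 2 for j in range(m))
    return ce + reg

# Three template pairs: predicted same-finger probabilities and true labels.
J = loss(preds=[0.9, 0.2, 0.8], labels=[1, 0, 1], theta=[0.5, -1.0], lam=0.0)
```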
9. The method of claim 1, characterised in that the method of generating the finger vein feature template is: first
normalising each finger vein near-infrared image, then using the wide line detector algorithm to extract the points on the vein lines from the
normalised image as the finger vein feature template.
10. The method of claim 9, characterised in that the method of extracting the points on the vein lines from the normalised image
with the wide line detector algorithm as the finger vein feature template is: let F be the finger vein near-infrared image and V the finger vein
feature template, in which the pixel value of the background part is set to 0 and the pixel value of the vein part is set to 255; then for
each point (x0, y0) in F, examine the circular neighbourhood N_(x0,y0) of radius r pixels centred at (x0, y0), with (x, y) denoting a point of the
neighbourhood; then compute the pixel value V_(x0,y0) of each point of V, marking the lines and the background in V; then extract the points on the
lines as the finger vein feature template; wherein V_(x0,y0) is computed from the number of points (x, y) in N_(x0,y0) whose grey-level
difference F(x, y) − F(x0, y0) does not exceed the set threshold t: a point with few such similar points in its neighbourhood is marked as a line point (255), otherwise as background (0).
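The wide line detection of claim 10 can be sketched as follows, following the standard formulation: for each centre point, count the neighbourhood points that are not more than t grey levels brighter than the centre; a point on a dark, narrow vein line has few such points relative to the neighbourhood size and is marked 255. The fraction `ratio` plays the role of the detector's geometric threshold, which the claim does not spell out, so its value here is an assumption:

```python
def wide_line_detect(F, r=3, t=30, ratio=0.5):
    """Mark (x0, y0) as a vein-line point (255) when few points of its
    circular neighbourhood are at most t grey levels brighter than it;
    `ratio` is an assumed geometric threshold on the neighbourhood mass."""
    h, w = len(F), len(F[0])
    disc = [(dy, dx) for dy in range(-r, r + 1) for dx in range(-r, r + 1)
            if dy * dy + dx * dx <= r * r]      # circular neighbourhood N
    V = [[0] * w for _ in range(h)]             # background = 0
    for y0 in range(h):
        for x0 in range(w):
            pts = [(y0 + dy, x0 + dx) for dy, dx in disc
                   if 0 <= y0 + dy < h and 0 <= x0 + dx < w]
            mass = sum(F[y][x] - F[y0][x0] <= t for y, x in pts)
            if mass < ratio * len(pts):         # few similar points: on a line
                V[y0][x0] = 255                 # vein line point
    return V

# A dark 2-pixel vein stripe (value 50) on a bright background (value 200).
img = [[50 if 3 <= x <= 4 else 200 for x in range(10)] for _ in range(10)]
V = wide_line_detect(img)
```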
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201710089988.7A CN106991368A (en) | 2017-02-20 | 2017-02-20 | A kind of finger vein checking personal identification method based on depth convolutional neural networks |
Publications (1)
Publication Number | Publication Date |
---|---|
CN106991368A true CN106991368A (en) | 2017-07-28 |
Family
ID=59413802
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201710089988.7A Pending CN106991368A (en) | 2017-02-20 | 2017-02-20 | A kind of finger vein checking personal identification method based on depth convolutional neural networks |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN106991368A (en) |
Cited By (18)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN107563294A (en) * | 2017-08-03 | 2018-01-09 | 广州智慧城市发展研究院 | A kind of finger vena characteristic extracting method and system based on self study |
CN107682216A (en) * | 2017-09-01 | 2018-02-09 | 南京南瑞集团公司 | A kind of network traffics protocol recognition method based on deep learning |
CN107832684A (en) * | 2017-10-26 | 2018-03-23 | 通华科技(大连)有限公司 | A kind of intelligent vein authentication method and system with independent learning ability |
CN107977609A (en) * | 2017-11-20 | 2018-05-01 | 华南理工大学 | A kind of finger vein identity verification method based on CNN |
CN108388905A (en) * | 2018-03-21 | 2018-08-10 | 合肥工业大学 | A kind of Illuminant estimation method based on convolutional neural networks and neighbourhood context |
CN108876754A (en) * | 2018-05-31 | 2018-11-23 | 深圳市唯特视科技有限公司 | A kind of remote sensing images missing data method for reconstructing based on depth convolutional neural networks |
CN108875705A (en) * | 2018-07-12 | 2018-11-23 | 广州麦仑信息科技有限公司 | A kind of vena metacarpea feature extracting method based on Capsule |
CN109815869A (en) * | 2019-01-16 | 2019-05-28 | 浙江理工大学 | A kind of finger vein identification method based on the full convolutional network of FCN |
CN109993257A (en) * | 2019-04-10 | 2019-07-09 | 黑龙江大学 | A kind of two dimensional code based on vein pattern |
CN110046588A (en) * | 2019-04-22 | 2019-07-23 | 吉林大学 | It is a kind of with steal attack coping mechanism heterogeneous iris recognition method |
CN110147732A (en) * | 2019-04-16 | 2019-08-20 | 平安科技(深圳)有限公司 | Refer to vein identification method, device, computer equipment and storage medium |
CN110263726A (en) * | 2019-06-24 | 2019-09-20 | 山东浪潮人工智能研究院有限公司 | A kind of finger vein identification method and device based on depth correlation feature learning |
CN110543822A (en) * | 2019-07-29 | 2019-12-06 | 浙江理工大学 | finger vein identification method based on convolutional neural network and supervised discrete hash algorithm |
CN111008550A (en) * | 2019-09-06 | 2020-04-14 | 上海芯灵科技有限公司 | Identification method for finger vein authentication identity based on Multiple loss function |
CN111209850A (en) * | 2020-01-04 | 2020-05-29 | 圣点世纪科技股份有限公司 | Method for generating applicable multi-device identification finger vein image based on improved cGAN network |
CN111967351A (en) * | 2020-07-31 | 2020-11-20 | 华南理工大学 | Deep tree network-based finger vein authentication algorithm, device, medium and equipment |
US10964004B2 (en) | 2017-12-25 | 2021-03-30 | Utechzone Co., Ltd. | Automated optical inspection method using deep learning and apparatus, computer program for performing the method, computer-readable storage medium storing the computer program, and deep learning system thereof |
CN113538359A (en) * | 2021-07-12 | 2021-10-22 | 北京曙光易通技术有限公司 | System and method for finger vein image segmentation |
2017-02-20: application CN201710089988.7A filed in China; published as CN106991368A (status: Pending)
Patent Citations (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20090214474A1 (en) * | 2006-11-01 | 2009-08-27 | Barbara Brooke Jennings | Compounds, methods, and treatments for abnormal signaling pathways for prenatal and postnatal development |
CN101539995A (en) * | 2009-04-24 | 2009-09-23 | 清华大学深圳研究生院 | Imaging device based on vein pattern and backside pattern of finger and multimode identity authentication method |
Non-Patent Citations (4)
Title |
---|
LI Qiaoling et al.: "Image generation-mode classification method based on convolutional neural networks", Chinese Journal of Network and Information Security * |
LI Qing: "Research and application of linear target extraction technology", China Masters' Theses Full-text Database, Information Science and Technology * |
CHU Minnan: "Research on image classification technology based on convolutional neural networks", China Masters' Theses Full-text Database, Information Science and Technology * |
WANG Jun: "Research on key technologies of hand vein recognition", China Doctoral Dissertations Full-text Database, Information Science and Technology * |
Cited By (26)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN107563294A (en) * | 2017-08-03 | 2018-01-09 | 广州智慧城市发展研究院 | A kind of finger vena characteristic extracting method and system based on self study |
CN107682216B (en) * | 2017-09-01 | 2018-06-05 | 南京南瑞集团公司 | A kind of network traffics protocol recognition method based on deep learning |
CN107682216A (en) * | 2017-09-01 | 2018-02-09 | 南京南瑞集团公司 | A kind of network traffics protocol recognition method based on deep learning |
CN107832684A (en) * | 2017-10-26 | 2018-03-23 | 通华科技(大连)有限公司 | A kind of intelligent vein authentication method and system with independent learning ability |
CN107832684B (en) * | 2017-10-26 | 2021-08-03 | 通华科技(大连)有限公司 | Intelligent vein authentication method and system with autonomous learning capability |
CN107977609A (en) * | 2017-11-20 | 2018-05-01 | 华南理工大学 | A kind of finger vein identity verification method based on CNN |
CN107977609B (en) * | 2017-11-20 | 2021-07-20 | 华南理工大学 | Finger vein identity authentication method based on CNN |
US10964004B2 (en) | 2017-12-25 | 2021-03-30 | Utechzone Co., Ltd. | Automated optical inspection method using deep learning and apparatus, computer program for performing the method, computer-readable storage medium storing the computer program, and deep learning system thereof |
CN108388905A (en) * | 2018-03-21 | 2018-08-10 | 合肥工业大学 | A kind of Illuminant estimation method based on convolutional neural networks and neighbourhood context |
CN108876754A (en) * | 2018-05-31 | 2018-11-23 | 深圳市唯特视科技有限公司 | A kind of remote sensing images missing data method for reconstructing based on depth convolutional neural networks |
CN108875705B (en) * | 2018-07-12 | 2021-08-31 | 广州麦仑信息科技有限公司 | Capsule-based palm vein feature extraction method |
CN108875705A (en) * | 2018-07-12 | 2018-11-23 | 广州麦仑信息科技有限公司 | A kind of vena metacarpea feature extracting method based on Capsule |
CN109815869A (en) * | 2019-01-16 | 2019-05-28 | 浙江理工大学 | A kind of finger vein identification method based on the full convolutional network of FCN |
CN109993257A (en) * | 2019-04-10 | 2019-07-09 | 黑龙江大学 | A kind of two dimensional code based on vein pattern |
CN110147732A (en) * | 2019-04-16 | 2019-08-20 | 平安科技(深圳)有限公司 | Refer to vein identification method, device, computer equipment and storage medium |
CN110046588B (en) * | 2019-04-22 | 2019-11-01 | 吉林大学 | It is a kind of with steal attack coping mechanism heterogeneous iris recognition method |
CN110046588A (en) * | 2019-04-22 | 2019-07-23 | 吉林大学 | It is a kind of with steal attack coping mechanism heterogeneous iris recognition method |
CN110263726A (en) * | 2019-06-24 | 2019-09-20 | 山东浪潮人工智能研究院有限公司 | A kind of finger vein identification method and device based on depth correlation feature learning |
CN110263726B (en) * | 2019-06-24 | 2021-02-02 | 浪潮集团有限公司 | Finger vein identification method and device based on deep correlation feature learning |
CN110543822A (en) * | 2019-07-29 | 2019-12-06 | 浙江理工大学 | finger vein identification method based on convolutional neural network and supervised discrete hash algorithm |
CN111008550A (en) * | 2019-09-06 | 2020-04-14 | 上海芯灵科技有限公司 | Identification method for finger vein authentication identity based on Multiple loss function |
CN111209850A (en) * | 2020-01-04 | 2020-05-29 | 圣点世纪科技股份有限公司 | Method for generating applicable multi-device identification finger vein image based on improved cGAN network |
CN111967351A (en) * | 2020-07-31 | 2020-11-20 | 华南理工大学 | Deep tree network-based finger vein authentication algorithm, device, medium and equipment |
CN111967351B (en) * | 2020-07-31 | 2023-06-20 | 华南理工大学 | Finger vein authentication algorithm, device, medium and equipment based on depth tree network |
CN113538359A (en) * | 2021-07-12 | 2021-10-22 | 北京曙光易通技术有限公司 | System and method for finger vein image segmentation |
CN113538359B (en) * | 2021-07-12 | 2024-03-01 | 北京曙光易通技术有限公司 | System and method for finger vein image segmentation |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN106991368A (en) | A kind of finger vein checking personal identification method based on depth convolutional neural networks | |
CN106326886B (en) | Finger vein image quality appraisal procedure based on convolutional neural networks | |
CN106548159A (en) | Reticulate pattern facial image recognition method and device based on full convolutional neural networks | |
CN106372581A (en) | Method for constructing and training human face identification feature extraction network | |
CN105205453B (en) | Human eye detection and localization method based on depth self-encoding encoder | |
CN106295555A (en) | A kind of detection method of vital fingerprint image | |
CN105512680A (en) | Multi-view SAR image target recognition method based on depth neural network | |
CN107122375A (en) | The recognition methods of image subject based on characteristics of image | |
CN107220277A (en) | Image retrieval algorithm based on cartographical sketching | |
CN107945153A (en) | A kind of road surface crack detection method based on deep learning | |
CN106446942A (en) | Crop disease identification method based on incremental learning | |
CN103984936A (en) | Multi-sensor multi-feature fusion recognition method for three-dimensional dynamic target recognition | |
CN107832684A (en) | A kind of intelligent vein authentication method and system with independent learning ability | |
CN107609399A (en) | Malicious code mutation detection method based on NIN neutral nets | |
CN106127164A (en) | The pedestrian detection method with convolutional neural networks and device is detected based on significance | |
CN108665005A (en) | A method of it is improved based on CNN image recognition performances using DCGAN | |
CN104298974A (en) | Human body behavior recognition method based on depth video sequence | |
CN111611851B (en) | Model generation method, iris detection method and device | |
Srihari et al. | Role of automation in the examination of handwritten items | |
CN109344713A (en) | A kind of face identification method of attitude robust | |
CN107392114A (en) | A kind of finger vein identification method and system based on neural network model | |
CN106022287A (en) | Over-age face verification method based on deep learning and dictionary representation | |
CN101303730A (en) | Integrated system for recognizing human face based on categorizer and method thereof | |
CN107895144A (en) | A kind of finger vein image anti-counterfeiting discrimination method and device | |
CN108681689B (en) | Frame rate enhanced gait recognition method and device based on generation of confrontation network |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
RJ01 | Rejection of invention patent application after publication | ||
Application publication date: 20170728 |