CN110427832A - Neural-network-based finger vein identification method for small data sets - Google Patents
Neural-network-based finger vein identification method for small data sets
- Publication number
- CN110427832A CN110427832A CN201910615984.7A CN201910615984A CN110427832A CN 110427832 A CN110427832 A CN 110427832A CN 201910615984 A CN201910615984 A CN 201910615984A CN 110427832 A CN110427832 A CN 110427832A
- Authority
- CN
- China
- Prior art keywords
- image
- finger
- layer
- neural network
- identified
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/04—Architecture, e.g. interconnection topology
- G06N3/045—Combinations of networks
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/20—Image preprocessing
- G06V10/24—Aligning, centring, orientation detection or correction of the image
- G06V10/242—Aligning, centring, orientation detection or correction of the image by image rotation, e.g. by 90 degrees
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/20—Image preprocessing
- G06V10/25—Determination of region of interest [ROI] or a volume of interest [VOI]
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data
- G06V40/10—Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data
- G06V40/10—Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
- G06V40/14—Vascular patterns
Landscapes
- Engineering & Computer Science (AREA)
- Physics & Mathematics (AREA)
- Theoretical Computer Science (AREA)
- General Physics & Mathematics (AREA)
- Multimedia (AREA)
- General Health & Medical Sciences (AREA)
- Computing Systems (AREA)
- Computational Linguistics (AREA)
- Data Mining & Analysis (AREA)
- Evolutionary Computation (AREA)
- Biomedical Technology (AREA)
- Molecular Biology (AREA)
- Biophysics (AREA)
- General Engineering & Computer Science (AREA)
- Artificial Intelligence (AREA)
- Mathematical Physics (AREA)
- Software Systems (AREA)
- Life Sciences & Earth Sciences (AREA)
- Health & Medical Sciences (AREA)
- Human Computer Interaction (AREA)
- Image Analysis (AREA)
Abstract
The invention discloses a neural-network-based finger vein identification method for small data sets, comprising: building and preprocessing a data set; expanding the images of every class with a data augmentation algorithm; constructing a convolutional neural network model suited to the data; training the designed model; acquiring a target image with the acquisition device, preprocessing it, and feeding the preprocessed image to the model to output a finger vein feature vector; and comparing this vector with the stored finger vein feature templates to determine the class of the input finger vein. The invention combines a neural network with data augmentation to extract robust features, and compares the Euclidean distances between the feature vectors output for different images, discriminating finger vein images of different classes by distance, so that finger vein identification achieves high accuracy even with a small data set.
Description
Technical field
The present invention relates to the technical field of image recognition, and in particular to a neural-network-based finger vein identification method for small data sets.
Background art
Finger vein identification is a biometric technology that identifies a person from the feature information of the veins in the finger. Compared with biometrics such as the face and fingerprints, its notable advantage is that the recognised feature comes from the hemoglobin of the blood flowing in the finger vein vessels, which gives it concealment and liveness; the acquisition process is contactless, which keeps acquisition safe and hygienic. Driven by the demand for higher security and anti-counterfeiting in fields such as public safety, identity authentication, and the digital entertainment industry, finger vein identification has attracted growing attention from both academia and industry. Its application scenarios are very broad, for example passage through important entrances and exits, building access control systems, and sign-on authentication for systems such as banking.
There are currently many ways to implement finger vein identification. One is the machine-learning approach: features are extracted from the finger vein and a classifier is then designed for identification, but under the influence of uneven illumination, finger placement and similar factors the extracted features are not robust. Another is the general deep-learning approach: a deep neural network model is trained on large amounts of image data and then used for identification, with high accuracy and strong robustness, but it requires massive data, very long training time and powerful hardware, which specific application scenarios cannot satisfy. These limitations mean that current finger vein recognition methods based on features extracted by traditional image processing are not robust, while it is difficult to perform finger vein identification with deep learning.
In summary, a neural-network-based finger vein recognition method that works under the premise of a small data set has high practical application value.
Summary of the invention
The purpose of the present invention is to overcome the shortcomings and deficiencies of the prior art by proposing a neural-network-based finger vein identification method for small data sets. It mainly uses data augmentation to expand the data, designs a deep neural network model matched to the data set, and trains the network with a triplet loss, so that finger vein identification achieves high accuracy under the premise of a small data set.
To achieve the above object, the technical solution provided by the present invention is a neural-network-based finger vein identification method for small data sets, comprising the following steps:
1) Collect original finger vein image data with a finger vein image acquisition device, preprocess the original finger vein images and extract the region of interest (ROI), and build the original training data set from the extracted ROI images;
2) Apply data augmentation to the ROI images in the original training data set to expand the images and increase the amount of training samples, thereby constructing the training data set;
3) Construct the required neural network model according to the characteristics of the training data set;
4) Set training parameters for the designed model, train the neural network, and save the trained neural network model parameters;
5) Obtain the original finger vein image to be identified with the finger vein image acquisition device, extract its finger vein ROI image, input that ROI image into the saved neural network model, and obtain the feature vector output corresponding to the finger vein image to be identified;
6) Compare, by distance, the feature vector extracted by the neural network from the finger vein image to be identified with the feature vectors corresponding to the template images in the database. If the distance between the feature vector output by the neural network for the extracted ROI image and the feature vector corresponding to a template image is less than a set threshold, the finger vein image to be identified is judged to belong to the same class as that stored template finger vein image, i.e. the same finger of the same person; if the distance between the two vectors is greater than the set threshold, the finger vein image to be identified and the stored template finger vein image are judged to belong to different classes.
In step 1), the finger vein image data are captured in a real scene with a finger vein image acquisition device; the raw data set is therefore small and unbalanced, with a low total number of images, i.e. only 3 to 6 vein images are acquired for each finger of each person. Image processing is used to extract the horizontal edges of the finger in the original image and to correct any rotation of the finger. The upper and lower boundaries of the region of interest are then determined from the extracted horizontal edges, the left and right boundaries of the finger region of interest are determined from the boundary of the acquisition frame, and the finger ROI is extracted; a target finger vein ROI of fixed size is cropped around the centre of the region of interest, so that the obtained image meets the requirements of finger vein identification, i.e. redundant background is removed.
In step 2), data augmentation is applied to the images of every class, comprising the following steps:
2.1) Random image rotation
The finger vein region of interest is rotated by a small random angle drawn between -5 and 5 degrees;
2.2) Random colour adjustment
The brightness, contrast and saturation of the image are adjusted randomly;
2.3) Random noise injection
Gaussian noise distributed as N(0, 1) is added to the image;
2.4) Image size normalisation
The images are converted to the same size by bilinear interpolation.
In step 3), a neural network model suited to the data is constructed with convolutional neural networks, comprising the following steps:
3.1) Select and build the model according to the data
The model is mainly composed of several combination convolution residual modules, convolutional layers, pooling layers and a random deactivation (dropout) layer. The input image, after the processing of step 2), is fed into the deep residual neural network.
The first layer is a convolutional layer; the second layer is a convolutional layer; the third layer is a combination convolution residual module A composed of 7 convolutional layers; the fourth layer is a dimension reduction module A composed of 4 convolutional layers and 1 max-pooling layer; the fifth layer is a combination convolution residual module B composed of 5 convolutional layers; the sixth layer is a dimension reduction module B composed of 7 convolutional layers and 1 max-pooling layer; the seventh layer is a combination convolution residual module C composed of 5 convolutional layers; the eighth layer is a combination convolution residual module C composed of 5 convolutional layers; the ninth layer is a convolutional layer; the tenth layer is a convolutional layer; the eleventh layer is an average pooling layer; the twelfth layer is a flatten layer; the thirteenth layer is a random deactivation (dropout) layer, with the following formulas:
r^(l) ~ Bernoulli(p)
ỹ^(l) = r^(l) · y^(l)
z_i^(l+1) = w_i^(l+1) · ỹ^(l) + b_i^(l+1)
y_i^(l+1) = f(z_i^(l+1))
where r^(l) follows a Bernoulli distribution with probability p, y^(l) is the activation of layer l, ỹ^(l) is the output of layer l, w_i^(l+1) is the network weight of layer l+1, b_i^(l+1) is the bias of layer l+1, z_i^(l+1) is the hidden-layer input of layer l+1, and f(·) is the activation function;
The fourteenth layer is an L2-norm normalisation layer that outputs the final feature vector;
3.2) Set the loss function
The loss function is the triplet loss. Through the triplet loss the network learns a similarity metric between the vectors that images are mapped to in Euclidean space. The goal is to map finger vein images into a Euclidean distance space and use distance as the criterion for classification: distances within the same class are small, and distances between different classes are large. The triplet loss function is as follows:
L_triplet = Σ_{i=1}^{m} max(‖a_i − p_i‖_2 − ‖a_i − n_i‖_2 + margin, 0)
where L_triplet denotes the triplet loss, a denotes the Euclidean-space vector output by the neural network for the anchor sample, p denotes the Euclidean-space vector output for a sample of the same class as the anchor, n denotes the Euclidean-space vector output for a sample of a different class from the anchor, margin denotes the interval between positive and negative samples, and m denotes the number of samples in one training batch.
In step 4), the model is trained, comprising the following steps:
4.1) Set the training parameters
The optimiser is Adagrad, the learning rate is 0.001, the batch size is 128, the probability of the random deactivation (dropout) layer is 0.8, and the margin of the triplet loss is 1.0;
4.2) Set the training completion criterion
Training is complete when the set number of iterations has been reached, or when a validation set used to monitor training in real time reaches the set accuracy threshold;
4.3) Save the neural network model
After training, the structure and weights of the neural network are saved.
In step 5), the original finger vein image to be identified is first acquired with the finger vein image acquisition device, the original image is processed with the image-processing method of step 1) to obtain the region of interest of the input finger vein image, and the extracted finger vein ROI image is then fed into the saved, trained neural network model to extract the feature vector used for finger vein image identification.
In step 6), the feature vector x^rec = (x_1^rec, x_2^rec, ..., x_n^rec) extracted in step 5) for the finger vein image to be identified, where x_i^rec denotes the i-th component of the feature vector to be identified and there are n components in total, is compared with the feature vector x^mod = (x_1^mod, x_2^mod, ..., x_n^mod) extracted by the neural network for a template finger vein image in the template library, where x_i^mod denotes the i-th component of the template feature vector, also with n components. The comparison uses the Euclidean distance between the vectors,
d = sqrt( Σ_{i=1}^{n} (x_i^rec − x_i^mod)² ),
where d is the Euclidean distance of the two vectors, obtained as the square root of the sum of squared differences of their corresponding components. The distance threshold is set to 0.8 times the margin value of the training parameters. If the Euclidean distance is less than this threshold, the finger vein image to be identified belongs to the same class as the finger vein image corresponding to the template feature vector; conversely, if the Euclidean distance is greater than the set threshold, the two images belong to different classes. When the feature vector of the finger vein image to be identified is similar to the feature vector of some template finger vein image in the database, the image to be identified is judged to belong to the class of that template; if the feature vector of the image to be identified differs in class from all template finger vein images in the database, the image to be identified is judged not to be contained in the database.
Compared with the prior art, the present invention has the following advantages and beneficial effects:
1. Finger vein identification is handled with neural network technology, so finger vein images can be identified accurately without hand-designed image feature extraction methods, and the accuracy is better than that of finger vein identification methods based on traditional image-processing features.
2. Several data augmentation methods are used to expand the small finger vein image data set, so that the total amount of data meets the needs of neural network training.
3. The network is constructed according to the characteristics of the data set, so that the number of model parameters matches the amount of data; several simple and practical techniques are used to improve the network, effectively alleviating problems such as overfitting and vanishing gradients.
4. The triplet loss function is used, so the network directly outputs feature vectors that map images into Euclidean space; the similarity between images is described by the distance between feature vectors, which makes it easy to judge class information against the templates, and converting images into feature vectors greatly reduces the required storage.
Detailed description of the invention
Fig. 1 is the flow chart of the method of the present invention.
Fig. 2 shows the training and usage process of the recognition network.
Fig. 3 is a schematic diagram of the random deactivation (dropout) layer.
Fig. 4a is a schematic diagram of combination convolution residual module A.
Fig. 4b is a schematic diagram of combination convolution residual module B.
Fig. 4c is a schematic diagram of combination convolution residual module C.
Fig. 4d is a schematic diagram of dimension reduction module A.
Fig. 4e is a schematic diagram of dimension reduction module B.
Fig. 5 is a schematic diagram of the triplet loss.
Specific embodiment
The present invention is further explained below with reference to specific embodiments.
As shown in Fig. 1 and Fig. 2, this embodiment provides a neural-network-based finger vein identification method for small data sets, comprising the following steps:
1) Original finger vein image data are collected with a finger vein image acquisition device; the original finger vein images are preprocessed and the region of interest (ROI) is extracted, and the extracted ROI images are used to build the original training data set.
The finger vein image set captured in a real scene is used as the raw data set. Its characteristic is that the total number of images is small and the images are unbalanced across classes. The open-source image processing library OpenCV is used to detect and align the finger vein in the original images and to crop the target finger vein region, so that the obtained images meet the basic requirements of finger vein identification, i.e. redundant background is removed (a preprocessing sketch follows).
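A minimal sketch of this preprocessing, assuming OpenCV and NumPy, is given below; the function name extract_roi, the edge threshold of 40 and the ROI size are illustrative assumptions, not values stated in the patent.

```python
import cv2
import numpy as np

def extract_roi(gray, roi_height=80, roi_width=200):
    """Rotation-correct a single-channel finger image and crop a fixed-size ROI."""
    h, w = gray.shape

    def horizontal_edges(img):
        # Horizontal finger edges respond strongly to a vertical intensity gradient.
        grad = cv2.convertScaleAbs(cv2.Sobel(img, cv2.CV_64F, 0, 1, ksize=3))
        _, mask = cv2.threshold(grad, 40, 255, cv2.THRESH_BINARY)
        return mask

    # 1) Rotation correction: fit a line through the edge pixels and level the finger.
    ys, xs = np.nonzero(horizontal_edges(gray))
    if len(xs) > 1:
        angle = np.degrees(np.arctan(np.polyfit(xs, ys, 1)[0]))
        rot = cv2.getRotationMatrix2D((w / 2, h / 2), angle, 1.0)
        gray = cv2.warpAffine(gray, rot, (w, h))

    # 2) Upper/lower ROI boundaries from the horizontal edges; left/right from the frame.
    rows = np.where(horizontal_edges(gray).sum(axis=1) > 0)[0]
    top, bottom = (rows[0], rows[-1]) if len(rows) else (0, h - 1)

    # 3) Fixed-size crop centred on the ROI, removing redundant background
    #    (assumes the captured image is larger than the ROI).
    cy, cx = (top + bottom) // 2, w // 2
    y0 = int(np.clip(cy - roi_height // 2, 0, h - roi_height))
    x0 = int(np.clip(cx - roi_width // 2, 0, w - roi_width))
    return gray[y0:y0 + roi_height, x0:x0 + roi_width]
```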
2) Data augmentation is applied to the ROI images in the original training data set to expand the images and increase the amount of training samples, constructing the training data set.
Data augmentation is applied to the images of every class, comprising the following steps (see the sketch after this list):
2.1) Random image cropping
Part of the background information is removed at random while retaining enough finger vein information.
2.2) Random colour adjustment
The brightness, contrast, saturation and hue of the image are adjusted randomly.
2.3) Random noise injection
Gaussian noise distributed as N(0, 1) is added to the image.
2.4) Image size normalisation
The images are converted to the same size by bilinear interpolation.
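A minimal augmentation pipeline covering these operations (plus the ±5 degree random rotation listed in the Summary) might look as follows; torchvision is an assumed choice, and the jitter strengths, noise scale and target size are illustrative rather than values from the patent.

```python
import torch
from torchvision import transforms

def add_gaussian_noise(img):
    """Add scaled-down N(0, 1) noise to a [0, 1] image tensor (the 0.05 scale is an assumption)."""
    return (img + 0.05 * torch.randn_like(img)).clamp(0.0, 1.0)

augment = transforms.Compose([
    transforms.RandomRotation(degrees=5),          # random rotation in [-5, 5] degrees
    transforms.ColorJitter(brightness=0.2,         # random brightness/contrast/saturation;
                           contrast=0.2,           # on greyscale images the saturation
                           saturation=0.2),        # jitter is effectively a no-op
    transforms.Resize((79, 199),                   # bilinear resize; (height, width) assumed
                      interpolation=transforms.InterpolationMode.BILINEAR),
    transforms.ToTensor(),                         # PIL image -> float tensor in [0, 1]
    transforms.Lambda(add_gaussian_noise),         # random noise injection
])
# Random background cropping (step 2.1) can be added in the same way, e.g. with RandomCrop.
```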
3) According to the characteristics of the training data set, a neural network model suited to the data is constructed with convolutional neural networks, comprising the following steps:
3.1) Select and build the model according to the data.
The model is mainly composed of a random deactivation (dropout) layer (see Fig. 3), several combination convolution residual modules, convolutional layers and down-sampling layers.
The input image is 199×79×1 and, after the processing of step 2), is fed into the deep residual neural network. Unless stated otherwise, every convolutional layer below uses the rectified linear unit (ReLU) activation.
The first layer is a convolutional layer with a (3, 3) kernel, stride 2 and 32 filters, without padding; its output is 99×39×32.
The second layer is a convolutional layer with a (3, 3) kernel, stride 1 and 64 filters, without padding; its output is 97×37×64.
The third layer is combination convolution residual module A, as shown in Fig. 4a. The module first splits the input into three branches of convolutions: 1. a single convolutional layer with a (1, 1) kernel, stride 1 and 32 filters, padded so that input and output sizes match, output 97×37×32; 2. two convolutional layers, the first with a (1, 1) kernel, stride 1 and 32 filters, padded, output 97×37×32, and the second with a (3, 3) kernel, stride 1 and 32 filters, padded, output 97×37×32; 3. three convolutional layers, the first with a (1, 1) kernel, stride 1 and 32 filters, padded, output 97×37×32, the second with a (3, 3) kernel, stride 1 and 32 filters, padded, output 97×37×32, and the third with a (3, 3) kernel, stride 1 and 32 filters, padded, output 97×37×32. The outputs of the three branches are stacked along the channel axis, giving 97×37×96, and then fed into a convolutional layer with a (1, 1) kernel, stride 1 and as many filters as the input channels (64), padded, with output 97×37×64; this output feature map uses no activation function and forms the intermediate output. A residual connection is then made: the intermediate output is scaled by 0.17 and added to the input of combination convolution residual module A. If x is the module input, I_A(x) the result of the branch convolutions and IR_A(x) the output of combination convolution residual module A, then IR_A(x) = I_A(x) · 0.17 + x, which is then passed through the ReLU activation; the output is 97×37×64. A sketch of this block is given below.
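The following PyTorch rendering is an assumption (the patent does not name a framework); the helper conv() and the class name CombConvResidualA are also assumptions, and batch normalisation is omitted because the patent does not mention it.

```python
import torch
import torch.nn as nn

def conv(in_ch, out_ch, kernel, stride=1, padding=0, relu=True):
    """Plain convolution, optionally followed by ReLU (the default activation in this design)."""
    layers = [nn.Conv2d(in_ch, out_ch, kernel, stride=stride, padding=padding)]
    if relu:
        layers.append(nn.ReLU(inplace=True))
    return nn.Sequential(*layers)

class CombConvResidualA(nn.Module):
    """Three parallel branches -> channel concat -> 1x1 projection -> scaled residual add -> ReLU."""
    def __init__(self, channels=64, scale=0.17):
        super().__init__()
        self.scale = scale
        self.branch1 = conv(channels, 32, 1)                       # 1x1
        self.branch2 = nn.Sequential(conv(channels, 32, 1),
                                     conv(32, 32, 3, padding=1))   # 1x1 -> 3x3
        self.branch3 = nn.Sequential(conv(channels, 32, 1),
                                     conv(32, 32, 3, padding=1),
                                     conv(32, 32, 3, padding=1))   # 1x1 -> 3x3 -> 3x3
        # 1x1 projection back to the input channel count, no activation (linear output).
        self.project = conv(96, channels, 1, relu=False)
        self.relu = nn.ReLU(inplace=True)

    def forward(self, x):
        mixed = torch.cat([self.branch1(x), self.branch2(x), self.branch3(x)], dim=1)
        # IR_A(x) = I_A(x) * 0.17 + x, followed by ReLU
        return self.relu(x + self.scale * self.project(mixed))
```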
The fourth layer is dimension reduction module A, as shown in Fig. 4d. The input is split into three branches, two of convolutions and one of pooling: 1. a single convolutional layer with a (3, 3) kernel, stride 2 and 96 filters, without padding, output 48×18×96; 2. three convolutional layers, the first with a (1, 1) kernel, stride 1 and 48 filters, padded, output 97×37×48, the second with a (3, 3) kernel, stride 1 and 48 filters, padded, output 97×37×48, and the third with a (3, 3) kernel, stride 2 and 64 filters, without padding, output 48×18×64; 3. a max-pooling layer with pool size (3, 3) and stride 2, without padding, output 48×18×64. The outputs of the three branches are stacked along the channel axis, giving 48×18×224 (see the sketch below).
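Dimension reduction module A can be sketched in the same style, reusing the conv() helper above; again, the PyTorch rendering is an assumption.

```python
class ReductionA(nn.Module):
    """Two convolutional branches and one max-pooling branch, concatenated along the channel axis."""
    def __init__(self, in_channels=64):
        super().__init__()
        self.branch1 = conv(in_channels, 96, 3, stride=2)            # 3x3, stride 2, no padding
        self.branch2 = nn.Sequential(conv(in_channels, 48, 1),
                                     conv(48, 48, 3, padding=1),
                                     conv(48, 64, 3, stride=2))       # 1x1 -> 3x3 -> 3x3 / stride 2
        self.branch3 = nn.MaxPool2d(kernel_size=3, stride=2)          # 3x3 max pool, stride 2

    def forward(self, x):
        # 96 + 64 + 64 = 224 output channels at roughly half the spatial resolution.
        return torch.cat([self.branch1(x), self.branch2(x), self.branch3(x)], dim=1)
```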
The fifth layer is combination convolution residual module B, as shown in Fig. 4b. Its structure is roughly the same as that of combination convolution residual module A of the third layer; the main differences are in the branch convolutions and the scaling factor of the residual connection. The input of combination convolution module B is split into two branches of convolutions: 1. a single convolutional layer with a (1, 1) kernel, stride 1 and 128 filters, padded, output 48×18×128; 2. three convolutional layers, the first with a (1, 1) kernel, stride 1 and 128 filters, padded, output 48×18×128, the second with a (1, 7) kernel, stride 1 and 128 filters, padded, output 48×18×128, and the third with a (7, 1) kernel, stride 1 and 128 filters, padded, output 48×18×128. The outputs of the two branches are stacked along the channel axis, giving 48×18×256, and then fed into a convolutional layer with a (1, 1) kernel, stride 1 and as many filters as the input channels (224), padded, with output 48×18×224; this output feature map uses no activation function and forms the intermediate output. A residual connection is then made in the same way as in the third layer, except that the scaling factor is 0.1. The output size is 48×18×224.
The sixth layer is dimension reduction module B, as shown in Fig. 4e. Its structure is roughly the same as that of the dimension reduction module of the fourth layer, except that the input is split into four branches of convolutions and pooling: 1. two convolutional layers, the first with a (1, 1) kernel, stride 1 and 64 filters, padded, output 48×18×64, and the second with a (3, 3) kernel, stride 2 and 96 filters, without padding, output 23×8×96; 2. two convolutional layers, the first with a (1, 1) kernel, stride 1 and 64 filters, padded, output 48×18×64, and the second with a (3, 3) kernel, stride 2 and 64 filters, without padding, output 23×8×64; 3. three convolutional layers, the first with a (1, 1) kernel, stride 1 and 64 filters, padded, output 48×18×64, the second with a (3, 3) kernel, stride 1 and 64 filters, padded, output 48×18×64, and the third with a (3, 3) kernel, stride 2 and 64 filters, without padding, output 23×8×64; 4. a max-pooling layer with pool size (3, 3) and stride 2, without padding, output 23×8×224. The outputs of the four branches are stacked along the channel axis, giving 23×8×448.
The seventh layer is combination convolution residual module C, as shown in Fig. 4c. Its structure is roughly the same as that of combination convolution residual module A of the third layer; the main differences are in the branch convolutions and the scaling factor of the residual connection. The input of combination convolution residual module C is first split into two branches of convolutions: 1. a single convolutional layer with a (1, 1) kernel, stride 1 and 48 filters, padded, output 23×8×48; 2. three convolutional layers, the first with a (1, 1) kernel, stride 1 and 48 filters, padded, output 23×8×48, the second with a (1, 3) kernel, stride 1 and 48 filters, padded, output 23×8×48, and the third with a (3, 1) kernel, stride 1 and 48 filters, padded, output 23×8×48. The outputs of the two branches are stacked along the channel axis, giving 23×8×96, and then fed into a convolutional layer with a (1, 1) kernel, stride 1 and as many filters as the input channels (448), padded, with output 23×8×448; this output feature map uses no activation function and forms the intermediate output. A residual connection is then made in the same way as in the third layer, except that the scaling factor is 0.1. The output size is 23×8×448.
The eighth layer is combination convolution residual module C, with roughly the same structure as the combination convolution residual module C of the seventh layer; the only differences are that the residual connection has no scaling and that no activation function follows the residual connection. The output size is 23×8×448.
The ninth layer is a convolutional layer with a (1, 1) kernel, stride 1 and 384 filters, padded so that input and output sizes match; its output is 23×8×384.
The tenth layer is a convolutional layer with a (1, 1) kernel, stride 1 and 128 filters, padded; its output is 23×8×128.
The eleventh layer is an average pooling layer with pool size (23, 8), without padding; its output is 1×1×128.
The twelfth layer is a flatten layer that reduces the input dimension; its output is 1×128.
The thirteenth layer is a random deactivation (dropout) layer, with the following formulas:
r^(l) ~ Bernoulli(p)
ỹ^(l) = r^(l) · y^(l)
z_i^(l+1) = w_i^(l+1) · ỹ^(l) + b_i^(l+1)
y_i^(l+1) = f(z_i^(l+1))
where r^(l) follows a Bernoulli distribution with probability p, y^(l) is the activation of layer l, ỹ^(l) is the output of layer l, w_i^(l+1) is the network weight of layer l+1, b_i^(l+1) is the bias of layer l+1, z_i^(l+1) is the hidden-layer input of layer l+1, and f(·) is the activation function.
The fourteenth layer is an L2-norm normalisation layer. Its output is the feature vector normalised along the second axis, i.e. the feature vector after L2-norm normalisation, of size 1×128. The formula is:
y = x / sqrt(max(Σ_i x_i², ε)), with ε = 10^-9.
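Layers eleven to fourteen (average pooling, flattening, dropout and L2 normalisation) can be sketched together as an embedding head; the PyTorch rendering is an assumption, as is treating the 0.8 probability given in step 4.1 as the keep probability.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class EmbeddingHead(nn.Module):
    """Average pool -> flatten -> dropout -> L2-normalised 128-dimensional feature vector."""
    def __init__(self, keep_prob=0.8, eps=1e-9):
        super().__init__()
        # nn.Dropout expects the drop probability; 0.8 is read here as the keep probability (assumption).
        self.dropout = nn.Dropout(p=1.0 - keep_prob)
        self.eps = eps

    def forward(self, x):                               # x: (batch, 128, 23, 8) feature map
        x = F.adaptive_avg_pool2d(x, 1)                 # average pool over the whole (23, 8) map
        x = torch.flatten(x, start_dim=1)               # (batch, 128)
        x = self.dropout(x)
        # y = x / sqrt(max(sum_i x_i^2, eps)), with eps = 1e-9
        norm = torch.sqrt(torch.clamp((x * x).sum(dim=1, keepdim=True), min=self.eps))
        return x / norm
```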
3.2) Set the loss function
The loss function is the triplet loss. Through the triplet loss the network learns a similarity metric between the vectors that images are mapped to in Euclidean space. The goal is to map finger vein images into a Euclidean distance space and use distance as the criterion for classification: distances within the same class are small, and distances between different classes are large. As shown in Fig. 5, a hard-sample mining strategy is used, and the triplet loss function is as follows:
L_triplet = Σ_{i=1}^{m} max(‖a_i − p_i‖_2 − ‖a_i − n_i‖_2 + margin, 0)
where L_triplet denotes the triplet loss, a denotes the Euclidean-space vector output by the neural network for the anchor sample, p denotes the Euclidean-space vector output for a sample of the same class as the anchor, n denotes the Euclidean-space vector output for a sample of a different class from the anchor, margin denotes the interval between positive and negative samples, and m denotes the number of samples in one training batch.
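A sketch of this loss with batch-hard mining is given below. Batch-hard mining is one common reading of the hard-sample mining strategy mentioned above; the patent does not spell out the exact mining rule, and the PyTorch rendering is an assumption.

```python
import torch

def triplet_loss(embeddings, labels, margin=1.0):
    """embeddings: (m, 128) L2-normalised feature vectors; labels: (m,) class ids."""
    dist = torch.cdist(embeddings, embeddings, p=2)            # pairwise Euclidean distances
    same = labels.unsqueeze(0) == labels.unsqueeze(1)          # (m, m) same-class mask
    eye = torch.eye(len(labels), dtype=torch.bool, device=labels.device)

    # Hardest positive: farthest same-class sample; hardest negative: closest other-class sample.
    hardest_pos = (dist * (same & ~eye).float()).max(dim=1).values
    hardest_neg = dist.masked_fill(same, float("inf")).min(dim=1).values

    # L_triplet = sum_i max(||a_i - p_i||_2 - ||a_i - n_i||_2 + margin, 0)
    return torch.clamp(hardest_pos - hardest_neg + margin, min=0.0).sum()
```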
4) Training parameters are set for the designed model, the neural network is trained, and the trained neural network model parameters are saved; the model is trained by the following steps (a sketch of this set-up follows this step):
4.1) Set the training parameters
The optimiser is Adagrad, the learning rate is 0.001, the batch size is 128, the probability of the random deactivation (dropout) layer is 0.8, and the margin of the triplet loss is 1.0.
4.2) Set the training completion criterion
The training completion criterion is that the set number of iterations has been reached.
4.3) Save the neural network model
After training, the structure and weights of the neural network are saved.
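The set-up of step 4 might be wired up as follows; the training loop, epoch count and file name are illustrative assumptions, and triplet_loss refers to the sketch shown with step 3.2.

```python
import torch
from torch.utils.data import DataLoader

def train(model, dataset, epochs=100, device="cpu"):
    loader = DataLoader(dataset, batch_size=128, shuffle=True)        # batch size 128
    optimiser = torch.optim.Adagrad(model.parameters(), lr=0.001)     # Adagrad, learning rate 0.001
    model.to(device).train()
    for _ in range(epochs):                                           # stop criterion: iteration count
        for images, labels in loader:
            images, labels = images.to(device), labels.to(device)
            loss = triplet_loss(model(images), labels, margin=1.0)    # triplet-loss margin 1.0
            optimiser.zero_grad()
            loss.backward()
            optimiser.step()
    # Save the trained structure and weights once training completes.
    torch.save(model.state_dict(), "finger_vein_embedding.pt")
```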
5) The original finger vein image to be identified is acquired with the finger vein image acquisition device and processed with the image-processing method of step 1), i.e. the finger vein region of interest is detected from horizontal edges with the open-source image processing library OpenCV, obtaining the region of interest of the input finger vein image. The extracted finger vein ROI image is then fed into the neural network model trained in step 4) to extract the feature vector used for finger vein image identification.
6) The feature vector x^rec = (x_1^rec, x_2^rec, ..., x_n^rec) extracted in step 5) for the finger vein image to be identified, where x_i^rec denotes the i-th component of the feature vector to be identified and there are n components in total, is compared with the feature vector x^mod = (x_1^mod, x_2^mod, ..., x_n^mod) extracted by the neural network for a template finger vein image in the template library, where x_i^mod denotes the i-th component of the template feature vector, also with n components. The comparison uses the Euclidean distance between the vectors, d = sqrt( Σ_{i=1}^{n} (x_i^rec − x_i^mod)² ), where d is the Euclidean distance of the two vectors, obtained as the square root of the sum of squared differences of their corresponding components. The distance threshold is set to 0.8 times the margin value of the training parameters. If the Euclidean distance is less than this threshold, the finger vein image to be identified belongs to the same class as the finger vein image corresponding to the template feature vector; conversely, if the Euclidean distance is greater than the set threshold, the two images belong to different classes. When the feature vector of the finger vein image to be identified is similar to the feature vector of some template finger vein image in the database, the image to be identified is judged to belong to the class of that template finger vein image; if the feature vector of the image to be identified differs in class from all template finger vein images in the database, the image to be identified is judged not to be contained in the database. A matching sketch follows.
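The matching rule of this step can be sketched as follows; the dictionary-based template store and the function name identify are illustrative assumptions.

```python
import numpy as np

def identify(query_vec, templates, margin=1.0):
    """query_vec: (n,) feature vector to identify; templates: dict mapping class id -> (n,) vector."""
    threshold = 0.8 * margin                                  # distance threshold, 0.8 x training margin
    best_cls, best_dist = None, float("inf")
    for cls, tmpl_vec in templates.items():
        d = np.sqrt(np.sum((query_vec - tmpl_vec) ** 2))      # d = sqrt(sum_i (x_i_rec - x_i_mod)^2)
        if d < best_dist:
            best_cls, best_dist = cls, d
    # Same class if the closest template is within the threshold; otherwise not in the database.
    return best_cls if best_dist < threshold else None
```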
The embodiment described above is only a preferred embodiment of the invention and is not intended to limit the scope of the present invention; therefore, any changes made according to the shapes and principles of the present invention shall be included within the scope of protection of the present invention.
Claims (7)
1. A neural-network-based finger vein identification method for small data sets, characterised by comprising the following steps:
1) collecting original finger vein image data with a finger vein image acquisition device, preprocessing the original finger vein images and extracting the region of interest, and building the original training data set from the extracted region-of-interest images;
2) applying data augmentation to the region-of-interest images in the original training data set to expand the images and increase the amount of training samples, thereby constructing the training data set;
3) constructing the required neural network model according to the characteristics of the training data set;
4) setting training parameters for the designed model, training the neural network, and saving the trained neural network model parameters;
5) obtaining the original finger vein image to be identified with the finger vein image acquisition device, extracting the finger vein region-of-interest image of the original image, inputting the region-of-interest image to be identified into the saved neural network model, and obtaining the feature vector output corresponding to the finger vein image to be identified;
6) comparing, by distance, the feature vector extracted by the neural network from the finger vein image to be identified with the feature vectors corresponding to the template images in the database; if the distance between the feature vector output by the neural network for the extracted region-of-interest image and the feature vector corresponding to a template image is less than a set threshold, judging that the finger vein image to be identified and the template finger vein image stored in the database belong to the same class, i.e. the same finger of the same person; if the distance between the two vectors is greater than the set threshold, judging that the finger vein image to be identified and the template finger vein image stored in the database belong to different classes.
2. The neural-network-based finger vein identification method for small data sets according to claim 1, characterised in that, in step 1), the finger vein image data are captured in a real scene with a finger vein image acquisition device, i.e. the raw data set is a finger vein image set whose quantity is small and unbalanced, with a low total number of images, only 3 to 6 vein images being acquired for each finger of each person; image processing is used to extract the horizontal edges of the finger in the original image and to correct any rotation of the finger; the upper and lower boundaries of the region of interest are then determined from the extracted horizontal edges, the left and right boundaries of the finger region of interest are determined from the boundary of the acquisition frame, and the finger region of interest is extracted; a target finger vein region of interest of fixed size is cropped around the centre of the region of interest, so that the obtained image meets the requirements of finger vein identification, i.e. redundant background is removed.
3. The neural-network-based finger vein identification method for small data sets according to claim 1, characterised in that, in step 2), data augmentation is applied to the images of every class, comprising the following steps:
2.1) random image rotation: the finger vein region of interest is rotated by a small random angle drawn between -5 and 5 degrees;
2.2) random colour adjustment: the brightness, contrast and saturation of the image are adjusted randomly;
2.3) random noise injection: Gaussian noise distributed as N(0, 1) is added to the image;
2.4) image size normalisation: the images are converted to the same size by bilinear interpolation.
4. The neural-network-based finger vein identification method for small data sets according to claim 1, characterised in that, in step 3), a neural network model suited to the data is constructed with convolutional neural networks, comprising the following steps:
3.1) selecting and building the model according to the data
the model is mainly composed of several combination convolution residual modules, convolutional layers, pooling layers and a random deactivation (dropout) layer; the input image, after the processing of step 2), is fed into the deep residual neural network; the first layer is a convolutional layer; the second layer is a convolutional layer; the third layer is a combination convolution residual module A composed of 7 convolutional layers; the fourth layer is a dimension reduction module A composed of 4 convolutional layers and 1 max-pooling layer; the fifth layer is a combination convolution residual module B composed of 5 convolutional layers; the sixth layer is a dimension reduction module B composed of 7 convolutional layers and 1 max-pooling layer; the seventh layer is a combination convolution residual module C composed of 5 convolutional layers; the eighth layer is a combination convolution residual module C composed of 5 convolutional layers; the ninth layer is a convolutional layer; the tenth layer is a convolutional layer; the eleventh layer is an average pooling layer; the twelfth layer is a flatten layer; the thirteenth layer is a random deactivation (dropout) layer, with the following formulas:
r^(l) ~ Bernoulli(p)
ỹ^(l) = r^(l) · y^(l)
z_i^(l+1) = w_i^(l+1) · ỹ^(l) + b_i^(l+1)
y_i^(l+1) = f(z_i^(l+1))
where r^(l) follows a Bernoulli distribution with probability p, y^(l) is the activation of layer l, ỹ^(l) is the output of layer l, w_i^(l+1) is the network weight of layer l+1, b_i^(l+1) is the bias of layer l+1, z_i^(l+1) is the hidden-layer input of layer l+1, and f(·) is the activation function;
the fourteenth layer is an L2-norm normalisation layer that outputs the final feature vector;
3.2) setting the loss function
the loss function is the triplet loss; through the triplet loss the network learns a similarity metric between the vectors that images are mapped to in Euclidean space; the goal is to map finger vein images into a Euclidean distance space and use distance as the criterion for classification, with small distances within the same class and large distances between different classes; the triplet loss function is as follows:
L_triplet = Σ_{i=1}^{m} max(‖a_i − p_i‖_2 − ‖a_i − n_i‖_2 + margin, 0)
where L_triplet denotes the triplet loss, a denotes the Euclidean-space vector output by the neural network for the anchor sample, p denotes the Euclidean-space vector output for a sample of the same class as the anchor, n denotes the Euclidean-space vector output for a sample of a different class from the anchor, margin denotes the interval between positive and negative samples, and m denotes the number of samples in one training batch.
5. The neural-network-based finger vein identification method for small data sets according to claim 1, characterised in that, in step 4), the model is trained by the following steps:
4.1) setting the training parameters: the optimiser is Adagrad, the learning rate is 0.001, the batch size is 128, the probability of the random deactivation (dropout) layer is 0.8, and the margin of the triplet loss is 1.0;
4.2) setting the training completion criterion: training is complete when the set number of iterations has been reached, or when a validation set used to monitor training in real time reaches the set accuracy threshold;
4.3) saving the neural network model: after training, the structure and weights of the neural network are saved.
6. The neural-network-based finger vein identification method for small data sets according to claim 1, characterised in that, in step 5), the original finger vein image to be identified is first acquired with the finger vein image acquisition device, the original image is processed with the image-processing method of step 1) to obtain the region of interest of the input finger vein image, and the extracted finger vein region-of-interest image is then fed into the saved, trained neural network model to extract the feature vector used for finger vein image identification.
7. The neural-network-based finger vein identification method for small data sets according to claim 1, characterised in that, in step 6), the feature vector x^rec = (x_1^rec, x_2^rec, ..., x_n^rec) extracted in step 5) for the finger vein image to be identified, where x_i^rec denotes the i-th component of the feature vector to be identified and there are n components in total, is compared with the feature vector x^mod = (x_1^mod, x_2^mod, ..., x_n^mod) extracted by the neural network for a template finger vein image in the template library, where x_i^mod denotes the i-th component of the template feature vector, also with n components; the comparison uses the Euclidean distance between the vectors, d = sqrt( Σ_{i=1}^{n} (x_i^rec − x_i^mod)² ), where d is the Euclidean distance of the two vectors, obtained as the square root of the sum of squared differences of their corresponding components; the distance threshold is set to 0.8 times the margin value of the training parameters; if the Euclidean distance is less than this threshold, the finger vein image to be identified belongs to the same class as the finger vein image corresponding to the feature vector extracted from the corresponding template finger vein image; conversely, if the Euclidean distance is greater than the set threshold, the two images belong to different classes; when the feature vector of the finger vein image to be identified is similar to the feature vector of some template finger vein image in the database, the image to be identified is judged to belong to the class of that template finger vein image; if the feature vector of the image to be identified differs in class from all template finger vein images in the database, the image to be identified is judged not to be contained in the database.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201910615984.7A CN110427832A (en) | 2019-07-09 | 2019-07-09 | A kind of small data set finger vein identification method neural network based |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201910615984.7A CN110427832A (en) | 2019-07-09 | 2019-07-09 | A kind of small data set finger vein identification method neural network based |
Publications (1)
Publication Number | Publication Date |
---|---|
CN110427832A true CN110427832A (en) | 2019-11-08 |
Family
ID=68410386
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201910615984.7A Pending CN110427832A (en) | 2019-07-09 | 2019-07-09 | A kind of small data set finger vein identification method neural network based |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN110427832A (en) |
Cited By (18)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN110942001A (en) * | 2019-11-14 | 2020-03-31 | 五邑大学 | Method, device, equipment and storage medium for identifying finger vein data after expansion |
CN111178229A (en) * | 2019-12-26 | 2020-05-19 | 南京航空航天大学 | Vein imaging method and device based on deep learning |
CN111274915A (en) * | 2020-01-17 | 2020-06-12 | 华南理工大学 | Depth local aggregation descriptor extraction method and system for finger vein image |
CN111401358A (en) * | 2020-02-25 | 2020-07-10 | 华南理工大学 | Instrument dial plate correction method based on neural network |
CN111428643A (en) * | 2020-03-25 | 2020-07-17 | 智慧眼科技股份有限公司 | Finger vein image recognition method and device, computer equipment and storage medium |
CN111652285A (en) * | 2020-05-09 | 2020-09-11 | 济南浪潮高新科技投资发展有限公司 | Tea cake category identification method, equipment and medium |
CN111950461A (en) * | 2020-08-13 | 2020-11-17 | 南京邮电大学 | Finger vein identification method based on deformation detection and correction of convolutional neural network |
CN112036316A (en) * | 2020-08-31 | 2020-12-04 | 中国科学院半导体研究所 | Finger vein identification method and device, electronic equipment and readable storage medium |
CN112132137A (en) * | 2020-09-16 | 2020-12-25 | 山西大学 | FCN-SPP-Focal Net-based method for identifying correct direction of abstract picture image |
CN112434574A (en) * | 2020-11-11 | 2021-03-02 | 西安理工大学 | Knuckle print identification method under non-limited state |
CN112580590A (en) * | 2020-12-29 | 2021-03-30 | 杭州电子科技大学 | Finger vein identification method based on multi-semantic feature fusion network |
CN112733627A (en) * | 2020-12-28 | 2021-04-30 | 杭州电子科技大学 | Finger vein identification method based on fusion of local feature network and global feature network |
CN113408705A (en) * | 2021-06-30 | 2021-09-17 | 中国工商银行股份有限公司 | Neural network model training method and device for image processing |
CN113505716A (en) * | 2021-07-16 | 2021-10-15 | 重庆工商大学 | Training method of vein recognition model, and recognition method and device of vein image |
CN113643238A (en) * | 2021-07-14 | 2021-11-12 | 南京航空航天大学 | Mobile phone terminal vein imaging method based on deep learning |
CN113705519A (en) * | 2021-09-03 | 2021-11-26 | 杭州乐盯科技有限公司 | Fingerprint identification method based on neural network |
CN114863138A (en) * | 2022-07-08 | 2022-08-05 | 腾讯科技(深圳)有限公司 | Image processing method, image processing apparatus, storage medium, and device |
CN115944293A (en) * | 2023-03-15 | 2023-04-11 | 汶上县人民医院 | Neural network-based hemoglobin level prediction system for kidney dialysis |
Citations (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN106529468A (en) * | 2016-11-07 | 2017-03-22 | 重庆工商大学 | Finger vein identification method and system based on convolutional neural network |
- 2019-07-09: application CN201910615984.7A filed (CN); published as CN110427832A; status: Pending
Patent Citations (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN106529468A (en) * | 2016-11-07 | 2017-03-22 | 重庆工商大学 | Finger vein identification method and system based on convolutional neural network |
Non-Patent Citations (3)
Title |
---|
CHRISTIAN SZEGEDY ET AL.: "Inception-v4, Inception-ResNet and the Impact of Residual Connections on Learning", arXiv *
Third Research Institute of the Ministry of Public Security: "Multi-Camera Collaborative Target Detection and Tracking Technology" (《多摄像机协同关注目标检测跟踪技术》), Southeast University Press, 30 June 2017 *
Huang Zhixing: "Embedded Finger Vein Recognition System Based on Convolutional Neural Network" (基于卷积神经网络的嵌入式指静脉识别系统), China Master's Theses Full-text Database, Information Science and Technology (《中国优秀硕士学位论文全文数据库信息科技辑》) *
Cited By (29)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN110942001A (en) * | 2019-11-14 | 2020-03-31 | 五邑大学 | Method, device, equipment and storage medium for identifying finger vein data after expansion |
CN111178229A (en) * | 2019-12-26 | 2020-05-19 | 南京航空航天大学 | Vein imaging method and device based on deep learning |
CN111274915A (en) * | 2020-01-17 | 2020-06-12 | 华南理工大学 | Depth local aggregation descriptor extraction method and system for finger vein image |
CN111274915B (en) * | 2020-01-17 | 2023-04-28 | 华南理工大学 | Deep local aggregation descriptor extraction method and system for finger vein image |
CN111401358A (en) * | 2020-02-25 | 2020-07-10 | 华南理工大学 | Instrument dial plate correction method based on neural network |
CN111401358B (en) * | 2020-02-25 | 2023-05-09 | 华南理工大学 | Instrument dial correction method based on neural network |
CN111428643A (en) * | 2020-03-25 | 2020-07-17 | 智慧眼科技股份有限公司 | Finger vein image recognition method and device, computer equipment and storage medium |
CN111652285A (en) * | 2020-05-09 | 2020-09-11 | 济南浪潮高新科技投资发展有限公司 | Tea cake category identification method, equipment and medium |
CN111950461B (en) * | 2020-08-13 | 2022-07-12 | 南京邮电大学 | Finger vein identification method based on deformation detection and correction of convolutional neural network |
CN111950461A (en) * | 2020-08-13 | 2020-11-17 | 南京邮电大学 | Finger vein identification method based on deformation detection and correction of convolutional neural network |
CN112036316A (en) * | 2020-08-31 | 2020-12-04 | 中国科学院半导体研究所 | Finger vein identification method and device, electronic equipment and readable storage medium |
CN112036316B (en) * | 2020-08-31 | 2023-12-15 | 中国科学院半导体研究所 | Finger vein recognition method, device, electronic equipment and readable storage medium |
CN112132137A (en) * | 2020-09-16 | 2020-12-25 | 山西大学 | FCN-SPP-Focal Net-based method for identifying correct direction of abstract picture image |
CN112434574A (en) * | 2020-11-11 | 2021-03-02 | 西安理工大学 | Knuckle print identification method under non-limited state |
CN112434574B (en) * | 2020-11-11 | 2024-04-09 | 西安理工大学 | Knuckle pattern recognition method under unrestricted state |
CN112733627B (en) * | 2020-12-28 | 2024-02-09 | 杭州电子科技大学 | Finger vein recognition method based on fusion local and global feature network |
CN112733627A (en) * | 2020-12-28 | 2021-04-30 | 杭州电子科技大学 | Finger vein identification method based on fusion of local feature network and global feature network |
CN112580590A (en) * | 2020-12-29 | 2021-03-30 | 杭州电子科技大学 | Finger vein identification method based on multi-semantic feature fusion network |
CN112580590B (en) * | 2020-12-29 | 2024-04-05 | 杭州电子科技大学 | Finger vein recognition method based on multi-semantic feature fusion network |
CN113408705A (en) * | 2021-06-30 | 2021-09-17 | 中国工商银行股份有限公司 | Neural network model training method and device for image processing |
CN113643238A (en) * | 2021-07-14 | 2021-11-12 | 南京航空航天大学 | Mobile phone terminal vein imaging method based on deep learning |
CN113505716B (en) * | 2021-07-16 | 2022-07-01 | 重庆工商大学 | Training method of vein recognition model, and recognition method and device of vein image |
CN113505716A (en) * | 2021-07-16 | 2021-10-15 | 重庆工商大学 | Training method of vein recognition model, and recognition method and device of vein image |
CN113705519B (en) * | 2021-09-03 | 2024-05-24 | 杭州乐盯科技有限公司 | Fingerprint identification method based on neural network |
CN113705519A (en) * | 2021-09-03 | 2021-11-26 | 杭州乐盯科技有限公司 | Fingerprint identification method based on neural network |
CN114863138B (en) * | 2022-07-08 | 2022-09-06 | 腾讯科技(深圳)有限公司 | Image processing method, device, storage medium and equipment |
CN114863138A (en) * | 2022-07-08 | 2022-08-05 | 腾讯科技(深圳)有限公司 | Image processing method, image processing apparatus, storage medium, and device |
CN115944293B (en) * | 2023-03-15 | 2023-05-16 | 汶上县人民医院 | Neural network-based hemoglobin level prediction system for kidney dialysis |
CN115944293A (en) * | 2023-03-15 | 2023-04-11 | 汶上县人民医院 | Neural network-based hemoglobin level prediction system for kidney dialysis |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN110427832A (en) | A kind of small data set finger vein identification method neural network based | |
CN107886064B (en) | Face recognition scene adaptation method based on convolutional neural network | |
CN110348330B (en) | Face pose virtual view generation method based on VAE-ACGAN | |
CN104866829B (en) | A kind of across age face verification method based on feature learning | |
CN110543846B (en) | Multi-pose face image obverse method based on generation countermeasure network | |
CN106204779B (en) | Check class attendance method based on plurality of human faces data collection strategy and deep learning | |
CN110348376A (en) | A kind of pedestrian's real-time detection method neural network based | |
CN103679151B (en) | A kind of face cluster method merging LBP, Gabor characteristic | |
CN107977609A (en) | A kind of finger vein identity verification method based on CNN | |
CN106778785B (en) | Construct the method for image Feature Selection Model and the method, apparatus of image recognition | |
CN108334848A (en) | A kind of small face identification method based on generation confrontation network | |
CN108182397B (en) | Multi-pose multi-scale human face verification method | |
CN106529395B (en) | Signature image identification method based on depth confidence network and k mean cluster | |
CN108921019A (en) | A kind of gait recognition method based on GEI and TripletLoss-DenseNet | |
CN107480649A (en) | Fingerprint sweat pore extraction method based on full convolution neural network | |
CN108764041A (en) | The face identification method of facial image is blocked for lower part | |
CN109598234A (en) | Critical point detection method and apparatus | |
He et al. | Automatic magnetic resonance image prostate segmentation based on adaptive feature learning probability boosting tree initialization and CNN-ASM refinement | |
CN110390282A (en) | A kind of finger vein identification method and system based on the loss of cosine center | |
CN109948467A (en) | Method, apparatus, computer equipment and the storage medium of recognition of face | |
CN110472495B (en) | Deep learning face recognition method based on graphic reasoning global features | |
CN109977887A (en) | A kind of face identification method of anti-age interference | |
CN106709418A (en) | Face identification method based on scene photo and identification photo and identification apparatus thereof | |
CN110334566A (en) | Fingerprint extraction method inside and outside a kind of OCT based on three-dimensional full convolutional neural networks | |
CN103778430B (en) | Rapid face detection method based on combination between skin color segmentation and AdaBoost |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
RJ01 | Rejection of invention patent application after publication | ||
Application publication date: 20191108 |