CN106326886B - Finger vein image quality appraisal procedure based on convolutional neural networks - Google Patents

Finger vein image quality appraisal procedure based on convolutional neural networks

Info

Publication number
CN106326886B
CN106326886B CN201610979315.4A
Authority
CN
China
Prior art keywords
image
layer
quality
convolutional neural
neural network
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201610979315.4A
Other languages
Chinese (zh)
Other versions
CN106326886A (en)
Inventor
秦华锋
何希平
姚行艳
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Chongqing Weimai Zhilian Technology Co ltd
Original Assignee
Chongqing Technology and Business University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Chongqing Technology and Business University filed Critical Chongqing Technology and Business University
Priority to CN201610979315.4A priority Critical patent/CN106326886B/en
Publication of CN106326886A publication Critical patent/CN106326886A/en
Application granted granted Critical
Publication of CN106326886B publication Critical patent/CN106326886B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Links

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 - Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10 - Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/12 - Fingerprints or palmprints
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 - Pattern recognition
    • G06F18/20 - Analysing
    • G06F18/21 - Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F18/214 - Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06N - COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 - Computing arrangements based on biological models
    • G06N3/02 - Neural networks
    • G06N3/04 - Architecture, e.g. interconnection topology
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 - Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10 - Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/12 - Fingerprints or palmprints
    • G06V40/1365 - Matching; Classification

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • Multimedia (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Artificial Intelligence (AREA)
  • Human Computer Interaction (AREA)
  • Evolutionary Computation (AREA)
  • General Engineering & Computer Science (AREA)
  • Computing Systems (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • General Health & Medical Sciences (AREA)
  • Computational Linguistics (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Biophysics (AREA)
  • Molecular Biology (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Evolutionary Biology (AREA)
  • Biomedical Technology (AREA)
  • Health & Medical Sciences (AREA)
  • Measurement Of The Respiration, Hearing Ability, Form, And Blood Characteristics Of Living Organisms (AREA)
  • Image Analysis (AREA)
  • Collating Specific Patterns (AREA)

Abstract

The present invention provides a finger vein image quality assessment method and system based on convolutional neural networks. The method first labels the quality of finger vein grayscale images, then establishes training sample sets, and uses them to train convolutional neural network models. Finally, a grayscale image and its corresponding binary image are input into the trained models, and the output of the second fully connected layer of each of the two convolutional neural network models is taken as the depth feature vector of the input grayscale image and binary image, respectively. The two depth feature vectors are concatenated to form a joint expression vector, which is input into a support vector machine for training; a probabilistic support vector machine is then used to predict the quality of the finger vein image. The assessment method and system can greatly improve the accuracy of finger vein image quality assessment and improve the recognition performance of the authentication system.

Description

Finger vein image quality evaluation method based on convolutional neural network
Technical Field
The invention belongs to the technical field of biological feature recognition, and particularly relates to a finger vein image quality evaluation method and system based on a convolutional neural network.
Background
With the rapid development of internet technology and the increase in information security threats, how to effectively authenticate identity to protect personal and property security has become an urgent problem. Compared with traditional authentication means such as keys and passwords, biometric features based on physiology and behavior are difficult to steal, copy or lose. Accordingly, biometric authentication techniques have been widely researched and successfully applied to personal authentication. Physiology-based biometric modalities can be classified into two types: (1) external modalities such as the face, fingerprint, palmprint and iris; and (2) internal modalities such as finger veins, palm veins and dorsal hand veins. Systems based on external biological modalities are vulnerable to attack; for example, it is easy to steal and forge a fingerprint template to attack a fingerprint identification system. Unlike external modalities, internal biological modalities lie beneath the skin of the finger, making them difficult to steal and counterfeit, and they therefore offer higher security.
Finger vein authentication remains a challenging task because the acquisition of finger vein images is affected by a variety of factors, such as ambient light, ambient temperature, light scattering, changes in physiological characteristics, and user behavior. If these factors are not well controlled, the acquired images include a large number of low-quality images, which ultimately degrade the performance of the authentication system. Finger vein image quality assessment has therefore been extensively studied as an effective solution. In existing finger vein image quality assessment algorithms, researchers assume that factors such as image contrast and the number of veins are related to image quality, and then estimate quality with artificially designed descriptors such as the Radon transform, Gaussian energy models, Gabor filters and curvature detection. For example, CN101866486 discloses a method for judging the quality of a finger vein image, which computes contrast, position deviation, effective area and direction ambiguity quality scores of the finger vein image, accumulates these scores according to weights, and establishes a comprehensive quality evaluation function of the finger vein image. However, these methods still have the following disadvantages:
(1) It is difficult to prove that the manually selected attributes are necessarily related to finger vein image quality; for example, some images that appear high quality to human vision are nevertheless rejected by the authentication system.
(2) It is not possible for researchers to investigate all attributes that affect image quality.
(3) Even if these attributes are positively correlated with image quality, it is difficult to build an effective mathematical model to describe them.
Disclosure of Invention
In order to overcome the defects of the prior art, the invention provides a finger vein image quality evaluation method and system based on a convolutional neural network. Firstly, the defect that the traditional finger vein image quality evaluation method evaluates the image quality by means of intuition or priori knowledge is overcome, and the image quality can be evaluated more objectively. Second, the invention can automatically generate quality labels of images, thereby reducing the heavy work and errors caused by artificial labeling. Thirdly, the invention can automatically learn the characteristics related to the image quality from the original finger vein image, thereby avoiding the problem of artificially selecting and extracting the distinguishing characteristics.
The specific technical scheme of the invention is as follows:
a finger vein image quality evaluation method based on a convolutional neural network comprises the following steps:
s1: marking the quality of the finger vein gray level image in the database to obtain a gray level image with a quality label, namely marking a low-quality gray level image and a high-quality gray level image, obtaining vein features of the gray level image with the quality label, and coding to obtain a binary image;
s2: establishing a binary image training sample set with a quality label in the step S1;
s3: establishing a gray level image training sample set with a quality label in the step S1;
s4: extracting a convolutional neural network model of the depth features of the grayscale image; the convolutional neural network model comprises an input layer, a first convolutional layer, a first pooling layer, a second convolutional layer, a second pooling layer, a third convolutional layer, a first fully connected layer, a second fully connected layer and an output layer;
s5: extracting a convolutional neural network model of the binary image depth features; the convolutional neural network model comprises an input layer, a first convolutional layer, a first pooling layer, a second convolutional layer, a second pooling layer, a first fully connected layer, a second fully connected layer and an output layer;
s6: training of convolutional neural network models
Initializing the filters of each layer with random numbers obeying a Gaussian distribution, with the initial value of each bias set to an arbitrary constant; training the convolutional neural network using the stochastic gradient descent method; dividing the binary image training sample set established in step S2 and the grayscale image training sample set established in step S3 into different subsets and inputting the subsets in batches into the convolutional neural network models of steps S5 and S4; after all batches of images have been forward-propagated through the convolutional neural network model once, calculating the gradient and performing back-propagation to update the filter weights and biases; and searching for the optimal solution of the filters and biases through repeated iteration;
s7: after the training is finished, inputting the predicted finger vein image into the convolutional neural network model in the steps S4 and S5, and selecting the output of the second full-connection layer in the convolutional neural network model in the steps S4 and S5 as the depth feature vector of inputting a gray image and a binary image; connecting the two depth characteristic vectors to form a joint expression vector of the input prediction finger vein image;
s8: the joint expression vector formed in step S7 is input to a support vector machine for training, and the quality of the predicted finger vein image is calculated using a probabilistic support vector machine.
In a further improvement, the specific method for labeling the quality of the finger vein grayscale images in the database comprises the following steps:
s11: selection of enrollment template images
Selecting any one image of a finger, extracting and matching the vein features of pairs of finger vein images using an established recognition algorithm, and calculating the average distance between this image and the remaining images; selecting the image corresponding to the minimum average distance as the registration template image of the finger, and taking the other images as test images;
s12: image quality annotation
Calculating the distance between each test image of a finger and the registration template image of the same finger to obtain intra-class matching scores; calculating the distances between the registration template images to obtain inter-class matching scores; calculating the false acceptance rate FAR and the false rejection rate FRR from the intra-class and inter-class matching scores; and presetting a threshold such that, when the false acceptance rate FAR equals the threshold (e.g. FAR = 0.1%), each image is labeled as low quality or high quality according to whether it is falsely rejected or correctly accepted by the system.
In a further development, in the first, second or third convolutional layer, the feature image $y_n^l$ of the l-th layer is calculated according to the following formula:

$$y_n^l = \max_{1 \le m \le M_{l-1}} \left( x_m^{l-1} * k_{m,n}^l \right) + b_n^l$$

where $x_m^{l-1}$ is the m-th input feature spectrum of the l-th layer, $k_{m,n}^l$ is the convolution kernel between the m-th input and the n-th output feature spectra, $*$ is the convolution operation, $M_{l-1}$ is the number of input feature spectra, and $b_n^l$ is the bias of the n-th output spectrum.
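Purely as an illustration, the layer just described can be sketched in Python with NumPy and SciPy as follows; the feature-map counts, kernel sizes and the use of scipy.signal.correlate2d are assumptions of the sketch, not details taken from the patent.

```python
import numpy as np
from scipy.signal import correlate2d

def max_conv_layer(inputs, kernels, biases):
    """For each output map n, convolve every input map m with kernel k[m, n],
    take the element-wise maximum over the M input maps, then add the bias b[n].

    inputs  : list of M arrays (the input feature spectra x_m)
    kernels : array of shape (M, N, kh, kw)
    biases  : array of shape (N,)
    """
    M, N = kernels.shape[0], kernels.shape[1]
    outputs = []
    for n in range(N):
        responses = [correlate2d(inputs[m], kernels[m, n], mode='valid')
                     for m in range(M)]
        # element-wise maximum over corresponding positions of the M responses
        outputs.append(np.maximum.reduce(responses) + biases[n])
    return outputs

# toy usage with illustrative sizes: 3 input maps, 8 output maps, 5x5 kernels
x = [np.random.rand(32, 32) for _ in range(3)]
y = max_conv_layer(x, np.random.randn(3, 8, 5, 5), np.zeros(8))  # 8 maps, 28x28
```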
In a further refinement, a rectified linear unit is used in the first, second or third convolutional layer as the activation function, defined as follows:

$$x_n^l = \max\left(0,\; y_n^l\right)$$

where $x_n^l$ represents the n-th output spectrum of the l-th layer.
In the first pooling layer and the second pooling layer, the output feature spectra of the first and second convolutional layers are divided into non-overlapping regions, and the average of the first p maximum values in each region is selected as the representative value of that region to sample the output of the first and second convolutional layers. Let $I_k$ represent the output spectrum after convolution with the k-th convolution kernel, and let $\{t_0, t_1, \dots, t_{T-1}\}$ represent the set obtained by sorting all elements of an $s \times s$ local region of $I_k$ from large to small, where $T = s \times s$ is the number of elements. The output feature $z_k$ obtained after sampling $I_k$ is calculated according to the following formula:

$$z_k = \frac{1}{p} \sum_{i=0}^{p-1} t_i$$
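A minimal NumPy sketch of this pooling rule, assuming non-overlapping s x s regions that tile the feature map exactly; the values of s and p below are illustrative.

```python
import numpy as np

def top_p_average_pool(I, s, p):
    """Split I_k into non-overlapping s x s regions and represent each region
    by the average of its p largest elements."""
    H, W = I.shape
    assert H % s == 0 and W % s == 0, "feature map must tile into s x s regions"
    out = np.empty((H // s, W // s))
    for i in range(0, H, s):
        for j in range(0, W, s):
            region = I[i:i + s, j:j + s].ravel()
            top_p = np.sort(region)[::-1][:p]   # first p maxima of the T = s*s values
            out[i // s, j // s] = top_p.mean()
    return out

pooled = top_p_average_pool(np.random.rand(28, 28), s=2, p=2)  # -> 14 x 14 map
```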
in a further development, the filter weights w of the step ggThe update rule of (1) is:
wg+1=Δg+1+wg
where Δ represents momentum, λ is the learning rate,is wgOf the gradient of (c).
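A small sketch of one such step. The patent states only the relation $w_{g+1} = \Delta_{g+1} + w_g$, so the rule used below to update the momentum term (a 0.9 decay factor and a learning-rate-scaled gradient) is an assumed, standard choice rather than the patent's exact rule.

```python
import numpy as np

def momentum_step(w, delta, grad, lr=0.01, momentum=0.9):
    """Assumed momentum update:  delta_{g+1} = momentum * delta_g - lr * grad
       followed by the stated    w_{g+1}     = delta_{g+1} + w_g."""
    delta_next = momentum * delta - lr * grad
    return delta_next + w, delta_next

w, delta = np.zeros(4), np.zeros(4)
grad = np.array([0.5, -0.2, 0.1, 0.0])        # gradient of the loss w.r.t. w
w, delta = momentum_step(w, delta, grad)
```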
In a further improvement, the probabilistic support vector machine is trained by combining the depth feature vector v and the quality label q ∈ {0, 1}, and its output probability value p is:

$$p = \frac{1}{1 + \exp\left(\omega\,\xi(v) + \gamma\right)}$$

where $\xi(v)$ represents the output of a conventional support vector machine, and $\omega$ and $\gamma$ represent the two parameters obtained by training the probabilistic support vector machine.
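A sketch of this sigmoid mapping; the decision value and the fitted parameters below are illustrative, and the sign convention of ξ(v) is an assumption.

```python
import numpy as np

def platt_probability(xi_v, omega, gamma):
    """Map the raw SVM output xi(v) to a quality probability p through the
    sigmoid parameterized by omega and gamma (both learned during training)."""
    return 1.0 / (1.0 + np.exp(omega * xi_v + gamma))

# example: decision value +1.2 with illustrative fitted parameters
p = platt_probability(1.2, omega=-2.0, gamma=0.1)   # close to 1 => likely high quality
```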
In another aspect, the present invention provides a finger vein image quality evaluation system based on a convolutional neural network, the evaluation system includes an evaluation unit and a database in communication with the evaluation unit, the database stores a finger vein grayscale image, and the evaluation unit includes:
the quality marking module is used for marking the quality of the finger vein gray level image to obtain a gray level image with a quality label, obtaining vein characteristics of the gray level image with the quality label, and coding to obtain a binary image;
the training sample set establishing module is used for respectively establishing a binary image training sample set and a gray image training sample set for the binary image and the gray image with the quality label obtained by the quality labeling module;
the model establishing module is used for respectively establishing a convolutional neural network model for extracting the depth characteristics of the binary image and the gray level image;
the convolution training module is used for dividing the binary image training sample set and the gray image training sample set established by the training sample set establishing module into different subsets, and inputting the subsets into convolution neural network models corresponding to the extracted binary image and gray image depth features respectively in batches for training;
the connection processing module is used for acquiring depth feature vectors of the gray level image and the binary image in the trained convolutional neural network model; and is used for connecting the two depth feature vectors to form a joint expression vector;
and the calculation module is used for inputting the joint expression vector into a support vector machine for training and calculating the quality of the predicted finger vein image.
In a further improvement, the quality labeling module comprises:
the image selection and feature extraction submodule is used for selecting a gray image from a plurality of images of the same finger and extracting features to obtain a binary image;
the registration template image selection submodule is used for calculating the average distance between the image selected by the image selection and feature extraction submodule and the rest images of the same finger, selecting the image corresponding to the minimum average distance as a registration template image, and taking other images as test images;
the calculation submodule is used for calculating the distance between each test image of the same finger and the registered template image of the same finger to obtain an intra-class matching score and calculating the distance between the registered template images to obtain an inter-class matching score; calculating an error acceptance rate FAR and an error rejection rate FRR according to the intra-class matching score and the inter-class matching score;
the judgment submodule is used for judging whether the false acceptance rate FAR is equal to a preset threshold value or not, and sending a classification instruction to the classification submodule if the false acceptance rate FAR is equal to the threshold value;
the classification submodule is used for classifying the image marked with the error rejection and the image marked with the correct acceptance and sending a marking instruction to the marking submodule;
and the labeling submodule is used for performing quality labeling on the image labeled with the error rejection or the image labeled with the correct acceptance and setting a corresponding quality label.
Compared with the prior art, the finger vein image quality evaluation method and system based on the convolutional neural network can greatly improve the accuracy of finger vein image quality evaluation and improve the recognition performance of an authentication system. Compared with other finger vein image quality evaluation methods, they have the following beneficial effects:
1. the finger vein image quality evaluation method and system based on the convolutional neural network can automatically label the finger vein gray level image, so that heavy work and errors caused by manual labeling are reduced.
2. The finger vein image quality evaluation method and system based on the convolutional neural network provided by the invention firstly fuse the depth characteristics of the finger vein binary image and the gray level image to realize the quality evaluation of the finger vein image.
3. Compared with the traditional convolutional neural network model, the convolutional neural network model adopted by the invention differs in two respects: first, in every convolutional layer, the maximum value over corresponding positions of the convolved feature spectra is taken as the feature spectrum of the layer and fed into the activation function; second, in every pooling layer, the input feature image is sampled by taking the average of the first p maximum values in each local region of the feature image.
4. The method utilizes the probability support vector to fuse the depth features and predict the quality of the finger vein image, thereby effectively improving the precision of image quality evaluation.
5. The finger vein image quality evaluation method and system based on the convolutional neural network are not only suitable for quality evaluation of finger vein images, but also can be applied to quality evaluation of other biological characteristic images.
Drawings
FIG. 1 is a flowchart of a finger vein image quality evaluation method based on a convolutional neural network in embodiment 1;
FIG. 2 is a schematic diagram of a convolutional neural network model structure for extracting depth features of a grayscale image according to embodiment 4;
FIG. 3 is a schematic diagram of a convolutional neural network model structure for extracting depth features of a binary image in embodiment 4;
FIG. 4 is a block diagram showing the structure of a finger vein image quality evaluation system based on a convolutional neural network in accordance with embodiment 6;
FIG. 5 is a block diagram of a quality labeling module according to embodiment 7.
Detailed Description
Example 1
A finger vein image quality evaluation method based on a convolutional neural network, as shown in fig. 1, the method includes the following steps:
s1: marking the quality of the finger vein gray level image in the database to obtain a gray level image with a quality label, obtaining a low-quality gray level image and a high-quality gray level image, obtaining vein features of the gray level image with the quality label, and coding to obtain a binary image;
s2: establishing a binary image training sample set with a quality label obtained in the step S1;
s3: establishing a gray level image training sample set with a quality label in the step S1;
s4: extracting a convolutional neural network model of the depth features of the grayscale image; the convolutional neural network model comprises an input layer, a first convolutional layer, a first pooling layer, a second convolutional layer, a second pooling layer, a third convolutional layer, a first fully connected layer, a second fully connected layer and an output layer;
s5: extracting a convolutional neural network model of the binary image depth features; the convolutional neural network model comprises an input layer, a first convolutional layer, a first pooling layer, a second convolutional layer, a second pooling layer, a first fully connected layer, a second fully connected layer and an output layer;
s6: training of convolutional neural network models
Initializing the filters of each layer with random numbers obeying a Gaussian distribution, with the initial value of each bias set to an arbitrary constant; training the convolutional neural network using the stochastic gradient descent method; dividing the binary image training sample set established in step S2 and the grayscale image training sample set established in step S3 into different subsets and inputting the subsets in batches into the convolutional neural network models of steps S5 and S4; after all batches of images have been forward-propagated through the convolutional neural network model once, calculating the gradient and performing back-propagation to update the filter weights and biases; and searching for the optimal solution of the filters and biases through repeated iteration;
s7: after the training is finished, inputting the predicted finger vein image into the convolutional neural network model in the steps S4 and S5, and selecting the output of the second full-connection layer in the convolutional neural network model in the steps S4 and S5 as the depth feature vector of inputting a gray image and a binary image; connecting the two depth characteristic vectors to form a joint expression vector of the input prediction finger vein image;
s8: the joint expression vector formed in step S7 is input to a support vector machine for training, and the quality of the predicted finger vein image is calculated using a probabilistic support vector machine.
The finger vein image quality evaluation method based on the convolutional neural network can greatly improve the precision of finger vein image quality evaluation and improve the identification performance of an authentication system.
Example 2
The method for evaluating the quality of the finger vein image based on the convolutional neural network is different from the embodiment 1, and the specific method for marking the quality of the finger vein gray level image in the database comprises the following steps:
s11: selection of enrollment template images
Selecting any one image of a finger, extracting and matching the vein features of pairs of finger vein images using an established recognition algorithm, and calculating the average distance between this image and the remaining images; selecting the image corresponding to the minimum average distance as the registration template image of the finger, and taking the other images as test images;
s12: image quality annotation
Calculating the distance between each test image of a finger and the registration template image of the same finger to obtain intra-class matching scores; calculating the distances between the registration template images to obtain inter-class matching scores; calculating the false acceptance rate FAR and the false rejection rate FRR from the intra-class and inter-class matching scores; and presetting a threshold as the security level of the system such that, when the false acceptance rate FAR equals the threshold (e.g. FAR = 0.1%), each image is labeled as low quality or high quality according to whether it is falsely rejected or correctly accepted by the system.
The finger vein images in the sample set used by the invention are derived from the finger vein database of The Hong Kong Polytechnic University (http://www4.comp.polyu.edu.hk/~csajaykr/fvdatabase.htm). The database contains 3132 finger vein images from 156 people. Data acquisition was divided into two stages; each finger provided 6 image samples and each person provided two fingers, so each person provided 24 images across both stages. The first 105 people provided 2520 images across the two acquisition stages, while the remaining 51 individuals participated only in the second stage, contributing a total of 612 images. Since images captured in both stages are more practical, the invention is described using only the 2520 images captured from the first 105 persons (105 persons × 2 fingers × 6 images × 2 stages) as an example.
The specific method for labeling the quality of the finger vein image is illustrated as follows:
first step vein feature extraction and matching
1.1 Vein feature extraction: the image is mainly enhanced using a bank of Gabor wavelets. In the Gabor wavelet, $p_n = [x, y]^T$ denotes the horizontal and vertical coordinates, $p_0 = [x_0, y_0]^T$ is the offset from the origin, $\omega_m$ is the center frequency, $C$ is a $2 \times 2$ positive definite covariance matrix, and $\cdot$ denotes the dot product operation. Through a coordinate rotation, Gabor filters $G_{\theta_n}$ oriented in different directions are obtained, where the rotation angle $\theta_n$ is discretized into K directions indexed by $q = 1, 2, \dots, K$ (K = 8). The finger vein features are then enhanced by convolving these filters with the image, where $\bar{G}_{\theta}$ represents the mean of $G_{\theta}(x, y)$, $*$ denotes convolution, and $f(x, y)$ is the finger vein image. The vein features are further enhanced using grayscale morphological operations, where $\oplus b$ and $\ominus b$ denote grayscale dilation and erosion of the image by the structuring element b, respectively. The resulting feature image Z is then encoded by thresholding to obtain a binary image.
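Since the exact Gabor parameterization, structuring element and encoding rule of the original are not reproduced in the text above, the following is only a generic sketch of the same pipeline (a bank of K = 8 oriented Gabor filters, grayscale morphology, thresholding); every filter parameter and the global-mean threshold are assumptions.

```python
import numpy as np
from scipy.signal import correlate2d
from scipy.ndimage import grey_dilation, grey_erosion

def gabor_kernel(size=15, sigma=4.0, lam=8.0, theta=0.0):
    """Real (even-symmetric) Gabor kernel at orientation theta."""
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    xr = x * np.cos(theta) + y * np.sin(theta)
    yr = -x * np.sin(theta) + y * np.cos(theta)
    return np.exp(-(xr**2 + yr**2) / (2 * sigma**2)) * np.cos(2 * np.pi * xr / lam)

def extract_vein_binary(f, K=8):
    """Enhance vein features with K oriented Gabor filters, smooth with a
    grayscale morphological closing, and threshold to a binary vein image."""
    responses = [correlate2d(f, gabor_kernel(theta=q * np.pi / K), mode='same')
                 for q in range(K)]
    Z = np.max(responses, axis=0)                        # strongest oriented response
    Z = grey_erosion(grey_dilation(Z, size=3), size=3)   # morphological smoothing
    return (Z > Z.mean()).astype(np.uint8)               # simple global threshold

binary = extract_vein_binary(np.random.rand(64, 96))
```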
the matching of vein features is mainly realized by the following method:
1.2 matching of vein features:
let R and T represent the m × n binary registered image and the test image, respectively; obtaining a template image by expanding RThe template obtained, for example, by expanding its length and width to 2w + m and 2h + n is represented as follows:
the matching score between R and T is calculated as follows:
wherein w and h represent the distance moved in the horizontal and vertical directions;
Φ is defined as follows:
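The matching function Φ itself is not reproduced in the text, so the sketch below illustrates the scheme with a generic translation-tolerant score (a Dice-style overlap maximized over all shifts of T within the expanded template, returned as a distance); it is an assumption, not the patent's formula.

```python
import numpy as np

def match_score(R, T, w=10, h=10):
    """Pad the m x n registered image R by w rows and h columns on each side,
    slide the test image T over every shift, and keep the best normalized
    vein overlap.  Returned as a distance in [0, 1] (smaller = more similar)."""
    m, n = R.shape
    padded = np.zeros((m + 2 * w, n + 2 * h), dtype=R.dtype)
    padded[w:w + m, h:h + n] = R
    best = 0.0
    for dy in range(2 * w + 1):
        for dx in range(2 * h + 1):
            window = padded[dy:dy + m, dx:dx + n]
            overlap = np.logical_and(window, T).sum()
            denom = max(window.sum() + T.sum(), 1)
            best = max(best, 2.0 * overlap / denom)      # Dice-style overlap
    return 1.0 - best

d = match_score(np.random.randint(0, 2, (64, 96)),
                np.random.randint(0, 2, (64, 96)))
```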
second step selection of registered template images
There are 210 fingers in the database, with 12 finger vein images for each finger. Vein-feature binary images are extracted from all images using the method above. Then, any one binary image is selected and its average distance to the remaining images of the same finger is calculated with the matching algorithm; this operation is repeated to obtain the average distance of every image. Finally, the image corresponding to the minimum average distance is selected as the template image of the finger, and the other images are taken as test images. Thus, there are 210 enrollment templates and 2310 test images in the data.
Third step of image quality annotation
The distance between each test image and the template image of the same finger is calculated with the matching algorithm, generating 2310 intra-class matching scores. Accordingly, 21945 inter-class matching scores are obtained by calculating the distances between the 210 templates. The false acceptance rate FAR and the false rejection rate FRR are calculated from the inter-class and intra-class matching scores. Under a relatively high security level, FAR is set to the preset value of 0.1%; images falsely rejected by the system are labeled as low-quality images with the label set to 0, and images correctly accepted by the system are labeled as high-quality images with the label set to 1.
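A sketch of how such labels can be derived from the two score distributions, assuming distances where smaller means a better match; the toy beta-distributed scores are illustrative only.

```python
import numpy as np

def label_by_far(genuine_dists, impostor_dists, far_target=0.001):
    """Find the decision threshold at which the false acceptance rate over the
    inter-class (impostor) distances equals far_target, then label each test
    image: 1 (high quality) if its intra-class distance is accepted at that
    threshold, 0 (low quality) if it is falsely rejected."""
    thr = np.quantile(impostor_dists, far_target)     # FAR = fraction of impostors <= thr
    labels = (genuine_dists <= thr).astype(int)
    frr = 1.0 - labels.mean()                         # false rejection rate at this FAR
    return labels, thr, frr

genuine = np.random.beta(2, 8, 2310)      # toy intra-class distances
impostor = np.random.beta(8, 2, 21945)    # toy inter-class distances
labels, thr, frr = label_by_far(genuine, impostor, far_target=0.001)
```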
Compared with other finger vein image quality evaluation methods, the finger vein image quality evaluation method based on the convolutional neural network can automatically label the finger vein image, so that heavy work and errors caused by manual labeling are reduced.
Example 3
A finger vein image quality evaluation method based on a convolutional neural network, which differs from embodiment 1 in the specific method for establishing, in step S2, the binary image training sample set with the quality labels obtained in step S1: after all test images are labeled according to the labeling method of step S1, 1155 images of 105 fingers are selected as training images, and the remaining images are used as test images. In the training set, there are 101 low-quality images and 1054 high-quality images; in the test set, there are 110 low-quality images and 1045 high-quality images. Because there are far fewer low-quality than high-quality samples in the training set, the classes are imbalanced. To overcome this problem, synthetic low-quality images are produced using the following method: to synthesize new samples from a low-quality image x, two low-quality images $x_1$ and $x_2$ are first selected from the training set. Temporary image samples are then generated using the equation $y_l = x_1 + \mathrm{rand}(0,1)(x_2 - x_1)$, $l = 1, 2, \dots, L$. Finally, the synthetic new samples are calculated by the equation $p_l = x_1 + \mathrm{rand}(0,1)(y_l - x_1)$, $l = 1, 2, \dots, L$. With this method, 953 synthetic images are produced, giving a total of 1054 low-quality images in the training set.
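A sketch of the interpolation scheme described above; the number of samples generated per image pair is illustrative.

```python
import numpy as np

def synthesize_low_quality(x1, x2, L=5):
    """From two low-quality images x1 and x2, build L temporary samples
    y_l = x1 + rand(0,1) * (x2 - x1) and, from each, a synthetic sample
    p_l = x1 + rand(0,1) * (y_l - x1)."""
    synthetic = []
    for _ in range(L):
        y = x1 + np.random.rand() * (x2 - x1)            # temporary sample
        synthetic.append(x1 + np.random.rand() * (y - x1))  # synthetic sample
    return synthetic

a, b = np.random.rand(64, 96), np.random.rand(64, 96)    # two low-quality images
new_samples = synthesize_low_quality(a, b, L=3)
```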
The specific method for establishing the gray scale image training sample set with the quality label in step S1 in step S3 is as follows: since each binary image corresponds to a gray image, a gray image training sample set based on the gray image can be obtained according to the previous binary image training sample set.
Example 4
A finger vein image quality evaluation method based on a convolutional neural network, which differs from embodiment 1 in the convolutional neural network model for extracting the depth features of the grayscale image in step S4, shown in fig. 2. In the first, second or third convolutional layer, each element of the feature image of the l-th layer is equal to the maximum value over the corresponding positions of the feature maps computed from the previous layer, and the feature image $y_n^l$ is calculated according to the following formula:

$$y_n^l = \max_{1 \le m \le M_{l-1}} \left( x_m^{l-1} * k_{m,n}^l \right) + b_n^l$$

where $x_m^{l-1}$ is the m-th input feature spectrum of the l-th layer, $k_{m,n}^l$ is the convolution kernel between the m-th input and the n-th output feature spectra, $*$ is the convolution operation, $M_{l-1}$ is the number of input feature spectra, and $b_n^l$ is the bias of the n-th output spectrum.
The first, second or third convolutional layer uses a rectified linear unit as the activation function, defined as follows:

$$x_n^l = \max\left(0,\; y_n^l\right)$$

where $x_n^l$ represents the n-th output spectrum of the l-th layer.
The first and second pooling layers divide the output feature spectra of the first and second convolutional layers into non-overlapping regions, and the average of the first p maximum values in each region is selected as the representative value of that region to sample the output of the first or second convolutional layer. Let $I_k$ represent the output spectrum after convolution with the k-th convolution kernel, and let $\{t_0, t_1, \dots, t_{T-1}\}$ represent the set obtained by sorting all elements of an $s \times s$ local region of $I_k$ from large to small, where $T = s \times s$ is the number of elements. The output feature $z_k$ obtained after sampling $I_k$ is calculated according to the following formula:

$$z_k = \frac{1}{p} \sum_{i=0}^{p-1} t_i$$
half of the neurons in the first and second fully-connected layers are randomly released using a discarding method.
In the output layer, the probabilities of the N = 2 classes are predicted using the softmax function:

$$p_n = \frac{\exp(o_n)}{\sum_{j=1}^{N} \exp(o_j)}, \qquad n = 1, \dots, N$$

where $o_n$ is a linear combination of the outputs $x_m$ of the last hidden layer.
The convolutional neural network model for extracting the binary image depth feature in step S5 is shown in fig. 3.
Compared with the traditional convolutional neural network model, the convolutional neural network model adopted by the invention differs in two respects: first, in every convolutional layer, the maximum value over corresponding positions of the convolved feature spectra is taken as the feature spectrum of the layer and fed into the activation function; second, in every pooling layer, the input feature image is sampled by taking the average of the first p maximum values in each local region of the feature image.
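For orientation only, the layer order of the two branches can be sketched in PyTorch as below (three convolutional layers for the grayscale branch, two for the binary branch, each with two pooling layers, two fully connected layers and a two-class output). Filter counts, kernel sizes, the 64 x 64 input resolution, and the use of standard summation convolutions and max pooling in place of the patent's max-across-maps convolution and top-p average pooling are all simplifying assumptions of the sketch.

```python
import torch
import torch.nn as nn

def make_branch(num_convs, in_ch=1, width=32, fc_dim=256, img=64):
    """Conv/pool stack followed by two fully connected layers (with dropout)
    and a 2-class output; num_convs = 3 for the grayscale branch, 2 for the
    binary branch.  All sizes are illustrative."""
    layers, ch = [], in_ch
    for i in range(num_convs):
        layers += [nn.Conv2d(ch, width, kernel_size=3, padding=1), nn.ReLU()]
        ch = width
        if i < 2:                               # two pooling layers, as in the text
            layers += [nn.MaxPool2d(2)]
    feat = width * (img // 4) * (img // 4)      # spatial size after two 2x poolings
    return nn.Sequential(*layers, nn.Flatten(),
                         nn.Linear(feat, fc_dim), nn.ReLU(), nn.Dropout(0.5),
                         nn.Linear(fc_dim, fc_dim), nn.ReLU(), nn.Dropout(0.5),
                         nn.Linear(fc_dim, 2))  # logits for the 2-class softmax

gray_net = make_branch(num_convs=3)     # grayscale branch
binary_net = make_branch(num_convs=2)   # binary branch
out = gray_net(torch.randn(4, 1, 64, 64))      # -> torch.Size([4, 2])
```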
Example 5
A finger vein image quality evaluation method based on a convolutional neural network is different from the embodiment 4 in that the training method of the convolutional neural network model specifically comprises the following steps:
① initializing each layer of filter with random numbers obeying Gaussian distribution, setting the initial value of offset as arbitrary constant, and training the convolution neural network model shown in step S4 and step S5 by using a random gradient descent method.
② For an image F, its quality label is q ∈ {0,1}, where 0 represents a low-quality image and 1 a high-quality image, and the training set is represented as $\{(F_1, q_1), (F_2, q_2), \dots, (F_N, q_N)\}$. The training data set is divided into different subsets, which are input in batches into the convolutional neural network models of steps S4 and S5. After all batches of images have been forward-propagated through the network once, the gradient is calculated and back-propagation is performed to update the filter weights $w_k$ and biases $b_k$. For example, the update rule for the filter weight $w_g$ at step g is:

$$w_{g+1} = \Delta_{g+1} + w_g$$

where $\Delta$ represents the momentum term, $\lambda$ is the learning rate, and $\nabla w_g$ is the gradient with respect to $w_g$.
③ The optimal solution for the filters and biases is found through repeated iteration; when the accuracy meets the requirement, the iteration is stopped, thereby completing the training of the deep neural network model.
The specific method of step S7 is: after training is completed, the output layer of the convolutional neural network is removed. When a grayscale image is input into the convolutional neural network model of step S4, its second fully connected layer outputs a depth feature vector, which is the depth representation of the input grayscale image; when a binary image is input into the convolutional neural network model of step S5, its second fully connected layer outputs the depth feature vector of the binary image. Suppose $v_1$ and $v_2$ are the depth feature vectors of the grayscale image and the corresponding binary image, respectively. A joint expression vector $v = [v_1\ v_2]$ of the input image is formed by concatenating the two depth feature vectors, and this vector is then input into the support vector machine for training.
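A sketch of the fusion step, with random vectors standing in for the second-fully-connected-layer outputs of the two branch networks; the 256-dimensional features, the toy labels and scikit-learn's default RBF kernel are assumptions.

```python
import numpy as np
from sklearn.svm import SVC

def joint_vector(v1, v2):
    """Concatenate the grayscale and binary depth feature vectors: v = [v1 v2]."""
    return np.concatenate([v1, v2])

# toy training data: 40 images, a 256-d stand-in feature from each branch
V = np.stack([joint_vector(np.random.rand(256), np.random.rand(256))
              for _ in range(40)])
q = np.array([0] * 20 + [1] * 20)                  # quality labels
clf = SVC(probability=True).fit(V, q)              # probabilistic SVM (Platt scaling)
quality = clf.predict_proba(V[:1])[0, 1]           # predicted probability of high quality
```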
the specific method of step S8 is: quality assessment model based on support vector machine:
in the image quality evaluation set based on the support vector machine, the probability support vector machine is used to predict the image quality. The definition of the probability support vector machine used is as follows: training a probability support vector machine by combining a depth feature vector v and a quality label q E {0,1}, wherein the output probability value is p
ξ (v) represents the output of the traditional support vector machine, and ω and γ represent two parameters obtained by the training of the probability support vector machine.
The method utilizes the probability support vector to fuse the depth features and predict the quality of the finger vein image, thereby effectively improving the precision of image quality evaluation.
Example 6
A finger vein image quality evaluation system based on a convolutional neural network, as shown in fig. 4, the evaluation system includes an evaluation unit 1 and a database 2 in communication with the evaluation unit 1, a finger vein grayscale image is stored in the database 2, and the evaluation unit 1 includes:
the quality labeling module 11 is configured to label the quality of the finger vein grayscale image, obtain a grayscale image with a quality label, obtain vein features of the grayscale image with the quality label, and encode the vein features to obtain a binary image;
a training sample set establishing module 12, configured to respectively establish a binary image training sample set and a gray image training sample set for the binary image and the gray image with the quality label obtained by the quality labeling module 11;
the model establishing module 13 is used for respectively establishing a convolutional neural network model for extracting depth characteristics of the binary image and the gray level image;
the convolution training module 14 is configured to divide the binary image training sample set and the grayscale image training sample set established by the training sample set establishing module 12 into different subsets, and input the subsets into convolution neural network models corresponding to the extracted binary image and grayscale image depth features for training;
the connection processing module 15 is used for acquiring depth feature vectors of the gray level image and the binary image in the trained convolutional neural network model; and is used for connecting the two depth feature vectors to form a joint expression vector;
and the calculating module 16 is used for inputting the joint expression vector into a support vector machine for training, and calculating the quality of the predicted finger vein image.
The finger vein image quality evaluation system based on the convolutional neural network can greatly improve the accuracy of finger vein image quality evaluation and improve the recognition performance of an authentication system. Compared with other finger vein image quality evaluation methods, it is the first to fuse the depth features of the finger vein binary image and grayscale image to realize quality evaluation of the finger vein image.
Example 7
A finger vein image quality evaluation system based on a convolutional neural network, which is different from the evaluation system in embodiment 6, as shown in fig. 5, the quality labeling module 11 includes:
the image selection and feature extraction sub-module 110 is used for selecting a gray image from a plurality of images of the same finger and extracting features to obtain a binary image;
the registration template image selection sub-module 111 is used for calculating the average distance between the image selected by the image selection and feature extraction sub-module 110 and the rest images of the same finger, selecting the image corresponding to the minimum average distance as a registration template image, and taking other images as test images;
the calculation submodule 112 is used for calculating the distance between each test image of the same finger and the registered template image of the same finger to obtain an intra-class matching score, and calculating the distance between the registered template images to obtain an inter-class matching score; calculating an error acceptance rate FAR and an error rejection rate FRR according to the intra-class matching score and the inter-class matching score;
a judging submodule 113, configured to judge whether the false acceptance rate FAR is equal to a preset threshold, and send a classification instruction to the classification submodule 114 if the false acceptance rate FAR is equal to the threshold;
a classification submodule 114, configured to classify the image labeled with the false rejection and the image labeled with the correct acceptance, and send a labeling instruction to the labeling submodule 115;
and the labeling submodule 115 is used for performing quality labeling on the image labeled with the false rejection or the image labeled with the correct acceptance, and setting a corresponding quality label.
Compared with other finger vein image quality evaluation systems, the finger vein image quality evaluation system based on the convolutional neural network can automatically label the finger vein image, so that heavy work and errors caused by manual labeling are reduced. The evaluation system provided by the invention can more accurately label the finger vein image, and improve the labeling quality.
Finally, it should be noted that the above embodiments are only used to illustrate the technical solutions of the present invention and are not limiting; modifications or equivalent substitutions may be made to the technical solutions of the present invention without departing from their spirit and scope, and all such modifications and equivalents shall be covered by the claims of the present invention.

Claims (5)

1. A finger vein image quality evaluation method based on a convolutional neural network is characterized by comprising the following steps:
s1: marking the quality of the finger vein gray level image in the database to obtain a gray level image with a quality label, obtaining vein features of the gray level image with the quality label, and coding to obtain a binary image;
s2: establishing a binary image training sample set with a quality label obtained in the step S1;
s3: establishing a gray level image training sample set with a quality label in the step S1;
s4: extracting a convolutional neural network model of the depth features of the grayscale image; the convolutional neural network model comprises an input layer, a first convolutional layer, a first pooling layer, a second convolutional layer, a second pooling layer, a third convolutional layer, a first fully connected layer, a second fully connected layer and an output layer;
s5: extracting a convolutional neural network model of the binary image depth features; the convolutional neural network model comprises an input layer, a first convolutional layer, a first pooling layer, a second convolutional layer, a second pooling layer, a first fully connected layer, a second fully connected layer and an output layer;
the characteristic image of the first layer in the first convolution layer, the second convolution layer or the third convolution layerCalculated according to the following formula:
wherein,is the input spectrum of the l-th layer,is the convolution kernel between the mth input and the n output feature spectra, is the convolution operation, Ml-1Is the number of input feature spectra,is the shift of the nth output spectrum;
the first pooling layer and the second pooling layer divide output characteristic spectrums of the first convolution layer and the second convolution layer into non-overlapping regions, and the average value of the first p maximum values in each region is selected as a representative value of the region to sample the output of the first convolution layer or the second convolution layer; let IkRepresenting the output spectrum after convolution with the kth convolution kernel,is shown as pair IkAll elements in the middle sxs local regionThe collection is obtained after sorting from large to small, wherein T is more than or equal to 0 and less than T-1, m is more than or equal to 0 and less than s, and T is s multiplied by s and represents the number of elements; to IkOutput characteristics obtained after samplingCalculated according to the following formula:
wherein p is less than or equal to T;
s6: training of convolutional neural network models
Initializing each layer of filter by using a random number obeying Gaussian distribution, wherein the initial value of the offset is an arbitrary constant; training the convolutional neural network by adopting a random gradient descent method; dividing the binary image training sample set established in the step S2 and the gray level image training sample set established in the step S3 into different subsets, inputting the subsets into the convolutional neural network model applied in the step S5 and the step S4, calculating a gradient and performing backward propagation to update the filter weight and the offset after the images of all the batches are subjected to forward propagation in the convolutional neural network model once, and searching the optimal solution of the filter and the offset through repeated iteration;
s7: after the training is finished, inputting the predicted finger vein image into the convolutional neural network model in the steps S4 and S5, and selecting the output of the second full-connection layer in the convolutional neural network model in the steps S4 and S5 as the depth feature vector of inputting a gray image and a binary image; connecting the two depth characteristic vectors to form a joint expression vector of the input prediction finger vein image;
s8: the joint expression vector formed in step S7 is input to a support vector machine for training, and the quality of the predicted finger vein image is calculated using a probabilistic support vector machine.
2. The evaluation method according to claim 1, wherein the specific method for labeling the quality of the finger vein grayscale image in the database is as follows:
s11: selection of enrollment template images
Selecting any one image of one finger, extracting and matching two finger vein images by using a recognition algorithm method, and calculating the average distance between the image and the rest images; selecting an image corresponding to the minimum average distance as a registration template image of the finger, and taking other images as test images;
s12: image quality annotation
Calculating the distance between each test image of a finger and the registration template image of the same finger to obtain intra-class matching scores; calculating the distances between the registration template images to obtain inter-class matching scores; calculating the false acceptance rate FAR and the false rejection rate FRR from the intra-class and inter-class matching scores; and presetting a threshold, and, when the false acceptance rate FAR equals the preset threshold, distinguishing low-quality grayscale images from high-quality grayscale images according to whether the system falsely rejects or correctly accepts the image.
3. The evaluation method of claim 1, wherein the first, second or third convolutional layer uses a rectified linear unit as the activation function, defined as follows:

$$x_n^l = \max\left(0,\; y_n^l\right)$$

where $x_n^l$ represents the n-th output spectrum of the l-th layer.
4. The method of claim 1, wherein the update rule for the filter weight $w_g$ at step g is:

$$w_{g+1} = \Delta_{g+1} + w_g$$

where $\Delta$ represents the momentum term, $\lambda$ is the learning rate, and $\nabla w_g$ is the gradient with respect to $w_g$.
5. The evaluation method according to claim 1, wherein the probabilistic support vector machine used is trained by combining the depth feature vector v and its quality label q ∈ {0,1}, and its output probability value p is:

$$p = \frac{1}{1 + \exp\left(\omega\,\xi(v) + \gamma\right)}$$

where $\xi(v)$ represents the output of a conventional support vector machine, and $\omega$ and $\gamma$ represent the two parameters obtained by training the probabilistic support vector machine.
CN201610979315.4A 2016-11-07 2016-11-07 Finger vein image quality appraisal procedure based on convolutional neural networks Active CN106326886B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201610979315.4A CN106326886B (en) 2016-11-07 2016-11-07 Finger vein image quality appraisal procedure based on convolutional neural networks

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201610979315.4A CN106326886B (en) 2016-11-07 2016-11-07 Finger vein image quality appraisal procedure based on convolutional neural networks

Publications (2)

Publication Number Publication Date
CN106326886A CN106326886A (en) 2017-01-11
CN106326886B true CN106326886B (en) 2019-05-10

Family

ID=57816696

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201610979315.4A Active CN106326886B (en) 2016-11-07 2016-11-07 Finger vein image quality appraisal procedure based on convolutional neural networks

Country Status (1)

Country Link
CN (1) CN106326886B (en)

Families Citing this family (44)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106295475A (en) * 2015-05-29 2017-01-04 北京东方金指科技有限公司 A kind of based on sum of ranks than the registered fingerprint replacement method of method
CN106910192B (en) * 2017-03-06 2020-09-22 长沙全度影像科技有限公司 Image fusion effect evaluation method based on convolutional neural network
CN106920224B (en) * 2017-03-06 2019-11-05 长沙全度影像科技有限公司 A method of assessment stitching image clarity
CN106920215B (en) * 2017-03-06 2020-03-27 长沙全度影像科技有限公司 Method for detecting registration effect of panoramic image
WO2018187953A1 (en) * 2017-04-12 2018-10-18 邹霞 Facial recognition method based on neural network
CN108305240B (en) * 2017-05-22 2020-04-28 腾讯科技(深圳)有限公司 Image quality detection method and device
CN107273927B (en) * 2017-06-13 2020-09-22 西北工业大学 Unsupervised field adaptive classification method based on inter-class matching
CN107657209B (en) * 2017-07-07 2020-07-28 杭州电子科技大学 Template image registration mechanism based on finger vein image quality
CN107644415B (en) * 2017-09-08 2019-02-22 众安信息技术服务有限公司 A kind of text image method for evaluating quality and equipment
CN107705299B (en) * 2017-09-25 2021-05-14 安徽睿极智能科技有限公司 Image quality classification method based on multi-attribute features
CN107967442A (en) * 2017-09-30 2018-04-27 广州智慧城市发展研究院 A kind of finger vein identification method and system based on unsupervised learning and deep layer network
CN107832684B (en) * 2017-10-26 2021-08-03 通华科技(大连)有限公司 Intelligent vein authentication method and system with autonomous learning capability
CN107895144A (en) * 2017-10-27 2018-04-10 重庆工商大学 A kind of finger vein image anti-counterfeiting discrimination method and device
CN108010015A (en) * 2017-11-07 2018-05-08 深圳市金城保密技术有限公司 One kind refers to vein video quality evaluation method and its system
CN108053407B (en) * 2017-12-22 2021-04-13 联想(北京)有限公司 Data processing method and data processing system
JP2019121054A (en) * 2017-12-28 2019-07-22 株式会社東海理化電機製作所 Fingerprint authentication device
CN108288027B (en) * 2017-12-28 2020-10-27 新智数字科技有限公司 Image quality detection method, device and equipment
CN108764292B (en) * 2018-04-27 2022-03-18 北京大学 Deep learning image target mapping and positioning method based on weak supervision information
CN109360183B (en) * 2018-08-20 2021-05-11 中国电子进出口有限公司 Face image quality evaluation method and system based on convolutional neural network
CN109360633B (en) * 2018-09-04 2022-08-30 北京市商汤科技开发有限公司 Medical image processing method and device, processing equipment and storage medium
CN109409226B (en) * 2018-09-25 2022-04-08 五邑大学 Finger vein image quality evaluation method and device based on cascade optimization CNN
CN109272499B (en) * 2018-09-25 2020-10-09 西安电子科技大学 Non-reference image quality evaluation method based on convolution self-coding network
CN109409227A (en) * 2018-09-25 2019-03-01 五邑大学 A kind of finger vena plot quality appraisal procedure and its device based on multichannel CNN
CN109523514A (en) * 2018-10-18 2019-03-26 西安电子科技大学 To the batch imaging quality assessment method of Inverse Synthetic Aperture Radar ISAR
CN109409314A (en) * 2018-11-07 2019-03-01 济南浪潮高新科技投资发展有限公司 A kind of finger vein identification method and system based on enhancing network
CN109270384B (en) * 2018-11-13 2019-06-11 中南民族大学 A kind of method and system of the electric arc of electrical equipment for identification
CN109214376A (en) * 2018-11-22 2019-01-15 济南浪潮高新科技投资发展有限公司 A kind of fingerprint identification method and device based on depth stratification
CN109829887B (en) * 2018-12-26 2022-07-01 南瑞集团有限公司 Image quality evaluation method based on deep neural network
CN109934114B (en) * 2019-02-15 2023-05-12 重庆工商大学 Finger vein template generation and updating algorithm and system
CN109978840A (en) * 2019-03-11 2019-07-05 太原理工大学 A kind of method of discrimination of the quality containing texture image based on convolutional neural networks
CN110058943B (en) * 2019-04-12 2021-09-21 三星(中国)半导体有限公司 Memory optimization method and device for electronic device
CN110400335B (en) * 2019-07-25 2022-05-24 广西科技大学 Texture image quality estimation method based on deep learning
CN110532908B (en) * 2019-08-16 2023-01-17 中国民航大学 Finger vein image scattering removal method based on convolutional neural network
CN110674824A (en) * 2019-09-26 2020-01-10 五邑大学 Finger vein segmentation method and device based on R2U-Net and storage medium
CN110751105B (en) * 2019-10-22 2022-04-08 珠海格力电器股份有限公司 Finger image acquisition method and device and storage medium
CN111223082B (en) * 2020-01-03 2023-06-20 圣点世纪科技股份有限公司 Quantitative evaluation method for finger vein image quality
CN111210426B (en) * 2020-01-15 2021-03-02 浙江大学 Image quality scoring method based on non-limiting standard template
CN111709906A (en) * 2020-04-13 2020-09-25 北京深睿博联科技有限责任公司 Medical image quality evaluation method and device
CN111539306B (en) * 2020-04-21 2021-07-06 中南大学 Remote sensing image building identification method based on activation expression replaceability
CN111612083B (en) * 2020-05-26 2023-05-12 济南博观智能科技有限公司 Finger vein recognition method, device and equipment
CN112288010B (en) * 2020-10-30 2022-05-13 黑龙江大学 Finger vein image quality evaluation method based on network learning
CN112200156B (en) * 2020-11-30 2021-04-30 四川圣点世纪科技有限公司 Vein recognition model training method and device based on clustering assistance
CN114539586B (en) * 2022-04-27 2022-07-19 河南银金达新材料股份有限公司 Surface treatment production and detection process of polymer film
CN118116036A (en) * 2024-01-25 2024-05-31 重庆工商大学 Finger vein image feature extraction and coding method based on deep reinforcement learning

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101866486A (en) * 2010-06-11 2010-10-20 哈尔滨工程大学 Finger vein image quality judging method
CN103544705A (en) * 2013-10-25 2014-01-29 华南理工大学 Image quality testing method based on deep convolutional neural network

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2016054779A1 (en) * 2014-10-09 2016-04-14 Microsoft Technology Licensing, Llc Spatial pyramid pooling networks for image processing

Patent Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101866486A (en) * 2010-06-11 2010-10-20 哈尔滨工程大学 Finger vein image quality judging method
CN103544705A (en) * 2013-10-25 2014-01-29 华南理工大学 Image quality testing method based on deep convolutional neural network

Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
Finger-Vein Quality Assessment by Representation Learning from Binary Images; Huafeng Qin et al.; International Conference on Neural Information Processing; 20151112; pages 31-32 section 3.1, page 38 section 3.3, page 41 section 3.3.4
ImageNet Classification with Deep Convolutional Neural Networks; Krizhevsky A et al.; Advances in Neural Information Processing Systems 2012; 20121208; page 6 section 5
Research on Finger Vein Image Quality Assessment and Feature Recognition Algorithms; Qin Huafeng; China Doctoral Dissertations Full-text Database, Information Science and Technology; 20130515; pages 423-427 sections 2-3, Table 2-3, Figures 1-2

Also Published As

Publication number Publication date
CN106326886A (en) 2017-01-11

Similar Documents

Publication Publication Date Title
CN106326886B (en) Finger vein image quality appraisal procedure based on convolutional neural networks
CN106529468B (en) A kind of finger vein identification method and system based on convolutional neural networks
CN108615010B (en) Facial expression recognition method based on parallel convolution neural network feature map fusion
CN107145842B (en) Face recognition method combining LBP characteristic graph and convolutional neural network
CN107194341B (en) Face recognition method and system based on fusion of Maxout multi-convolution neural network
CN107392082B (en) Small-area fingerprint comparison method based on deep learning
CN105138993A (en) Method and device for building face recognition model
CN106228142A (en) Face verification method based on convolutional neural networks and Bayesian decision
CN109002755B (en) Age estimation model construction method and estimation method based on face image
CN108564040B (en) Fingerprint activity detection method based on deep convolution characteristics
CN109145704B (en) Face portrait recognition method based on face attributes
CN110414587A (en) Depth convolutional neural networks training method and system based on progressive learning
CN109255339A (en) Classification method based on adaptive depth forest body gait energy diagram
Bureva et al. Generalized net model of biometric identification process
Kassem et al. An enhanced ATM security system using multimodal biometric strategy
Diarra et al. Study of deep learning methods for fingerprint recognition
Jha et al. Ubsegnet: Unified biometric region of interest segmentation network
Liu et al. A novel high-resolution fingerprint representation method
Kaur et al. Finger print recognition using genetic algorithm and neural network
CN112633400B (en) Shellfish classification and identification method and device based on computer vision
Cenys et al. Genetic algorithm based palm recognition method for biometric authentication systems
CN113705443A (en) Palm print image identification method comprehensively utilizing knowledge graph and depth residual error network
Assim et al. CNN and Genetic Algorithm for Finger Vein Recognition
Ibrahem et al. Age invariant face recognition model based on convolution neural network (CNN)
Alrikabi et al. Deep Learning-Based Face Detection and Recognition System

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant
GR01 Patent grant
TR01 Transfer of patent right

Effective date of registration: 20230328

Address after: 401329 2F, Building 18, Section I, Science Valley, Hangu Town, Jiulongpo District, Chongqing

Patentee after: Chongqing Financial Technology Research Institute

Patentee after: Qin Huafeng

Address before: 400067 No. 19, Xuefu Avenue, Nan'an District, Chongqing

Patentee before: CHONGQING TECHNOLOGY AND BUSINESS University

TR01 Transfer of patent right
TR01 Transfer of patent right

Effective date of registration: 20240606

Address after: Building 18, 2nd Floor, Section 1, Science Valley Phase 1, Hangu Town, Jiulongpo District, Chongqing, 400000

Patentee after: Chongqing Weimai Zhilian Technology Co.,Ltd.

Country or region after: China

Address before: 401329 2F, Building 18, Section I, Science Valley, Hangu Town, Jiulongpo District, Chongqing

Patentee before: Chongqing Financial Technology Research Institute

Country or region before: China

Patentee before: Qin Huafeng

TR01 Transfer of patent right