CN114332960A - Method for extracting and matching feature points of field fingerprint - Google Patents

Method for extracting and matching feature points of field fingerprint

Info

Publication number
CN114332960A
Authority
CN
China
Prior art keywords
fingerprint
fingerprint image
detected
filter bank
image
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202111683766.0A
Other languages
Chinese (zh)
Other versions
CN114332960B (en)
Inventor
刘波
郝婧煜
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing University of Technology
Original Assignee
Beijing University of Technology
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing University of Technology
Priority to CN202111683766.0A
Publication of CN114332960A
Application granted
Publication of CN114332960B
Legal status: Active (current)
Anticipated expiration

Classifications

    • Y: GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02: TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02T: CLIMATE CHANGE MITIGATION TECHNOLOGIES RELATED TO TRANSPORTATION
    • Y02T 10/00: Road transport of goods or passengers
    • Y02T 10/10: Internal combustion engine [ICE] based vehicles
    • Y02T 10/40: Engine management systems

Landscapes

  • Collating Specific Patterns (AREA)

Abstract

The invention relates to a method for extracting and matching feature points of a field fingerprint, aimed at improving the fingerprint identification rate and reducing labor cost. The method preprocesses the fingerprint image with a deep neural network; at feature-extraction time it solves, with the Split Bregman algorithm, the constraint problem that the filter bank construction must satisfy, training a sparse filter bank on a training fingerprint database supplied as the algorithm's input; it then applies the trained filter bank to the preprocessed fingerprint image to obtain the image's feature points, and matches the fingerprint image against a database of known fingerprint images. The method was verified experimentally on different fingerprint data sets, and the results show that it is effective on low-quality field fingerprint images.

Description

Method for extracting and matching feature points of field fingerprint
Technical Field
The invention belongs to the field of computer vision and relates to identity recognition using physical and behavioral characteristics of the human body. Identification by fingerprint is one type of biometric identification.
Background
A fingerprint is composed of ridges and valleys: the raised lines are called ridges and the recessed lines between them are called valleys. One difficulty of fingerprint identification is the extraction of fingerprint features, which can be studied in terms of global features and local features. Global features include the pattern type (loop, arch, whorl), the pattern region (the region containing the pattern features), the core point (at the progressive center of the fingerprint pattern), the delta point (at the first bifurcation or breakpoint from the core point, at the convergence of two patterns, at an isolated point or turning point, or pointing toward these singular points), and the ridge count (ridge density). Local features refer to the features of fingerprint minutiae. Ridges are not continuous, smooth, and straight; they frequently branch, fold, or break. These bifurcations, inflection points, and breakpoints are referred to as "minutiae".
Fingerprint identification takes two forms: 1:1 matching and 1:N matching. In 1:1 matching, a person's sampled fingerprint is compared once against a single enrolled fingerprint to decide whether the two match, as in smartphone fingerprint unlocking. 1:N matching compares a person's captured fingerprint against all fingerprints present in a database, as is common when searching a fingerprint library for a criminal suspect.
A field fingerprint (latent fingerprint) is a fingerprint left at a scene and collected by police or other personnel. After collection, fingerprint experts must first mark the fingerprint features and then compare them against a fingerprint database to find the most similar fingerprints. The need for an expert to mark minutiae by hand increases the labor and time cost of field fingerprint identification. With the development of machine learning, many methods have emerged that extract fingerprint features directly. However, when the quality of the field fingerprint image is poor, extracting features from minutiae alone is unlikely to give good results. The invention therefore provides a method that extracts features automatically with machine learning, reducing labor cost, and improves feature-extraction capability for fingerprint identification by combining local and global fingerprint features. Compared with traditional minutiae extraction, both the speed and the accuracy of feature-point extraction are better, so the method has theoretical and practical significance for extracting and matching feature points of field fingerprints.
Disclosure of Invention
The invention addresses the following problem: because field fingerprint images are often of poor quality, traditional machine-learning minutiae extraction is error-prone, so an expert is required to extract minutiae manually. The invention provides a feature-extraction method that trains a filter bank and extracts channel peaks. The fingerprint images are cut into blocks and used to train a filter bank that learns fingerprint characteristics in different directions. After the field fingerprint image is preprocessed, it is passed through the trained filter bank to obtain a multidimensional tensor, which captures the field fingerprint's characteristics in those different directions. Peaks are then sought within the channels of this tensor to obtain the fingerprint's feature points. Finally, a similarity score is obtained through a fingerprint matching method.
The invention is divided into three parts:
(1) performing image preprocessing on the fingerprint image;
(2) constructing a filter bank to extract fingerprint image features;
(3) matching the fingerprint images.
The method provided by the invention has the following specific technical scheme:
1. Preprocess the fingerprint image to be detected with a convolutional neural network: determine the fingerprint orientation, segment the fingerprint region, and enhance the normalized fingerprint image to be detected.
2. Construct a sparse filter bank to extract fingerprint features. The filter bank has m decomposition filters a_1, a_2, ..., a_m. It is built by randomly cutting 1000 enhanced fingerprint images into 10000 blocks of 20 × 20 pixels, formulating the constraint problem that the filter bank construction must satisfy, and solving it with the Split Bregman algorithm using the 10000 fingerprint blocks as input, which finally yields the filter bank A. The enhanced fingerprint image to be detected from step 1 is convolved with the filter bank A to obtain multi-dimensional coefficients; each dimension of these coefficients is regarded as a channel. Within each channel, a peak is found in every block of size 3 × 3, this operation is performed for every channel, and the peak positions in different channels are compared. Positions that are the same are marked as candidate points, and the peaks retained by this screening are taken as the feature points of the fingerprint.
3. Match fingerprint images: compute the similarity coefficient between the fingerprint image to be detected and the known fingerprint images in the fingerprint database, and determine, according to the similarity coefficients, the fingerprint image in the known database that matches the fingerprint image to be detected.
the invention provides a method for extracting the characteristics of fingerprints without the help of a fingerprint expert under the condition of poor quality of fingerprint images. Different from the traditional method for extracting the minutiae of the fingerprint, the extraction of the characteristic points of the invention is to combine the detail characteristics (such as the ending or bifurcation of a ridge line and a valley line) of one pixel in the image with the overall characteristics (such as the characteristics between a core finger type and a triangular finger type) of the whole image to obtain the characteristic points which are used as the characteristic representation of the fingerprint image, and finally carry out the similarity calculation between the fingerprint images. The method avoids the problem that the result is wrong due to the fact that the position of the detail point mark is wrong in the traditional method. The fingerprint identification method of the invention improves the accuracy, the speed of characteristic extraction and the degree of automation compared with the traditional method.
Drawings
FIG. 1 is a flow chart of the method of the present invention
FIG. 2 is a flow chart of fingerprint image preprocessing
FIG. 3 is a diagram of a convolutional neural network structure for fingerprint direction and fingerprint segmentation
FIG. 4 is a flow chart of fingerprint matching
Detailed Description
FIG. 1 is the overall flow chart of the invention, which has three main parts. First, the field fingerprint image is preprocessed by a convolutional neural network to obtain the ridge orientation of the fingerprint image, the fingerprint segmentation map, and the enhanced fingerprint image. Second, a filter bank is constructed and the features of the fingerprint image are extracted; the extracted features lie between global fingerprint features and minutiae-level features. Finally, fingerprint similarity is computed to perform 1:N matching. The specific steps are as follows.
Step 1: determining fingerprint direction, segmenting ROI (region of interest) and enhancing images by using a convolutional neural network;
FIG. 2 is a flow chart of preprocessing a fingerprint image.
First, fingerprint orientation is determined using an existing technique, as follows:
FIG. 3 shows the structure of the convolutional neural network used for fingerprint orientation and fingerprint segmentation. The network consists, in order, of a first, second, and third convolution-pooling block, followed by three parallel branches; the N-dimensional tensors produced by the branches are summed to give the output. The first two convolution-pooling blocks each consist of three convolution blocks and one pooling layer; the third consists of one convolution block and one pooling layer. Each convolution block consists, in order, of a convolution layer, a BatchNorm layer, and a PReLU layer. Each branch is a dilated convolution block with a different sampling rate (1 × 1, 4 × 4, or 8 × 8), consisting, in order, of a dilated convolution layer, two convolution layers, and a sigmoid activation layer.
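For illustration, a minimal PyTorch sketch of such a network follows. The channel widths, kernel sizes, and the discretization N = 90 are assumptions for illustration only (the patent does not fix them), and because of the three pooling layers this sketch predicts one angle distribution per 8 × 8 region rather than per pixel.

```python
import torch.nn as nn

class ConvBlock(nn.Module):
    """Convolution block: convolution -> BatchNorm -> PReLU."""
    def __init__(self, c_in, c_out):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(c_in, c_out, 3, padding=1),
            nn.BatchNorm2d(c_out),
            nn.PReLU(),
        )
    def forward(self, x):
        return self.body(x)

class DilatedBranch(nn.Module):
    """Branch: one dilated convolution, two convolutions, then sigmoid."""
    def __init__(self, c_in, n_out, rate):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(c_in, c_in, 3, padding=rate, dilation=rate),
            nn.Conv2d(c_in, c_in, 3, padding=1),
            nn.Conv2d(c_in, n_out, 3, padding=1),
            nn.Sigmoid(),
        )
    def forward(self, x):
        return self.body(x)

class OrientationNet(nn.Module):
    """Three conv-pooling blocks, then three parallel dilated branches
    (rates 1, 4, 8) whose N-dimensional outputs are summed."""
    def __init__(self, n_angles=90):  # N = 90 is an assumed discretization
        super().__init__()
        self.stage1 = nn.Sequential(ConvBlock(1, 64), ConvBlock(64, 64),
                                    ConvBlock(64, 64), nn.MaxPool2d(2))
        self.stage2 = nn.Sequential(ConvBlock(64, 128), ConvBlock(128, 128),
                                    ConvBlock(128, 128), nn.MaxPool2d(2))
        self.stage3 = nn.Sequential(ConvBlock(128, 256), nn.MaxPool2d(2))
        self.branches = nn.ModuleList(
            [DilatedBranch(256, n_angles, r) for r in (1, 4, 8)])
    def forward(self, x):
        x = self.stage3(self.stage2(self.stage1(x)))
        return sum(b(x) for b in self.branches)  # accumulate the branches
```

The segmentation network described below shares this structure, differing only in producing a single foreground score per position instead of N angle probabilities.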
and (4) obtaining the probability of N discrete angles of each pixel point after the normalized fingerprint image to be detected passes through the network. Wherein the prediction angle at the (x, y) pixel point is represented as an N-dimensional vector pori。pori(i) A ridge direction value representing the position at the i-th pixel is
Figure BDA0003442874460000031
The probability of (c).
Figure BDA0003442874460000032
Indicating a rounding down.
The fingerprint orientation is then calculated as:

O(x, y) = (1/2) · atan2(s(x, y), c(x, y))

where

c(x, y) = Σ_i p_ori(i) · cos(2θ_i),  s(x, y) = Σ_i p_ori(i) · sin(2θ_i),  θ_i = i · 180 / N,

atan2(·,·) is the two-argument arctangent function, c(x, y) is the average ridge cosine direction value of pixel (x, y), and s(x, y) is the average ridge sine direction value of pixel (x, y).
the computation of the convolutional neural network loss function consists of two parts, the first part being a weighted cross-entropy loss, as described below
Figure BDA0003442874460000045
ROI refers to the region of interest, p, of the fingerprint image to be detectedl*(x, y) and p*(i | (x, y)) are the inverse gaussian angle of the (x, y) pixel point label and the predicted inverse gaussian angle, respectively.
The other part penalizes inconsistency of the fingerprint direction, converted into a constraint loss as follows:

L_ori,2 = Σ_{(x,y) ∈ ROI} (1 − Coh(x, y))

with the coherence

Coh(x, y) = ‖(c * J_3, s * J_3)(x, y)‖ / ((‖(c, s)‖ * J_3)(x, y))

where J_3 is a 3 × 3 all-ones matrix, * denotes convolution, (c, s)(x, y) is the average ridge direction vector, and c(x, y) and s(x, y) are as explained in the fingerprint orientation calculation above.
The above is the whole process of fingerprint orientation calculation; it yields a direction value for each pixel region of the fingerprint image to be detected.
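For illustration, a minimal NumPy sketch of this orientation calculation follows, assuming p_ori is the network's per-pixel probability map of shape (N, H, W); averaging is done in the doubled-angle domain so that 0° and 180° are treated as the same direction.

```python
import numpy as np

def ridge_orientation(p_ori):
    """Fuse N discrete angle probabilities into one orientation per pixel.

    p_ori: array of shape (N, H, W); p_ori[i, x, y] is the probability
    that the ridge direction at (x, y) is theta_i = i * 180 / N degrees.
    Returns the orientation map in degrees, in [0, 180).
    """
    n = p_ori.shape[0]
    theta = np.deg2rad(np.arange(n) * 180.0 / n)              # discrete angles
    # Average in the doubled-angle domain so 0 and 180 degrees coincide.
    c = np.tensordot(np.cos(2 * theta), p_ori, axes=(0, 0))   # average cosine
    s = np.tensordot(np.sin(2 * theta), p_ori, axes=(0, 0))   # average sine
    return np.rad2deg(0.5 * np.arctan2(s, c)) % 180.0
```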
Second, the foreground and background regions of the fingerprint image are segmented, also using an existing technique, as follows:
the convolution neural network structure of the fingerprint region segmentation is the same as the network structure of the fingerprint direction, the convolution neural network structure sequentially comprises three convolution pooling blocks and three parallel cavity convolution block branches with different sampling rates, and finally matrixes obtained by the branches are accumulated to obtain output. And inputting the normalized fingerprint image to be detected into the network to obtain the fraction of each pixel which becomes the foreground area. And marking the score greater than 0 as a foreground area, and marking the score less than or equal to 0 as a background area. Realizing the segmentation of a non-background area and a background area of a fingerprint image to be detected;
the loss function of fingerprint region partition is composed of two parts, one part is a weighted cross entropy loss function which is the same as the fingerprint direction weighted cross entropy loss.
The other part makes the segmentation smoother by suppressing edge responses, and is calculated as:

L_seg = (1/|I|) · Σ_{(x,y) ∈ I} |(M_ss * K_lap)(x, y)|

where I is the total image area, M_ss is the segmentation score map, and K_lap is the Laplacian edge detection kernel.
The above is the whole process of fingerprint segmentation calculation. It yields a matrix of 0s and 1s with the same size as the fingerprint image to be detected, where 0 indicates a non-fingerprint point and 1 indicates a fingerprint point.
Finally, the fingerprint image is enhanced with Gabor filters, an existing technique, using the ridge direction obtained in step 1, which finally yields the enhanced fingerprint image.
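A minimal sketch of orientation-steered Gabor enhancement follows. The block size, kernel size, fixed ridge wavelength, and the use of OpenCV's getGaborKernel are illustrative assumptions; the patent only states that Gabor filters are applied along the ridge direction from step 1.

```python
import cv2
import numpy as np

def gabor_enhance(img, orient_deg, block=16, ksize=25, wavelength=9.0):
    """Enhance a normalized fingerprint image block by block with a Gabor
    filter steered to the local ridge direction from step 1.

    img:        float32 image in [0, 1]
    orient_deg: per-pixel ridge orientation map in degrees
    """
    out = np.zeros_like(img, dtype=np.float32)
    h, w = img.shape
    for y in range(0, h, block):
        for x in range(0, w, block):
            # Dominant orientation of this block; np.median is a
            # simplification (a doubled-angle average would be more
            # robust near the 0/180 degree wrap).
            theta = np.deg2rad(np.median(orient_deg[y:y+block, x:x+block]))
            # Gabor kernel with stripes parallel to the ridges, i.e. the
            # kernel's theta (normal direction) is ridge angle + 90 degrees.
            kern = cv2.getGaborKernel((ksize, ksize), sigma=4.0,
                                      theta=theta + np.pi / 2,
                                      lambd=wavelength, gamma=0.5, psi=0)
            filtered = cv2.filter2D(img, cv2.CV_32F, kern)
            out[y:y+block, x:x+block] = filtered[y:y+block, x:x+block]
    return out
```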
Step 2: constructing sparse filter bank for fingerprint feature extraction
The constructed filter bank must support sparse representation and image reconstruction, decomposing and reconstructing images rapidly. The filter bank is trainable and can be trained on specific data to obtain better extraction results. Specifically, a three-dimensional r × r × m array is randomly initialized as the initial filter bank, where r × r is the filter size and m is the number of filters. Then 1000 enhanced training-set fingerprint images are randomly cut into blocks, 10 blocks of 20 × 20 pixels taken at random from each image, for 10000 fingerprint blocks in total (a sketch of this step follows). Finally, the constraint problem that the filter bank construction must satisfy is solved with the Split Bregman algorithm, using the 10000 fingerprint blocks as input to optimize the filter bank, and training finally yields the filter bank A.
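A sketch of this block-cutting step, under the assumption that the 1000 enhanced training images are available as NumPy arrays:

```python
import numpy as np

def cut_training_blocks(images, per_image=10, size=20, seed=0):
    """Randomly cut `per_image` blocks of size x size pixels from each
    enhanced training fingerprint image (1000 images x 10 blocks = 10000)."""
    rng = np.random.default_rng(seed)
    blocks = []
    for img in images:
        h, w = img.shape
        for _ in range(per_image):
            y = rng.integers(0, h - size + 1)
            x = rng.integers(0, w - size + 1)
            blocks.append(img[y:y+size, x:x+size])
    # Stack as rows of a data matrix W: one vectorized block per row.
    return np.stack(blocks).reshape(len(blocks), size * size)
```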
First, the filter bank is obtained by solving:

min over a_1, ..., a_m of Σ_{j=1}^{N} Σ_{i=1}^{m} φ(v_{i,j})

subject to the perfect reconstruction condition

Σ_{i=1}^{m} Σ_{n} a_i(n) · a_i(n − M·k) = δ_k / |det(M)|  for all k ∈ Z^d

where the filter bank A = {a_1, a_2, ..., a_m}, with m decomposition filters, is constructed from the N training fingerprint images x_1, ..., x_N; φ(·) is the sparsity-inducing function; S_{a_i} denotes the decomposition operator of filter a_i; v_{i,j} is the one-dimensional coefficient (one slice of the multi-dimensional coefficients) obtained by applying the decomposition operator S_{a_i} to training fingerprint image x_j; Q refers to a filter bank satisfying this perfect reconstruction condition; M is the downsampling matrix, taken as the identity matrix when no downsampling of the a_i is involved; δ_k = 1 if k = 0 and δ_k = 0 otherwise; det(·) denotes the determinant; and Z^d is the set of d-dimensional integer vectors.
Specifically, the above construction of filter bank A takes the l1 norm as the sparsity-inducing function, φ(·) = ‖·‖_1. The problem of solving for the filter bank A then becomes a constrained optimization problem:

min over A of ‖W·A‖_{1,1}  subject to  A^T · A = I

where W is the matrix formed from the vectorized training fingerprint blocks, so that W·A contains the decomposition coefficients.
This optimization problem is solved with the Split Bregman algorithm, computed as:

min over D, A, P of ‖D‖_{1,1}  with the splitting  D = W·A, P = A, P^T · P = I

where ‖·‖_{1,1} denotes l1 regularization and W·A is the matrix of multidimensional coefficients obtained by decomposing the training fingerprint images. A^T · A = I states that A is an orthogonal matrix, i.e. the filter bank A satisfies orthogonality. The Split Bregman algorithm converts the equality-constrained optimization problem into an unconstrained one. The optimal result is obtained by iterating the r × r × m three-dimensional arrays D, A, and P in turn: for the D iteration, D is treated as the variable and all other quantities as constants, and similarly for A and P. When the difference between D from the previous iteration and D from the current iteration is less than 0.01, the optimum is reached. Solving this problem finally yields the filter bank A.
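The following is a hedged sketch of these alternating updates, not the patent's exact iteration: the D-update is the usual l1 soft-thresholding, the A-update projects onto the orthogonality constraint via an SVD (exact only under further assumptions on W), and the penalty weight and Bregman variable are illustrative choices the text does not specify.

```python
import numpy as np

def soft_threshold(x, t):
    """Elementwise l1 shrinkage: the closed-form D-update."""
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def train_filter_bank(W, m, lam=1.0, iters=200, tol=0.01, seed=0):
    """Split Bregman sketch for  min_A ||W A||_{1,1}  s.t.  A^T A = I.

    W: (n_blocks, r*r) matrix of vectorized fingerprint blocks.
    m: number of filters (columns of A), with m <= r*r.
    Returns A with orthonormal columns; column A[:, i] is filter a_i.
    """
    rng = np.random.default_rng(seed)
    d = W.shape[1]
    # Random orthonormal initialization of the (r*r) x m filter bank.
    A, _ = np.linalg.qr(rng.standard_normal((d, m)))
    B = np.zeros((W.shape[0], m))              # Bregman variable for D = WA
    D_prev = np.full((W.shape[0], m), np.inf)
    for _ in range(iters):
        # D-update: shrinkage, treating D as the only variable.
        D = soft_threshold(W @ A + B, 1.0 / lam)
        # A-update: SVD-based projection onto A^T A = I (a common
        # Procrustes-style heuristic, exact when W^T W = I).
        U, _, Vt = np.linalg.svd(W.T @ (D - B), full_matrices=False)
        A = U @ Vt
        # Bregman update enforcing the split constraint D = WA.
        B = B + W @ A - D
        if np.max(np.abs(D - D_prev)) < tol:   # stop when D stabilizes
            break
        D_prev = D
    return A
```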
Second, the features of the fingerprint to be detected are extracted, as follows:
the filter bank a is constructed by the above method. And (3) performing convolution operation on the fingerprint image to be detected enhanced in the step (1) and each filter in the constructed filter bank A to obtain a group of multidimensional coefficients. For each dimension of the multidimensional coefficients, called a channel, a peak is found within a block of size 3 x 3 regions within the channel, and the above operation is performed for each channel by comparing whether the peak positions in different channels are the same. And if the positions are the same and are all larger than the set threshold value mu, marking as a candidate point. Considering whether the candidate point is located in a foreground region or a background region of the fingerprint image through the fingerprint segmentation region obtained in the step 1, if the obtained candidate point is located in the foreground region of the fingerprint image, identifying the candidate point as a characteristic point, and if the candidate point is located in the background region, not identifying the candidate point; through the extraction of the key feature points in the fingerprint to be detected, n key feature points are extracted in total, and each feature point is represented by an x coordinate and a y coordinate.
In this way fingerprint features can be extracted quickly, avoiding the erroneous results that traditional methods produce when minutiae are marked at the wrong positions. A sketch of this peak-screening step follows.
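The sketch below uses SciPy's maximum filter for the 3 × 3 peak search. The threshold value, the convolution mode, and the rule that a candidate must peak at the same position in at least two channels are assumptions, since the patent does not quantify them.

```python
import numpy as np
from scipy.ndimage import maximum_filter
from scipy.signal import fftconvolve

def extract_feature_points(img, A, fg_mask, r=20, mu=0.5):
    """Convolve the enhanced image with every filter of bank A, find 3x3
    local peaks per channel, and keep positions that peak in more than one
    channel, exceed mu, and lie in the foreground mask from step 1."""
    filters = A.T.reshape(-1, r, r)                 # each a_i as r x r kernel
    votes = np.zeros(img.shape, dtype=int)
    for kern in filters:                            # one channel per filter
        chan = fftconvolve(img, kern, mode="same")
        is_peak = (chan == maximum_filter(chan, size=3)) & (chan > mu)
        votes += is_peak
    # Candidate points: the same peak position in different channels.
    ys, xs = np.nonzero((votes >= 2) & fg_mask.astype(bool))
    return np.stack([xs, ys], axis=1)               # (n, 2) x, y coordinates
```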
Step 3: matching fingerprint feature points.
FIG. 4 is the flow chart of fingerprint matching. The fingerprint to be detected, collected in the field, is compared one by one against the fingerprints in the fingerprint database to compute similarity scores, and the fingerprint matching the fingerprint to be detected is found in the database; this is therefore called one-to-many matching. The similarity score is obtained by a matching calculation between the fingerprint to be detected and each fingerprint in the database; the higher the score, the more likely the two fingerprints match. The fingerprint database contains fingerprint images A, B, and so on; this known database is matched against the fingerprint image to be detected using the feature points obtained in steps 1 and 2.
Specifically, the feature point sets obtained in step 2 for the fingerprint image to be detected and for a fingerprint image in the database are denoted dec_1 and dec_2, respectively.
The feature point sets dec_1 and dec_2 are matched by computing Euclidean distances. For a feature point d_1 in dec_1, the Euclidean distance to every feature point in dec_2 is computed, and the nearest feature point in dec_2 is denoted d_2. The distances from d_2 to every feature point in dec_1 are then computed; if the nearest point so obtained is d_1 itself, the pair counts as a matching point. This calculation is carried out for every feature point in dec_1 against dec_2. The similarity is the number of matching points as a percentage of the total number of feature points in the fingerprint image to be detected and the known fingerprint image (dec_1 and dec_2).
The fingerprint image to be detected is compared against every known fingerprint in the database, and the fingerprint image with the highest similarity score in the database is taken as the matching result. A sketch of this matching procedure follows.
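A sketch of the mutual-nearest-neighbor matching and similarity score described above, together with the 1:N identification loop:

```python
import numpy as np

def similarity_score(dec1, dec2):
    """Mutual nearest-neighbor matching between two feature point sets.

    dec1, dec2: (n, 2) and (k, 2) arrays of (x, y) feature points.
    Returns the number of matching points as a percentage of the total
    number of feature points in both images, as the text defines it.
    """
    if len(dec1) == 0 or len(dec2) == 0:
        return 0.0
    # Pairwise Euclidean distances, shape (n, k).
    dists = np.linalg.norm(dec1[:, None, :] - dec2[None, :, :], axis=2)
    nn12 = dists.argmin(axis=1)       # nearest point in dec2 for each d_1
    nn21 = dists.argmin(axis=0)       # nearest point in dec1 for each d_2
    # d_1 and d_2 match when each is the other's nearest neighbor.
    matches = sum(1 for i, j in enumerate(nn12) if nn21[j] == i)
    return 100.0 * matches / (len(dec1) + len(dec2))

def identify(query_points, database):
    """1:N matching: return database entries sorted by similarity score."""
    scores = {name: similarity_score(query_points, pts)
              for name, pts in database.items()}
    return sorted(scores.items(), key=lambda kv: kv[1], reverse=True)
```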
Step 4: experimental procedure and evaluation of results.
The data set used for fingerprint matching is NIST Special Database 4, with image size 512 × 512 pixels. 200 fingerprint images were selected from the data set and divided into 100 classes, each class containing two impressions of the same finger of the same subject; the two differ somewhat in angle, pressing depth, and fingerprint quality. The data set was labeled so that the two impressions of the same finger carry the same label serial number: in each class, one impression, the pressed (inked) fingerprint, forms the known database, and the other plays the role of the field fingerprint, i.e., the fingerprint to be detected. A field fingerprint is selected at random, processed with the three steps above, and its similarity to every image in the known fingerprint database is computed to obtain a similarity ranking. The evaluation metric of the invention is this similarity ranking, reported as rank 5, rank 10, rank 15, and rank 20; for example, a field fingerprint whose correct match appears within the top 5 similarity ranks after matching against the fingerprint library counts toward rank 5. The accuracy with which the unknown fingerprints find their matching result among the 100 classes is shown in the table:
Similarity ranking    Rank 5   Rank 10   Rank 15   Rank 20
Method A              18%      25%       34%       42%
Method B              26%      33%       37%       51%
Proposed method       46%      62%       70%       80%
Method A uses a traditional fingerprint extraction pipeline: image enhancement, mean filtering and binarization, thinning, and minutiae detection, followed by a traditional fingerprint matching method. Method B builds on method A but uses the MCC matching method for matching. The comparison shows that, when fingerprint quality is poor, the feature extraction and matching method for field fingerprint images provided by the invention handles these problems better. A sketch of the rank-k evaluation follows.
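For completeness, a sketch of the rank-k accuracy computation behind the table; the data structures are assumed for illustration:

```python
def rank_k_accuracy(rankings, labels, k):
    """rankings: dict query_id -> list of database ids sorted by score
       labels:   dict query_id -> id of the true mate in the database
       Returns the fraction of queries whose true mate is in the top k."""
    hits = sum(1 for q, ranked in rankings.items()
               if labels[q] in ranked[:k])
    return hits / len(rankings)

# Example: accuracies = {k: rank_k_accuracy(rankings, labels, k)
#                        for k in (5, 10, 15, 20)}
```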

Claims (4)

1. A method for extracting and matching feature points of a field fingerprint, characterized by comprising the following steps:
(1) preprocessing a fingerprint image to be detected with a convolutional neural network, specifically: determining the fingerprint orientation, segmenting the fingerprint region, and enhancing the normalized fingerprint image;
(2) constructing a sparse filter bank to extract fingerprint features, specifically: solving, with the Split Bregman algorithm, the constraint problem that the filter bank construction must satisfy, and training the filter bank on a training fingerprint data set, the trained filter bank having m decomposition filters a_1, a_2, ..., a_m; passing the fingerprint image to be detected, enhanced in step 1, through the filter bank to obtain a group of multidimensional coefficients with tensor structure, and screening them to obtain the key feature points of the fingerprint;
(3) matching fingerprint images, specifically: computing similarity scores between the fingerprint image to be detected and the known fingerprint images in the fingerprint database, and determining, according to the similarity scores, the fingerprint image in the database that matches the fingerprint image to be detected.
2. The method for extracting and matching feature points of a field fingerprint according to claim 1, wherein step 1 further comprises:
first, confirming the fingerprint orientation, specifically as follows:
the convolutional neural network for extracting the fingerprint orientation in step 1 consists, in order, of a first, second, and third convolution-pooling block, followed by three parallel branches, the N-dimensional tensors obtained from the branches being summed to give the output; the first two convolution-pooling blocks each consist of three convolution blocks and one pooling layer, and the third consists of one convolution block and one pooling layer; each convolution block consists, in order, of a convolution layer, a BatchNorm layer, and a PReLU layer; each branch is a dilated convolution block with a different sampling rate, consisting, in order, of a dilated convolution layer, two convolution layers, and a sigmoid activation layer;
normalizing the fingerprint image to be detected and passing it through the network to obtain, for each pixel, the probability of N discrete angles, the predicted angle at pixel (x, y) being represented as an N-dimensional vector p_ori whose i-th element p_ori(i) is the probability that the ridge direction value at that position is ⌊i · 180 / N⌋, where ⌊·⌋ denotes rounding down;
the fingerprint orientation being calculated as:

O(x, y) = (1/2) · atan2(s(x, y), c(x, y))

where

c(x, y) = Σ_i p_ori(i) · cos(2θ_i),  s(x, y) = Σ_i p_ori(i) · sin(2θ_i),  θ_i = i · 180 / N,

atan2(·,·) is the two-argument arctangent function, c(x, y) is the average ridge cosine direction value of pixel (x, y), and s(x, y) is the average ridge sine direction value of pixel (x, y);
second, segmenting the fingerprint region, specifically as follows:
the convolutional neural network for fingerprint region segmentation in step 1 having the same structure as the orientation network, consisting, in order, of three convolution-pooling blocks and three parallel dilated convolution branches with different sampling rates, the matrices obtained from the branches being summed to give the output; the normalized fingerprint image to be detected is input into the network to obtain, for each pixel, a score for being foreground; scores greater than 0 are recorded as foreground and scores less than or equal to 0 as background, realizing the segmentation of the fingerprint image to be detected into non-background and background regions;
finally, enhancing the fingerprint image with Gabor filters, an existing technique, using the ridge direction obtained in step 1, to finally obtain the enhanced fingerprint image.
3. The method for extracting and matching feature points of a field fingerprint according to claim 1, wherein step 2 further comprises:
2.1) constructing the sparse filter bank as follows:
first, randomly initializing a three-dimensional r × r × m array as the initial filter bank, where r × r is the filter size and m is the number of filters; second, randomly cutting 1000 enhanced training-set fingerprint images into blocks, 10 blocks of 20 × 20 pixels taken at random from each image, for 10000 fingerprint blocks in total; finally, solving the constraint problem that the filter bank construction must satisfy with the Split Bregman algorithm, using the 10000 fingerprint blocks as input to optimize the filter bank, and finally training to obtain the filter bank A;
the construction of the filter bank must satisfy the following constraints:

min over a_1, ..., a_m of Σ_{j=1}^{N} Σ_{i=1}^{m} φ(v_{i,j})

subject to the perfect reconstruction condition

Σ_{i=1}^{m} Σ_{n} a_i(n) · a_i(n − M·k) = δ_k / |det(M)|  for all k ∈ Z^d

where the filter bank, constructed from the N training-set fingerprint images x_1, ..., x_N, has m decomposition filters a_1, a_2, ..., a_m; φ(·) is the sparsity-inducing function; S_{a_i} denotes the decomposition operator of filter a_i; v_{i,j} is one of the multidimensional coefficients obtained by applying the decomposition operator S_{a_i} to training fingerprint image x_j; Q refers to a filter bank satisfying the perfect reconstruction condition; M is the downsampling matrix, taken as the identity matrix when no downsampling of the a_i is involved; δ_k = 1 if k = 0 and δ_k = 0 otherwise; det(·) denotes the determinant; and Z^d is the set of d-dimensional integer vectors; the above problem is solved with the Split Bregman algorithm to obtain the filter bank;
2.2) extracting the features of the fingerprint to be detected as follows:
constructing the filter bank A by the above method, and convolving the fingerprint image to be detected, enhanced in step 1, with each filter in the constructed filter bank A to obtain a group of multidimensional coefficients; each dimension of the multidimensional coefficients is called a channel; within each channel, finding a peak in every block of 3 × 3 size, performing this operation for each channel, and comparing whether the peak positions in different channels are the same; if the positions are the same and all exceed a set threshold μ, marking the position as a candidate point; checking, using the fingerprint segmentation obtained in step 1, whether each candidate point lies in the foreground or the background region of the fingerprint image, identifying a candidate point in the foreground region as a feature point and discarding a candidate point in the background region; the identified feature points being the key feature points.
4. The method for extracting and matching feature points of a field fingerprint according to claim 1, wherein the similarity in step 3 is calculated as follows:
the feature point sets of the fingerprint image to be detected and of a known fingerprint image are obtained by steps 1 and 2; for a feature point A of the fingerprint image to be detected, the Euclidean distance to each feature point in the known fingerprint image is calculated; from the feature point in the known fingerprint with the shortest Euclidean distance, the Euclidean distance to each feature point in the fingerprint to be detected is calculated in turn, and if the feature point with the shortest distance is the same point A, the pair represents a matching point; all feature points in the fingerprint image to be detected are traversed to obtain all matching points; the similarity is the number of matching points as a percentage of the total number of feature points of the fingerprint image to be detected and the known fingerprint image.
The matching fingerprint is confirmed as follows: the fingerprint image to be detected is compared with each known fingerprint in the fingerprint library, and the matching result is the fingerprint image with the highest similarity score in the library.
CN202111683766.0A 2021-12-29 2021-12-29 Method for extracting and matching characteristic points of field fingerprints Active CN114332960B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202111683766.0A CN114332960B (en) 2021-12-29 2021-12-29 Method for extracting and matching characteristic points of field fingerprints

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202111683766.0A CN114332960B (en) 2021-12-29 2021-12-29 Method for extracting and matching characteristic points of field fingerprints

Publications (2)

Publication Number Publication Date
CN114332960A (en) 2022-04-12
CN114332960B CN114332960B (en) 2024-06-14

Family

ID=81022777

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202111683766.0A Active CN114332960B (en) 2021-12-29 2021-12-29 Method for extracting and matching characteristic points of field fingerprints

Country Status (1)

Country Link
CN (1) CN114332960B (en)

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN1818927A (en) * 2006-03-23 2006-08-16 北京中控科技发展有限公司 Fingerprint identifying method and system
US20180357467A1 (en) * 2017-06-08 2018-12-13 Moqi Inc. System and Method for Fingerprint Recognition
CN111915618A (en) * 2020-06-02 2020-11-10 华南理工大学 Example segmentation algorithm and computing device based on peak response enhancement
CN111951252A (en) * 2020-08-17 2020-11-17 中国科学院苏州生物医学工程技术研究所 Multi-sequence image processing method, electronic device and storage medium


Also Published As

Publication number Publication date
CN114332960B (en) 2024-06-14

Similar Documents

Publication Publication Date Title
Jain et al. Intelligent biometric techniques in fingerprint and face recognition
Halici et al. Introduction to fingerprint recognition
EP2174261B1 (en) Fingerprint matching method and apparatus
Zaeri Minutiae-based fingerprint extraction and recognition
JP7130905B2 (en) Fast and Robust Dermatoglyphic Mark Minutia Extraction Using Feedforward Convolutional Neural Networks
CN105138974B (en) A kind of multi-modal Feature fusion of finger based on Gabor coding
Kumar et al. A robust fingerprint matching system using orientation features
Naderi et al. Fusing iris, palmprint and fingerprint in a multi-biometric recognition system
Win et al. Texture feature based fingerprint recognition for low quality images
Song et al. Fingerprint indexing based on pyramid deep convolutional feature
Khodadoust et al. Partial fingerprint identification for large databases
CN109523484B (en) Fractal feature-based finger vein network repair method
Chen et al. A finger vein recognition algorithm based on deep learning
Nachar et al. Hybrid minutiae and edge corners feature points for increased fingerprint recognition performance
Fang et al. Deep belief network based finger vein recognition using histograms of uniform local binary patterns of curvature gray images
Cappelli et al. The state of the art in fingerprint classification
Girgis et al. A robust method for partial deformed fingerprints verification using genetic algorithm
CN114332960B (en) Method for extracting and matching characteristic points of field fingerprints
CN114913610A (en) Multi-mode identification method based on fingerprints and finger veins
Nozaripour et al. Image classification via convolutional sparse coding
Santosh et al. Recent Trends in Image Processing and Pattern Recognition: Third International Conference, RTIP2R 2020, Aurangabad, India, January 3–4, 2020, Revised Selected Papers, Part I
Hariprasath et al. Bimodal biometric pattern recognition system based on fusion of iris and palmprint using multi-resolution approach
AlShemmary et al. Siamese Network-Based Palm Print Recognition
Kuban et al. A NOVEL MODIFICATION OF SURF ALGORITHM FOR FINGERPRINT MATCHING.
Turroni Fingerprint Recognition: Enhancement, Feature Extraction and Automatic Evaluation of Algorithms

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant