CN114677552A - Fingerprint detail database labeling method and system for deep learning - Google Patents


Info

Publication number: CN114677552A
Application number: CN202111386111.7A
Authority: CN (China)
Other languages: Chinese (zh)
Prior art keywords: fingerprint, image, deep learning, gray, module
Legal status: Pending (an assumption by Google Patents, not a legal conclusion)
Inventors: 郑世宝, 赵洪田, 王玉
Current and original assignee: Shanghai Jiaotong University (listed assignee may be inaccurate)
Application filed by Shanghai Jiaotong University
Priority to CN202111386111.7A

Classifications

    • G06F 18/214 — Pattern recognition; analysing; design or setup of recognition systems or techniques; generating training patterns; bootstrap methods, e.g. bagging or boosting
    • G06T 5/30 — Image enhancement or restoration using local operators; erosion or dilatation, e.g. thinning
    • G06T 5/70 — Image enhancement or restoration; denoising; smoothing
    • G06T 7/11 — Image analysis; segmentation; region-based segmentation
    • G06T 7/136 — Image analysis; segmentation; edge detection involving thresholding
    • G06T 2207/20021 — Indexing scheme for image analysis or image enhancement; special algorithmic details; dividing image into blocks, subimages or windows


Abstract

The invention provides a fingerprint detail database labeling method and system for deep learning, comprising the following steps: fingerprint region-of-interest segmentation and mask generation based on gray variance; a normalization transform based on the image intensity mean and variance; fingerprint orientation estimation based on a gradient method; frequency estimation based on the directional window and the ridge-line gray projection signal; fingerprint image enhancement based on Gabor filtering and the orientation and frequency fields; fingerprint binarization and thinning based on morphological image-processing operations; fingerprint feature extraction based on the prior definition of minutiae; and a simple check and correction by the annotator using common knowledge. Software developed with the method has the advantages of high detection speed, an effective reduction of the annotators' workload, a friendly interface and simple operation, and is well suited to the practical conditions of fingerprint data labeling.

Description

Fingerprint detail database labeling method and system for deep learning
Technical Field
The invention relates to the technical field of image processing and pattern recognition, in particular to a fingerprint detail database labeling method and system for deep learning.
Background
With the rapid development of computing architectures, deep learning and mobile chip technology, fingerprint identification has become the most widely applied biometric identification technology in daily life owing to its portability, security and uniqueness. Automatic fingerprint identification plays an important role in judicial authentication, access control and attendance, entry-exit management and mobile payment. Generally, fingerprint identification is divided into an enrollment phase and an identification-and-authentication phase. The enrollment phase generally involves fingerprint image acquisition, foreground segmentation, normalization, orientation/frequency field estimation, enhancement, binarization, thinning and similar steps; the identification-and-authentication phase generally comprises feature point extraction and pairing, i.e., finding the correspondence between minutiae in the enrolled fingerprint and minutiae in the query fingerprint. Minutiae extraction is the most central and fundamental step in fingerprint identification: accurate extraction is the premise of subsequent matching and authentication. This stage can be summarized as a classical yet complex pattern recognition problem and is one of the challenging parts of an identification system.
Patent document CN112269817A (application number: CN202011331925.6) discloses a deep learning sample labeling method based on big data, which includes: receiving user annotation input related to a first set of sample objects in a sample library; training a preference prediction model comprising a weight vector with a weighted value for each of a plurality of features associated with the sample library, the library comprising the first set of sample objects presented to the user, the weighted value for each feature being trained using the received user annotation input; selecting a second set of sample objects to be provided to the user, the second set providing more prior knowledge, gained from the user annotation input, than other unlabeled sample objects in the library; and pushing a preset number of preference objects to the user according to the trained preference prediction model.
In recent years, deep learning has been widely used in pattern recognition and computer vision tasks, showing impressive results in object detection in particular. Fingerprint minutiae detection can be cast as a small-object detection task. Deep learning is a data-driven technique, and in recent years, owing to the introduction of privacy-protection policies, many important fingerprint data sets (such as NIST SD27) have been withdrawn. Generating effective fingerprint minutiae data has therefore become a primary task in promoting the application of deep learning to the minutiae extraction problem, and is one of the problems to be solved.
Disclosure of Invention
In view of the defects in the prior art, the invention aims to provide a fingerprint detail database labeling method and system for deep learning.
The fingerprint detail database labeling method for deep learning provided by the invention comprises the following steps:
Step 1: input an original fingerprint image I, perform fingerprint region-of-interest segmentation based on gray variance, and generate a mask;
Step 2: normalize the original fingerprint image I;
Step 3: estimate the fingerprint orientation based on a gradient method;
Step 4: perform frequency estimation based on the mask, the directional window and the ridge gray projection;
Step 5: perform fingerprint image enhancement based on Gabor filtering, the orientation field and the frequency field;
Step 6: binarize the enhanced image;
Step 7: thin the binarized image, deleting edge pixels of the ridges to obtain a ridge skeleton image one pixel wide;
Step 8: extract minutiae from the thinned image based on their prior definition, obtaining a feature point set M;
Step 9: based on the feature point set M, delete pseudo feature points and correct the feature points that do not meet the preset requirement, obtaining the final feature point set M_f.
Preferably, step 1 comprises: dividing the input original fingerprint image I into image blocks of a preset size, computing the standard deviation of each image block, and replacing the values of all pixels in the block with the block standard deviation I_std; a threshold s_thresh is set: when I_std ≥ s_thresh the block belongs to the fingerprint foreground, otherwise to the background, and a mask I_M is generated.
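A minimal NumPy sketch of this block-variance segmentation (illustrative only, not the patent's software; the block size `blksze` and threshold `sthresh` are placeholder values, not the patent's presets):

```python
import numpy as np

def segment_roi(img, blksze=16, sthresh=10.0):
    """Fingerprint ROI segmentation by block gray-level standard deviation.

    Each blksze x blksze block is summarized by the standard deviation of its
    pixels; blocks with std >= sthresh are marked as fingerprint foreground.
    Returns a boolean mask I_M of the same size as the input image.
    """
    h, w = img.shape
    mask = np.zeros((h, w), dtype=bool)
    for i in range(0, h, blksze):
        for j in range(0, w, blksze):
            block = img[i:i + blksze, j:j + blksze].astype(float)
            mask[i:i + blksze, j:j + blksze] = block.std() >= sthresh
    return mask
```

High-contrast ridge/valley blocks have large gray variance and survive the threshold; flat background blocks do not.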
Preferably, step 2 comprises: first computing the mean Mean(I) and variance Var(I) of the image I, then applying the transform

    I_N1(i, j) = ( I(i, j) − Mean(I) ) / sqrt( Var(I) )

which normalizes the pixel values to mean 0 and variance 1, denoted I_N1; subsequently, combining with the mask I_M, the mean and variance of the fingerprint foreground (I_N1 ⊙ I_M) are computed and the fingerprint foreground image is normalized with them, obtaining I_N2.
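A sketch of the two-pass normalization (zero mean, unit variance, optionally restricted to the masked foreground; an illustrative NumPy rendering, not the patent's implementation):

```python
import numpy as np

def normalize(img, mask=None):
    """Normalize pixel intensities to zero mean and unit variance.

    If a boolean foreground mask is given, the mean and standard deviation
    are computed over the foreground pixels only, as in the second pass of
    step 2 (statistics of I_N1 restricted by the mask I_M).
    """
    img = img.astype(float)
    sel = img[mask] if mask is not None else img
    return (img - sel.mean()) / sel.std()
```

The first call (no mask) yields I_N1; calling again with the foreground mask yields the foreground-normalized image.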
Preferably, step 3 comprises:
First, the fingerprint image I is divided into squares of size w × w; a predefined filter F is established and its gradient components F_x, F_y in the horizontal and vertical directions are obtained. Applying F_x, F_y to the fingerprint image I yields the gradient images along the x and y axes, and the orientation at each ridge point is evaluated by finding the principal direction of gradient change.
The gradient covariances at each ridge point are convolved with a low-pass filter; the double angle of the gradient is then computed and converted into a continuous vector field (its sine and cosine components), the components are smoothed by low-pass filtering, and the fingerprint ridge orientation is finally obtained by arctangent.
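A sketch of the gradient/double-angle estimate per block (an illustrative NumPy rendering; the low-pass smoothing of the covariances and of the sine/cosine components is omitted for brevity, so this is the unsmoothed core of step 3 only):

```python
import numpy as np

def orientation_field(img, blk=16):
    """Estimate the ridge orientation of each blk x blk block.

    Double-angle trick: 2*theta_grad = atan2(2*Gxy, Gxx - Gyy) gives the
    dominant gradient direction; the ridge runs perpendicular to it, hence
    the +pi/2. Returned angles are in radians, meaningful modulo pi.
    """
    gy, gx = np.gradient(img.astype(float))
    gxx, gyy, gxy = gx * gx, gy * gy, gx * gy
    h, w = img.shape
    theta = np.zeros((h // blk, w // blk))
    for i in range(0, h - blk + 1, blk):
        for j in range(0, w - blk + 1, blk):
            sxx = gxx[i:i + blk, j:j + blk].sum()
            syy = gyy[i:i + blk, j:j + blk].sum()
            sxy = gxy[i:i + blk, j:j + blk].sum()
            theta[i // blk, j // blk] = (
                0.5 * np.arctan2(2 * sxy, sxx - syy) + np.pi / 2
            )
    return theta
```

On vertical ridges (intensity varying only along x) the gradient is horizontal, so the estimated ridge direction comes out at pi/2.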
Preferably, the step 4 comprises:
the image is divided into w × w blocks and each block is rotated so that the x axis of its coordinate system is perpendicular to the ridge direction and the y axis is parallel to it; a coordinate system is established on this basis;
the invalid region of the rotated block is cropped; the gray values of the image inside the directional window are projected onto the x axis, and the projection sum of the pixel gray values on the x axis is recorded as proj; a two-dimensional order-statistic filtering operation is applied to proj with a filter window [1, windsze], taking the value of the largest pixel in the window as the result at each position, and the filtered result is denoted mpts;
the positions where proj == mpts are queried and recorded as the coordinate set midx of projected peak points; the average single-peak spacing between the maximum and minimum coordinates in midx is taken as the average inter-ridge distance, denoted wavelength, and finally the frequency freq is computed as the reciprocal of wavelength.
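A sketch of the projection/peak steps on a block that is assumed already rotated and cropped (ridges parallel to the y axis); an illustrative NumPy rendering with a placeholder order-statistic window width `windsze`:

```python
import numpy as np

def ridge_frequency(block, windsze=5):
    """Estimate the ridge frequency of a block whose ridges run along y.

    proj : column-wise sum of gray values (projection onto the x axis)
    mpts : proj after a width-windsze maximum (order-statistic) filter
    midx : positions where proj equals mpts, i.e. the projection peaks
    wavelength = mean peak spacing; the frequency is its reciprocal.
    """
    proj = block.astype(float).sum(axis=0)
    pad = windsze // 2
    padded = np.pad(proj, pad, mode='edge')
    mpts = np.array([padded[k:k + windsze].max() for k in range(proj.size)])
    midx = np.flatnonzero(proj == mpts)
    if midx.size < 2:
        return 0.0
    wavelength = (midx[-1] - midx[0]) / (midx.size - 1)
    return 1.0 / wavelength
```

For a sinusoidal ridge profile of period 8 pixels, the peaks fall 8 apart and the estimated frequency is 1/8.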
Preferably, the step 5 comprises:
a preset number of Gabor filters is generated, and Gabor filters of the corresponding orientation and frequency are applied to the different image blocks for filtering enhancement; the specific operations are:
screening the frequency map, selecting frequencies greater than zero and rounding them;
fixing the angle increment AngleInc when generating the filters, so that the number of filters is AngleNum = 180° / AngleInc; interpolation is then performed to obtain the corresponding Gabor filter templates, and each image block is filtered with the filter indexed by its orientation and frequency, obtaining the enhanced image I_E.
Preferably, step 6 comprises: converting images of different gray levels into binary images, increasing the contrast between the ridges and the background, ignoring ridge brightness and unifying the ridge gray level; the gray value of ridge pixels is set to 0 and the background pixel value to 255.
Step 7 comprises: applying a preset number n of iterations of morphological processing to the binary image until it no longer changes; the negative influence of false end points produced at the image border is eliminated by truncating the mask and applying it to the thinned image.
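Step 7 does not name a concrete thinning operator (a bwmorph-style MATLAB erosion is mentioned later in the description); as an assumed stand-in, a Zhang–Suen thinning sketch that iterates until the image no longer changes:

```python
import numpy as np

def zhang_suen_thin(img):
    """Thin a boolean ridge image (True = ridge) to a one-pixel-wide
    skeleton by iteratively deleting border pixels until nothing changes
    (Zhang-Suen two-subiteration scheme)."""
    img = img.astype(bool)
    changed = True
    while changed:
        changed = False
        for step in (0, 1):
            to_del = []
            h, w = img.shape
            for i in range(1, h - 1):
                for j in range(1, w - 1):
                    if not img[i, j]:
                        continue
                    # neighbours in the order N, NE, E, SE, S, SW, W, NW
                    p = [img[i-1, j], img[i-1, j+1], img[i, j+1],
                         img[i+1, j+1], img[i+1, j], img[i+1, j-1],
                         img[i, j-1], img[i-1, j-1]]
                    b = sum(p)  # number of ridge neighbours
                    if not (2 <= b <= 6):
                        continue
                    # a = number of 0 -> 1 transitions around the cycle
                    a = sum((not p[k]) and p[(k + 1) % 8] for k in range(8))
                    if a != 1:
                        continue
                    if step == 0:
                        if (p[0] and p[2] and p[4]) or (p[2] and p[4] and p[6]):
                            continue
                    else:
                        if (p[0] and p[2] and p[6]) or (p[0] and p[4] and p[6]):
                            continue
                    to_del.append((i, j))
            for i, j in to_del:
                img[i, j] = False
            changed = changed or bool(to_del)
    return img
```

A thick bar collapses to a thin medial line; masking off the border afterwards, as the step describes, removes the false end points that the image edge would otherwise create.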
Preferably, the step 8 comprises:
on the thinned fingerprint image, counting for each pixel the number of its eight neighborhood pixels that differ from it: a value of 7 indicates that the point is an end point, and a value of 5 indicates that the pixel is a bifurcation (cross) point;
an erosion operation is applied to the mask to reduce false feature points at the edges and in the background.
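A sketch of the neighbor-difference rule on a boolean skeleton (True = ridge pixel of the one-pixel-wide skeleton; the mask-erosion cleanup is assumed already applied; an illustrative NumPy rendering of the counts 7 and 5 stated above):

```python
import numpy as np

def detect_minutiae(skel):
    """Detect minutiae on a one-pixel-wide boolean skeleton (True = ridge).

    For each ridge pixel, count how many of its eight neighbors differ from
    it: 7 differing neighbors (one ridge neighbor) marks an end point, and
    5 differing neighbors (three ridge neighbors) marks a bifurcation.
    """
    endings, bifurcations = [], []
    h, w = skel.shape
    for i in range(1, h - 1):
        for j in range(1, w - 1):
            if not skel[i, j]:
                continue
            ridge_nbrs = int(skel[i - 1:i + 2, j - 1:j + 2].sum()) - 1
            if 8 - ridge_nbrs == 7:
                endings.append((i, j))
            elif 8 - ridge_nbrs == 5:
                bifurcations.append((i, j))
    return endings, bifurcations
```

On a Y-shaped skeleton the three arm tips are reported as end points and the junction as a bifurcation; spurs and breaks produce extra detections, which is why the distance-based pruning of step 9 follows.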
Preferably, the step 9 includes:
based on the feature point set M, pseudo feature points are deleted using the distances between different feature points: the distance between two feature points p_1, p_2 is computed, and redundancy is deleted through a preset threshold thresh, obtaining the filtered feature point set M_p-s;
the filtered feature point set M_p-s is then edited manually: feature points whose labels do not meet the preset requirement are corrected by deleting existing outliers, moving displaced points and adding unlabeled feature points, obtaining the final feature point set M_f.
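A sketch of the distance-based pruning pass (greedy in detection order; the pixel threshold is an illustrative value, and the manual-editing stage that follows is of course not automated):

```python
import numpy as np

def prune_minutiae(points, thresh=8.0):
    """Delete pseudo-minutiae that fall within `thresh` pixels of a point
    already kept; clusters produced by spurs and ridge breaks collapse to
    one representative (a greedy pass over the detection order)."""
    kept = []
    for p in points:
        if all(np.hypot(p[0] - q[0], p[1] - q[1]) >= thresh for q in kept):
            kept.append(p)
    return kept
```

As the description notes, such pruning can also discard genuine minutiae that happen to lie close together, which is what the subsequent manual correction addresses.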
The fingerprint detail database annotation system for deep learning provided by the invention comprises:
Module M1: inputting an original fingerprint image I, performing fingerprint interest region segmentation based on gray variance, and generating a mask;
module M2: normalizing the original fingerprint image I;
module M3: estimating the fingerprint direction based on a gradient method;
module M4: performing frequency estimation based on the mask, the directional window and the ridge gray projection;
module M5: fingerprint image enhancement is carried out based on Gabor filtering, the direction field and the frequency field;
module M6: carrying out binarization on the enhanced image;
module M7: thinning the binarized image, and deleting edge pixels of the lines in the binarized image to obtain a line skeleton image with the width of a unit pixel;
module M8: extracting detail points of the image subjected to the thinning operation based on prior definition to obtain a feature point set M;
module M9: based on the feature point set M, deleting the pseudo feature points and correcting the feature points that do not meet the preset requirement to obtain a final feature point set M_f.
Compared with the prior art, the invention has the following beneficial effects:
1) the invention provides a complete fingerprint data labeling algorithm for the first time, so that a more robust and universal fingerprint detail data set expansion method can be obtained;
2) According to the method, the prior knowledge of the fingerprint image morphology is used for image processing, most fingerprint minutiae can be well detected according to the characteristics of original data, the complexity is reduced, and meanwhile the consistency of the data is guaranteed;
3) the invention provides a typical fingerprint detail labeling data set based on the algorithm and the open source fingerprint image set, realizes the expansion of the fingerprint detail data set, increases the capacity of the fingerprint database, can effectively meet the training requirement of a deep learning method, and solves the problem of the loss of the current high-capacity fingerprint database;
4) the software developed by the method has the advantages of high detection speed, capability of effectively reducing the workload of marking personnel, friendly interface, simple operation and the like, and is more suitable for the actual situation of fingerprint data marking.
Drawings
Other features, objects and advantages of the invention will become more apparent upon reading of the detailed description of non-limiting embodiments with reference to the following drawings:
FIG. 1 is a flowchart illustrating a method for tagging fingerprint detail data according to an embodiment of the present invention;
FIGS. 2 a-2 n are schematic diagrams of output results of various steps according to the embodiment of the present invention;
FIG. 3 is an illustration of fingerprint frequency calculation according to an embodiment of the present invention;
FIG. 4a is a graph of the training-loss variation during deep-learning training using the annotation database according to an embodiment of the present invention; FIG. 4b shows the curves of precision, recall and F1 value during deep-learning training using the annotation database; FIG. 4c is the P-R curve of the test-set results of the deep learning model trained using the annotation database in the embodiment of the present invention.
fig. 5 is a visual sample of a test result of a deep learning model obtained by training using an annotation database on a test set according to an embodiment of the present invention, fig. 5a is an input fingerprint image and a mask, fig. 5b is a fingerprint image with annotation minutiae and a visual image showing only minutiae, respectively, and fig. 5c is a comparison graph of fingerprint minutiae detected by the deep learning model added in fig. 5 b;
fig. 6 is a visualization sample of a test result of a deep learning model obtained by training using an annotation database on a test set according to an embodiment of the present invention, fig. 6a is an input fingerprint picture and a mask, fig. 6b is a fingerprint image with annotation minutiae and a visualization image showing only minutiae, respectively, and fig. 6c is a comparison graph of fingerprint minutiae detected by the deep learning model added in fig. 6 b.
Detailed Description
The present invention will be described in detail with reference to specific examples. The following examples will assist those skilled in the art in further understanding the invention, but are not intended to limit it in any way. It should be noted that various changes and modifications obvious to those skilled in the art can be made without departing from the spirit of the invention; all of these fall within the scope of the present invention.
Example 1:
the invention provides a fingerprint minutiae database generating and expanding method based on fingerprint morphology and experience knowledge, which is based on the existing image processing theory and provides a data generating algorithm suitable for marking fingerprint minutiae.
The invention provides a fingerprint minutiae database generation method for deep learning training, which sequentially uses the following operations for each input fingerprint image:
fingerprint ROI segmentation and Mask generation based on gray variance;
carrying out normalization processing based on the image intensity mean value and variance on the original fingerprint image;
fingerprint direction field estimation based on image gradient;
fingerprint frequency estimation based on a directional window and a projected sine wave;
fingerprint enhancement based on directional field, frequency field and Gabor filtering;
performing threshold segmentation based fingerprint binarization;
fingerprint refinement based on corrosion operation;
fingerprint feature point detection based on minutiae definition;
redundant point deletion based on minutiae distances;
and manually correcting the wrongly marked detail points based on the prior knowledge.
Optionally, the method for generating a fingerprint minutiae mark further includes:
ROI extraction and Mask generation based on fingerprint gray intensity variance;
the ROI extraction and Mask generation based on gray-intensity variance comprises the following steps: dividing the original fingerprint image into fingerprint blocks of size blksze × blksze, calculating the standard deviation of gray intensity in each fingerprint block, and replacing the gray value of each pixel in the block with the standard deviation I_std; a segmentation threshold thresh between the foreground and background of the fingerprint image is set; if I_std ≥ thresh, the block is foreground (ROI), otherwise background; setting the foreground value to 255 and the background value to 0 yields the Mask, denoted I_mask.
Optionally, performing normalization processing on the original fingerprint image based on the image intensity mean value and the variance includes:
firstly, the intensity of the original fingerprint image is normalized to mean 0 and variance 1, obtaining I_n01;
based on I_n01, the intensity-normalized foreground I_n01n is obtained in combination with the Mask; the normalization operation is then performed according to the following formula:

    I_norm(i, j) = ( I_n01(i, j) − Mean(I_n01n) ) / sqrt( Var(I_n01n) )

obtaining the normalized image I_norm.
Optionally, the fingerprint direction field estimation based on the gradient method includes:
First, the image gradients G_x, G_y are calculated by generating a series of Gaussian derivative filters;
then the local ridge orientation at each point is estimated by finding the principal direction of change of the image gradient, as follows:
(1) compute the gradient covariances Gxx = G_x ⊙ G_x, Gxy = G_x ⊙ G_y, Gyy = G_y ⊙ G_y;
(2) apply weighted filtering to Gxx, Gxy, Gyy using a bk × bk Gaussian filter F:

    Gxx ← F * Gxx ,  Gxy ← F * Gxy ,  Gyy ← F * Gyy

(3) calculate the image principal direction from Gxx, Gxy, Gyy via the double-angle representation:

    denom = sqrt( (Gxx − Gyy)² + 4·Gxy² )
    sin 2θ = 2·Gxy / denom ,  cos 2θ = (Gxx − Gyy) / denom

(4) owing to noise, end points, intersections, fingerprint breaks and the like, the invention applies a weighted filtering operation to sin 2θ and cos 2θ, using the same Gaussian filter as in (2), to obtain the filtered components; the local direction at pixel (i, j) is then obtained by an arctangent operation:

    θ(i, j) = π/2 + (1/2) · atan2( sin 2θ , cos 2θ )

and the orientation field θ is output.
Optionally, the frequency estimation algorithm includes:
(1) dividing the fingerprint image into block windows of size w × w;
(2) computing the direction of the block window at point (i, j), denoted θ_w(i, j), as the double-angle average of the point directions inside the window:

    θ_w(i, j) = (1/2) · atan2( Σ sin 2θ , Σ cos 2θ )

(3) according to the block-window direction θ_w, rotating the block window perpendicular to the ridge direction (simultaneously establishing a rectangular coordinate system for the window, with the direction perpendicular to the ridges as the x axis and the direction parallel to the ridges as the y axis);
(4) cropping the invalid region of the rotated image, to avoid affecting the result of the projection along the y axis;
(5) calculating the projection sum of the ridge gray values in the directional window along the y axis, denoted proj;
(6) applying a two-dimensional order-statistic filtering operation to proj with a filter window [1, windsze], taking the value of the largest pixel in the window as the value at that pixel; the filtered result is denoted mpts;
(7) querying the positions where proj == mpts, recorded as the projected peak point set midx;
(8) taking the average single-peak spacing between the maximum and minimum coordinates in midx, i.e. the inter-ridge distance, denoted wavelength; finally the frequency freq is obtained as the reciprocal of wavelength.
Optionally, the fingerprint enhancement based on the orientation field, the frequency field and Gabor filtering further includes applying different filters to different texture regions of the fingerprint: along the ridge direction, a low-pass filter removes low-frequency noise and fills small holes; perpendicular to the ridges, band-pass filtering highlights boundary information and enhances contrast. Specifically:
(1) first, a series of Gabor filters is generated according to the set direction interval and the frequencies present in the current image: a. screen the frequency map, keeping valid frequencies greater than 0, rounding to roughly two decimal places to reduce the number of distinct values to process, and collect all frequencies of the image; b. generate different filters at a fixed direction-angle increment; with deltaangle = 3°, 180°/3° = 60 filters in different directions are generated;
(2) use the two-dimensional even-symmetric Gabor filter operator:

    h(x, y; θ, f) = exp{ −(1/2) · [ x_θ² / σ_x² + y_θ² / σ_y² ] } · cos( 2π f x_θ )

where f is the ridge frequency in direction θ at pixel (x, y), and

    x_θ = x cos θ + y sin θ ,  y_θ = −x sin θ + y cos θ

are the coordinates of [x, y] rotated clockwise about the origin of the rectangular coordinate system by θ. For each image block, the most appropriate Gabor filter is selected according to the formula above (indexed by the block's direction and frequency), and the filtering operation

    I_E(i, j) = Σ_u Σ_v h(u, v; θ_ij, f_ij) · I(i − u, j − v)

is performed.
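A sketch of the even-symmetric Gabor kernel described above, with the simplifying assumption σ_x = σ_y = σ (the kernel size and σ are illustrative values, not the patent's settings):

```python
import numpy as np

def gabor_kernel(theta, freq, sigma=4.0, size=11):
    """Even-symmetric Gabor kernel tuned to ridge direction theta (radians)
    and ridge frequency freq (cycles/pixel); sigma_x = sigma_y = sigma."""
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    x_t = x * np.cos(theta) + y * np.sin(theta)   # rotated coordinates
    y_t = -x * np.sin(theta) + y * np.cos(theta)
    return (np.exp(-(x_t ** 2 + y_t ** 2) / (2 * sigma ** 2))
            * np.cos(2 * np.pi * freq * x_t))
```

Because the cosine carrier is even, the kernel is symmetric under both axis flips, which is the "even-symmetric" property the operator relies on; convolving each block with the kernel indexed by its orientation and frequency yields the enhanced image I_E.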
Optionally, the threshold-segmentation-based fingerprint binarization includes setting a threshold between ridges and valleys and then comparing each pixel of the enhanced image with it, so that the segmented foreground (ridge) value is 0 and the background (valley) value is 255.
Optionally, the fingerprint refinement based on the corrosion operation further includes removing the fingerprint edge, and then refining the binary image by using the corrosion operation; the specific operation is as follows: firstly, eliminating peripheral fingerprint frames through mask, then performing morphological erosion operation on the extracted fingerprints, wherein the operation can be completed through bwmorph () function in MATLAB, and optimizing by using iteration method until the image is not changed any more, and marking the corroded image as I thin
Optionally, the minutiae-based defined fingerprint detection further includes: for IthinThe corroded Mask is firstly used for more accurately extracting the fingerprint foreground ridge part I from the imagethinRAnd then, according to the definition of the fingerprint ridge line minutiae, the position and the angle information of the minutiae are positioned. The specific implementation scheme is as follows: traverse IthinRFor each pixel P, count is counted, the number of pixels differing from P, and if count is 5, P is an intersection, and if count is 7, P is an end point, and the set is denoted as M.
Optionally, the distance-based redundant point deletion specifically includes removing pseudo-fine nodes by deleting false detection points of parts such as burrs and ridge line end points, and the process includes: sequentially traversing each point p in MkCounting the distance between the point and the traversed point
Figure RE-GDA0003478539970000092
If the minimum distanceIf the value is less than thresh, the point is retained, and the operation can delete a large number of redundant points, but it is noted that the operation also has the possibility of removing some real detail points, so that further correction is carried out according to empirical knowledge, and the characteristic point after the operation is marked as Mpost-processing
Optionally, the manually correcting the error labeled minutiae based on the minutiae prior knowledge mainly includes: the position of the detail point is corrected by checking the corresponding relation between the refined image and the detail point, and the method mainly comprises the following steps: deleting the detail node with the wrong label, moving the translocated detail node and adding the detail node without the label to correct the labeled detail node, and the corrected detail point set is marked as M final
The fingerprint detail database annotation system for deep learning provided by the invention comprises: Module M1: inputting an original fingerprint image I, performing fingerprint region-of-interest segmentation based on gray variance, and generating a mask; Module M2: normalizing the original fingerprint image I; Module M3: estimating the fingerprint orientation based on a gradient method; Module M4: performing frequency estimation based on the mask, the directional window and the ridge gray projection; Module M5: performing fingerprint image enhancement based on Gabor filtering, the orientation field and the frequency field; Module M6: binarizing the enhanced image; Module M7: thinning the binarized image, deleting edge pixels of the ridges to obtain a ridge skeleton image one pixel wide; Module M8: extracting minutiae from the thinned image based on their prior definition to obtain a feature point set M; Module M9: based on the feature point set M, deleting the pseudo feature points and correcting the feature points that do not meet the preset requirement to obtain a final feature point set M_f.
Example 2:
example 2 is a preferred example of example 1.
As one of the problems generally faced in the prior art: owing to privacy-protection law, the minutiae annotation data sets currently usable for training deep-learning fingerprint identification models are scarce; for example, conventional data sets such as NIST SD27 (M. D. Garris and R. M. McCabe: NIST Special Database 27: Fingerprint minutiae from latent and matching tenprint images. National Institute of Standards & Technology, (2000)) have now been withdrawn. When data are lacking, fingerprint identification usually has to rely on traditional methods, whose recognition accuracy and speed are poor and whose results are not ideal. To this end, the invention proposes a fingerprint minutiae labeling method for generating a data set for deep-learning training. The invention uses the open-source fingerprint image set NIST SD4 (NIST Special Database 4, Aug. 27, 2010 [Online]. Available: https://www.nist.gov/srd/nist-special-database-4) as the annotation carrier (see FIG. 2a for two image samples from the set) and performs minutia annotation with the method. Referring to FIG. 1, a flow chart of the fingerprint minutiae data set labeling method of the present invention, the labeling method specifically includes:
S1: fingerprint ROI extraction and Mask generation based on image intensity variance;
s2: normalizing based on the mean value and the standard deviation of the fingerprint image intensity;
s3: fingerprint direction field estimation based on image gradient;
s4: fingerprint frequency estimation based on a directional window and a projected sine wave;
s5: fingerprint enhancement based on the directional field, the frequency field and a Gabor filter;
s6: performing threshold segmentation based fingerprint binarization;
s7: fingerprint image refinement based on morphological erosion operation;
s8: fingerprint minutiae detection based on minutiae definition;
s9: deleting pseudo feature points based on different detection point distances;
s10: fingerprint minutiae correction based on prior knowledge.
S1 is executed to segment the image foreground and background by computing the local block variance of the image gray levels. The segmentation algorithm rests on the observation that, owing to the alternation of ridges and valleys, the fingerprint foreground has a large gray variance, while the background changes little in gray level and therefore has a small variance; on this basis, foreground and background can be distinguished by setting a variance threshold.
Specifically, the image is divided into small blocks of size $w \times w$, and the variance of the gray pixels $I(i,j)$ in each block $k$ is computed:

$$V(k) = \frac{1}{w^2}\sum_{i=1}^{w}\sum_{j=1}^{w}\bigl(I(i,j) - M(k)\bigr)^2,$$

where $M(k)$ is the mean gray value of block $k$.
Then a suitable threshold is chosen to decide whether each block belongs to the foreground or the background, yielding the image mask, denoted $I_{Mask}$; see fig. 2b.
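The block-variance segmentation of S1 can be sketched as follows (an illustrative Python sketch, not the patent's implementation; the block size w = 16 and the variance threshold are assumed values):

```python
import numpy as np

def segment_fingerprint(img, w=16, var_thresh=100.0):
    """Block-wise variance segmentation: blocks whose gray variance
    exceeds var_thresh are treated as fingerprint foreground."""
    H, W = img.shape
    mask = np.zeros((H, W), dtype=bool)
    for i in range(0, H, w):
        for j in range(0, W, w):
            block = img[i:i + w, j:j + w].astype(np.float64)
            if block.var() >= var_thresh:
                mask[i:i + w, j:j + w] = True
    return mask

# Toy image: left half ridge-like stripes (high variance), right half flat
img = np.zeros((32, 32), dtype=np.uint8)
img[:, :16] = np.tile(np.array([0, 255], dtype=np.uint8), 16)[None, :16]
mask = segment_fingerprint(img, w=16, var_thresh=100.0)
```

In a real implementation the per-block decision would typically be smoothed (e.g., by morphological closing of the mask) to avoid isolated misclassified blocks.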
S2 is executed to perform normalization based on the mean and variance of the fingerprint image intensity. The aim is to correct images whose gray values are too high or too low, converting all images into a standard form with the same mean and variance and thereby providing a uniform specification for training. Specifically, for an original fingerprint image I of size W × H, the mean and variance of the image intensity are computed as

$$M(I) = \frac{1}{WH}\sum_{i=1}^{W}\sum_{j=1}^{H} I(i,j), \qquad \mathrm{Var}(I) = \frac{1}{WH}\sum_{i=1}^{W}\sum_{j=1}^{H}\bigl(I(i,j) - M(I)\bigr)^2.$$
The image is normalized to mean $M_0$ and variance $Var_0$ by

$$I_{norm}(i,j) = \begin{cases} M_0 + \sqrt{\dfrac{Var_0\,\bigl(I(i,j)-M(I)\bigr)^2}{\mathrm{Var}(I)}}, & I(i,j) > M(I), \\[2mm] M_0 - \sqrt{\dfrac{Var_0\,\bigl(I(i,j)-M(I)\bigr)^2}{\mathrm{Var}(I)}}, & \text{otherwise.} \end{cases}$$
In the invention, the ridge portion of the fingerprint image is further normalized so that the normalized image has mean 0 and variance 1:

$$I_{norm} \leftarrow \frac{I_{norm} - \mathrm{Mean}(I_{norm} \odot I_{Mask})}{\mathrm{Std}(I_{norm} \odot I_{Mask})}.$$
The normalized image is shown in fig. 2c.
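A minimal Python sketch of this two-stage normalization (the target values M0 = VAR0 = 100 and the omission of the mask in the second stage are simplifying assumptions):

```python
import numpy as np

def normalize(img, M0=100.0, VAR0=100.0):
    """Map the image to a prescribed mean M0 and variance VAR0 by scaling
    each pixel's deviation from the image mean."""
    img = img.astype(np.float64)
    m, v = img.mean(), img.var()
    dev = np.sqrt(VAR0 * (img - m) ** 2 / v)
    return np.where(img > m, M0 + dev, M0 - dev)

img = np.array([[0.0, 50.0], [100.0, 150.0]])
out = normalize(img)
# Second stage: zero-mean / unit-variance ridge normalization
# (the foreground mask is omitted in this toy example)
ridge = (out - out.mean()) / out.std()
```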
S3 is executed: fingerprint direction field estimation based on image gradients. The direction field describing the ridge flow is estimated from the gradient field describing the strength of gray-level change. First the image gradients are obtained by convolving the image I with 7 × 7 Gaussian derivative operators $(G_x, G_y)$:

$$I_x = I * G_x, \qquad I_y = I * G_y.$$
The direction field at each point is then computed. First the gradient covariance terms are formed:

$$I_{xx} = I_x \odot I_x, \qquad I_{xy} = I_x \odot I_y, \qquad I_{yy} = I_y \odot I_y.$$

Convolution filtering (i.e., a weighted summation, here with a 31 × 31 Gaussian operator G) is applied to $I_{xx}$, $I_{xy}$, $I_{yy}$:

$$I'_{xx} = G * I_{xx}, \qquad I'_{xy} = G * I_{xy}, \qquad I'_{yy} = G * I_{yy}.$$

The principal direction of the image is then solved via the double-angle representation:

$$M_I = \sqrt{(I'_{xx} - I'_{yy})^2 + 4\,I'^{\,2}_{xy}}, \qquad \sin 2\theta = \frac{2\,I'_{xy}}{M_I}, \qquad \cos 2\theta = \frac{I'_{xx} - I'_{yy}}{M_I}.$$

Because noise, ridge breaks, holes and singular points in the input image can make the local ridge direction estimate inaccurate, low-pass filtering is applied along the ridge to filter out noise and fill small holes (again with a 31 × 31 Gaussian operator G), smoothing the direction:

$$\overline{\sin 2\theta} = G * \sin 2\theta, \qquad \overline{\cos 2\theta} = G * \cos 2\theta.$$

Finally the local fingerprint direction is obtained through the arc tangent, and the fingerprint direction field is output:

$$\theta(i,j) = \frac{\pi}{2} + \frac{1}{2}\arctan\!\left(\frac{\overline{\sin 2\theta}}{\overline{\cos 2\theta}}\right).$$

The visualization effect is shown in fig. 2d.
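The double-angle estimate of S3 can be sketched compactly (a simplified sketch: global averaging stands in for the 31 × 31 Gaussian smoothing, and NumPy's central differences for the 7 × 7 Gaussian derivative operators):

```python
import numpy as np

def ridge_orientation(img):
    """Dominant ridge direction of a patch via the gradient double-angle
    representation, folded into [0, pi). A single global average of the
    covariance terms replaces the Gaussian smoothing for simplicity."""
    gy, gx = np.gradient(img.astype(np.float64))   # d/drow, d/dcol
    sxx = (gx * gx).mean()
    sxy = (gx * gy).mean()
    syy = (gy * gy).mean()
    # theta = pi/2 + 0.5 * atan2(2*Ixy, Ixx - Iyy)
    return (np.pi / 2 + 0.5 * np.arctan2(2 * sxy, sxx - syy)) % np.pi

x = np.arange(64)
vertical = np.sin(2 * np.pi * x / 8)[None, :] * np.ones((64, 1))  # ridges run along y
horizontal = vertical.T                                           # ridges run along x
```

For the vertical-ridge patch the estimate is π/2 (ridges parallel to the y-axis); for the horizontal one it is 0 mod π.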
S4 is executed: fingerprint frequency estimation based on a directional window and a projected sine wave, determining the frequency of ridge variation from a local window at each pixel and the wavelength of the gray-level projection perpendicular to the ridges. The local fingerprint frequency is defined as the number of ridges per unit length along the direction perpendicular to the ridges: in the foreground of the fingerprint image, ridges (low gray value) and valleys (high gray value) alternate and present a roughly sinusoidal intensity distribution, so the local frequency equals the reciprocal of that sine wave's wavelength, as shown in fig. 2e.
In particular, for each s × s local block of $I_{norm}(i,j)$ the frequency value is computed as shown in fig. 3. First a directional window of length L and width W is defined, with its y-axis parallel to the local ridge direction and its x-axis perpendicular to it; then the projection of the ridge gray values in the window onto the x-axis is computed:

$$X[k] = \frac{1}{W}\sum_{d=0}^{W-1} I_{norm}(u_d, v_d), \qquad k = 0, 1, \ldots, L-1,$$

where $(u_d, v_d)$ runs over the pixels of the k-th column of the rotated window.
The average distance between successive peaks of the projection is then taken as the wavelength, and the frequency is its reciprocal:

$$I_{freq}(i,j) = \frac{1}{\lambda}, \qquad \lambda = \frac{\mathrm{midx}_{\max} - \mathrm{midx}_{\min}}{N_{peaks} - 1},$$

where midx denotes the set of peak coordinates of the projected signal.
Specifically, in the implementation s = 36, W = 1 and L = 5. In the MATLAB implementation the related key steps and functions, in order, are: rotate each image block (e.g., with a rotation function such as imrotate()) so that a rectangular coordinate system is established with the y-axis parallel to the ridges and the x-axis perpendicular to them; project the image gray values in the directional window onto the x-axis, cropping the invalid region introduced in the block by the rotation so that it does not corrupt the projection along the y-axis; accumulate the projection sum of the gray values in the directional window along the y-axis with the sum() function; perform a two-dimensional order-statistic filtering operation with ordfilt2(), taking a filtering window [1, winsize] and the value of the maximum pixel in the window as the output, so that the positions where the filtered value equals the original value are the peak points of the projected pixel intensity; finally, compute the average distance between peaks as the wavelength at that point, whose reciprocal is the local frequency. The resulting fingerprint frequency field $I_{freq}$ is shown in fig. 2f.
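The peak-picking frequency estimate of S4 admits a short sketch (the max-filter peak detection mirrors the ordfilt2() step; the winsize value and the synthetic sinusoidal signature are illustrative assumptions):

```python
import numpy as np

def ridge_frequency(proj, winsize=5):
    """Estimate local ridge frequency from the x-signature `proj` (gray values
    projected perpendicular to the ridges): locate peaks where a sliding
    max filter equals the original signal, then take the reciprocal of the
    mean peak-to-peak distance."""
    n, half = len(proj), winsize // 2
    peaks = [i for i in range(n)
             if proj[i] == max(proj[max(0, i - half):i + half + 1])]
    if len(peaks) < 2:
        return 0.0          # no reliable estimate in this block
    wavelength = (peaks[-1] - peaks[0]) / (len(peaks) - 1)
    return 1.0 / wavelength

# A synthetic sinusoidal signature with period 8 pixels -> frequency 1/8
x = np.arange(25)
proj = np.cos(2 * np.pi * x / 8)
freq = ridge_frequency(proj, winsize=5)
```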
S5 is executed. Having obtained the normalized image $I_{norm}$ through S2, and the image direction field and frequency field $I_{freq}$ through S3 and S4, an enhancement operation is performed on $I_{norm}$ based on a Gabor filter. Specifically, the Gabor filter takes the local fingerprint direction and frequency as input parameters and selects the corresponding filter operator according to the local texture characteristics, thereby enhancing the fingerprint ridges. First the Gabor filter operator is defined in terms of direction and frequency:
$$h(x, y; \theta, f) = \exp\!\left\{-\frac{1}{2}\left[\frac{x_\theta^2}{\delta_x^2} + \frac{y_\theta^2}{\delta_y^2}\right]\right\}\cos(2\pi f x_\theta),$$

where f is the frequency of filtering at pixel (x, y) in the direction $\theta$, and $(x_\theta, y_\theta)$ is obtained by rotating [x, y] clockwise about the origin of the rectangular coordinate system by $\theta$:

$$x_\theta = x\cos\theta + y\sin\theta, \qquad y_\theta = -x\sin\theta + y\cos\theta.$$

In the implementation process, the two-dimensional Gabor filter operator is split into one-dimensional filtering in two different directions, along the ridge direction and perpendicular to it, i.e. into two orthogonal parts:

$$h_V(y_\theta) = \exp\!\left(-\frac{y_\theta^2}{2\delta_y^2}\right), \qquad h_H(x_\theta) = \exp\!\left(-\frac{x_\theta^2}{2\delta_x^2}\right)\cos(2\pi f x_\theta).$$
To instantiate this further, $\delta_x$, $\delta_y$ and f are assigned values to obtain simplified templates in the two directions, and a series of Gabor filters is generated according to the angle increment. For each block, the filter templates $h_V$, $h_H$ closest to the directions parallel and perpendicular to the ridges are selected, and filtering is applied along the ridge direction (removing low-frequency noise and filling holes) and along the orthogonal direction (improving ridge/valley contrast), yielding the enhanced fingerprint image $I_{enh}$. The fingerprint enhancement effect of this step is shown in fig. 2g.
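A sketch of the two-dimensional Gabor operator h(x, y; θ, f) defined above (the kernel size and the δ values are illustrative assumptions; the 1-D decomposition used in the implementation is omitted here):

```python
import numpy as np

def gabor_kernel(theta, freq, sigma_x=4.0, sigma_y=4.0, size=11):
    """Even-symmetric Gabor operator: a Gaussian envelope multiplied by a
    cosine wave of the given frequency along the rotated x-axis."""
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    # rotate coordinates clockwise by theta
    xt = x * np.cos(theta) + y * np.sin(theta)
    yt = -x * np.sin(theta) + y * np.cos(theta)
    return (np.exp(-0.5 * (xt ** 2 / sigma_x ** 2 + yt ** 2 / sigma_y ** 2))
            * np.cos(2 * np.pi * freq * xt))

k = gabor_kernel(theta=0.0, freq=0.1)
```

A filter bank would be built by sampling θ at the chosen angle increment and freq at the rounded local frequencies, then indexing the nearest template per block.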
S6 is executed: fingerprint binarization based on threshold segmentation. After the enhanced fingerprint image $I_{enh}$ has been obtained through Gabor filtering, the ridge and background gray values are unified, with ridge pixels set to 0 and background pixels set to 255. Specifically, a segmentation threshold is first set (the threshold can be the mean gray value or 0); pixels with gray value greater than the threshold are set to 255 and the rest to 0, realizing the fingerprint image binarization $I_{binary}$. The binarized fingerprint image is shown in fig. 2h.
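The thresholding rule of S6 reduces to a one-line operation; a toy sketch (the 2 × 2 image and the mean threshold are illustrative):

```python
import numpy as np

# Threshold binarization as in S6: pixels above the threshold (here the
# gray-level mean of the enhanced image) become background 255, while the
# darker ridge pixels become 0.
enh = np.array([[10, 200], [30, 240]], dtype=np.float64)
binary = np.where(enh > enh.mean(), 255, 0).astype(np.uint8)
```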
S7 is executed: fingerprint image thinning based on a morphological erosion operation. First the fingerprint image edge is selected and removed, to avoid interference with the subsequent extraction of minutiae (endpoints); the result after this processing is shown in fig. 2i. The binarized fingerprint image $I_{binary}$ is then eroded with the thinning operator, iterating until the fingerprint shape no longer changes, which yields the thinned image $I_{thin}$; a morphological operation is also applied to the mask (to assist the next step, S8, in extracting fingerprint minutiae accurately from $I_{thin}$), with the result shown in fig. 2j. The thinned fingerprint image is shown in fig. 2k.
S8 is executed: fingerprint minutiae detection based on the minutiae definition, detecting minutiae from $I_{thin}$. Specifically, for each pixel of $I_{thin}$, the gray value types in its 8-neighborhood are examined and the number of differing adjacent-pixel pairs is counted: a count of 5 indicates that the pixel is a crossing point, and a count of 7 indicates that it is an endpoint. The detected minutiae set is denoted M; the thinned fingerprint image with the detected minutiae superimposed is shown in fig. 2l.
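The 8-neighborhood minutiae test of S8 can be sketched with the standard crossing-number formulation (used here as an illustrative stand-in for the patent's own pair-counting convention of 7 for an endpoint and 5 for a crossing):

```python
import numpy as np

def crossing_number(skel, i, j):
    """Half the number of 0/1 transitions around the 8-neighbour cycle of a
    skeleton pixel (ridge = 1): CN == 1 marks a ridge ending, CN == 3 a
    bifurcation."""
    ring = [skel[i-1, j-1], skel[i-1, j], skel[i-1, j+1], skel[i, j+1],
            skel[i+1, j+1], skel[i+1, j], skel[i+1, j-1], skel[i, j-1]]
    return sum(abs(int(ring[k]) - int(ring[(k + 1) % 8])) for k in range(8)) // 2

def detect_minutiae(skel):
    """Scan interior pixels of the thinned image and classify minutiae."""
    minutiae = []
    for i in range(1, skel.shape[0] - 1):
        for j in range(1, skel.shape[1] - 1):
            if skel[i, j] == 1:
                cn = crossing_number(skel, i, j)
                if cn == 1:
                    minutiae.append((i, j, 'ending'))
                elif cn == 3:
                    minutiae.append((i, j, 'bifurcation'))
    return minutiae

# A single horizontal ridge segment: endpoints at its two ends
skel = np.zeros((5, 7), dtype=int)
skel[2, 1:6] = 1
```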
S9 is executed: pseudo feature points are deleted based on the distances between detected points, removing redundant points and pseudo minutiae such as spurs and broken-ridge endpoints. Each point $p_k$ in M is traversed in turn and its minimum distance to the other detected points is computed:

$$d_k = \min_{j \neq k} \lVert p_k - p_j \rVert.$$

If this minimum distance is smaller than a threshold thresh, the point is deleted; otherwise it is retained. This operation removes a large number of redundant points, but note that it may also remove some genuine feature points, so a further correction based on prior knowledge follows. The feature point set after this operation is denoted $M_{post\text{-}processing}$; the image with redundant minutiae deleted is shown in fig. 2m.
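The distance filter of S9 can be sketched as follows (dropping both members of a close pair is a simplifying assumption; thresh = 8 pixels is an illustrative value):

```python
import numpy as np

def remove_close_minutiae(points, thresh=8.0):
    """Distance-based pseudo-minutia removal: a minutia whose nearest other
    minutia lies closer than `thresh` pixels is treated as a spur or
    broken-ridge artefact and deleted."""
    pts = np.asarray(points, dtype=np.float64)
    kept = []
    for k in range(len(pts)):
        d = np.linalg.norm(pts - pts[k], axis=1)
        d[k] = np.inf                      # ignore the point itself
        if d.min() >= thresh:
            kept.append(points[k])
    return kept

minutiae = [(10, 10), (12, 11), (40, 40)]
filtered = remove_close_minutiae(minutiae, thresh=8.0)
```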
S10 is executed. After the redundancy-deleted minutiae set $M_{post\text{-}processing}$ has been obtained through S9, minutiae correction based on prior knowledge is performed, fixing annotation errors caused by noisy background and by contaminated regions of the collected fingerprint image. Specifically, the annotator inspects the wrongly annotated parts of the whole image and corrects them, for example by deleting wrongly annotated feature points, moving feature points that are displaced from their correct positions, and adding feature points that were not annotated. These operations can be performed manually by the annotator through a MATLAB GUI interface; the final output is shown in fig. 2n.
Finally, the manually corrected fingerprint minutiae labels are saved. To meet the practical requirements of subsequent deep learning training, the same labeling format as the NIST SD27 data set is adopted: the annotation file is an .mnt file whose first line is the file name, whose second line records the number of minutiae N, the image width W and the image height H, followed by one line per minutia giving its x coordinate, y coordinate and orientation o.
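A sketch of saving labels in the .mnt layout described above (the exact field separators and file naming here are assumptions based on the description, not the official NIST SD27 specification):

```python
import os
import tempfile

def write_mnt(path, image_name, width, height, minutiae):
    """Write minutiae in an NIST-SD27-style .mnt layout: line 1 the file
    name, line 2 'N W H', then one 'x y o' line per minutia (o = orientation)."""
    with open(path, 'w') as f:
        f.write(f"{image_name}\n")
        f.write(f"{len(minutiae)} {width} {height}\n")
        for x, y, o in minutiae:
            f.write(f"{x} {y} {o:.4f}\n")

path = os.path.join(tempfile.gettempdir(), "f0001.mnt")
write_mnt(path, "f0001", 512, 512, [(120, 200, 1.5708), (87, 310, 0.7854)])
```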
Meanwhile, the method is used to annotate the open-source fingerprint image set NIST SD4, producing a NIST SD4 fingerprint minutiae database. The invention then trains the FingerNet network on this annotated database; following the parameters suggested in the open-source project, the whole data set is split 3:1 into training and test sets, and the number of training rounds is set to 20 epochs. The change of the loss index during training is shown in fig. 4a, and the precision, recall and F1-score of the test set during training are shown in fig. 4b. Finally, the model obtained from training is tested over the minutiae detection thresholds thresh ∈ {0.00001, 0.01, 0.02, ..., 0.98, 0.99, 0.99999} to obtain precision and recall respectively, from which the P-R curve shown in fig. 4c is drawn. From figs. 4a and 4b it can be seen that the training loss decreases with the number of training rounds while precision, recall and F1-score keep increasing, indicating that the model trained on this data set gradually converges. When testing on the test set, a larger thresh yields fewer detections, higher precision and lower recall, while a smaller thresh yields more detections, lower precision and higher recall. To balance precision and recall simultaneously, the F1-score is computed to select the balance point and the thresh with the highest value is chosen; the test result is optimal at thresh = 0.75, with precision 0.8891 and recall 0.8915. Figs. 5a, 5b, 5c, 6a, 6b and 6c show two sets of corresponding qualitative and quantitative test results, showing that the FingerNet model trained on this data set can detect fingerprint minutiae accurately and completely, and that the quantitative and qualitative results of the two sample sets are substantially consistent with the overall quantitative result. These experiments show that the fingerprint minutiae labeling method provided by the invention can generate a sufficient and effective fingerprint minutiae data set for deep learning training.
Those skilled in the art will appreciate that, in addition to implementing the systems, apparatus, and various modules thereof provided by the present invention in purely computer readable program code, the same procedures can be implemented entirely by logically programming method steps such that the systems, apparatus, and various modules thereof are provided in the form of logic gates, switches, application specific integrated circuits, programmable logic controllers, embedded microcontrollers and the like. Therefore, the system, the device and the modules thereof provided by the present invention can be considered as a hardware component, and the modules included in the system, the device and the modules thereof for implementing various programs can also be considered as structures in the hardware component; modules for performing various functions may also be considered to be both software programs for performing the methods and structures within hardware components.
The foregoing description of specific embodiments of the present invention has been presented. It is to be understood that the present invention is not limited to the specific embodiments described above, and that various changes or modifications may be made by one skilled in the art within the scope of the appended claims without departing from the spirit of the invention. The embodiments and features of the embodiments of the present application may be combined with each other arbitrarily without conflict.

Claims (10)

1. A fingerprint detail database labeling method for deep learning is characterized by comprising the following steps:
step 1: inputting an original fingerprint image I, performing fingerprint interest region segmentation based on gray variance, and generating a mask;
step 2: normalizing the original fingerprint image I;
step 3: estimating the fingerprint direction based on a gradient method;
step 4: performing frequency estimation based on the mask, the directional window and the ridge gray projection;
step 5: performing fingerprint image enhancement based on Gabor filtering, the direction field and the frequency field;
step 6: carrying out binarization on the enhanced image;
step 7: thinning the binarized image, and deleting edge pixels of the lines in the binarized image to obtain a line skeleton image with the width of a unit pixel;
step 8: extracting detail points of the image after the thinning operation based on prior definition to obtain a feature point set M;
step 9: based on the feature point set M, deleting the pseudo feature points and marking the feature points which do not meet the preset requirement, obtaining the final feature point set $M_f$.
2. The method for labeling the fingerprint detail database for deep learning according to claim 1, wherein the step 1 comprises: dividing an input original fingerprint image I into image blocks of a preset size, computing the standard deviation of each image block, and replacing the values of all pixels in the block with that standard deviation value to obtain $I_{std}$; setting a threshold sthresh on the standard deviation, where $I_{std} \geq$ sthresh indicates a fingerprint foreground area and otherwise a fingerprint background area, and generating a mask $I_M$.
3. The method for labeling the fingerprint detail database for deep learning according to claim 1, wherein the step 2 comprises: first calculating the mean Mean(I) and the variance Var(I) of the image I, and then applying the transformation

$$I_{N1} = \frac{I - \mathrm{Mean}(I)}{\sqrt{\mathrm{Var}(I)}}$$

to normalize the image pixel values to mean 0 and variance 1, denoted $I_{N1}$; subsequently, combined with the mask $I_M$, calculating the mean and variance of the fingerprint foreground from $I_{N1} \odot I_M$ and normalizing the fingerprint foreground image to obtain $I_{N2}$.
4. The method for labeling the fingerprint detail database for deep learning according to claim 1, wherein the step 3 comprises:
firstly, dividing the fingerprint image I into squares of size w × w, establishing a predefined filter F and obtaining its gradient components $F_x, F_y$ in the horizontal and vertical directions; applying $F_x, F_y$ to the fingerprint image I to obtain gradient images on the x and y axes, and evaluating the direction of each point on the ridge line by finding the principal direction of gradient change;
convolving the gradient covariance of each ridge point with a low-pass filter, then solving the gradient double angle and converting it into a continuous vector field of sine and cosine components; smoothing these sine and cosine values with low-pass filtering; and finally obtaining the fingerprint ridge direction through the arc tangent.
5. The method for labeling the fingerprint detail database for deep learning according to claim 1, wherein the step 4 comprises:
the method comprises the steps of carrying out blocking processing on an image, dividing the image into w multiplied by w image blocks, carrying out rotation operation on each image block, rotating the blocks to enable the x axis of a coordinate to be perpendicular to the ridge line direction, enabling the y axis to be parallel to the ridge line direction, and establishing a coordinate system based on the rotation;
cutting the region of the rotated image block which does not meet the preset requirement, projecting the gray values of the image in the direction window onto the x axis, counting the projection sum proj of the pixel gray values in the direction window on the x axis, performing a two-dimensional order-statistic filtering operation on proj with a filtering window [1, winsize], taking the value of the largest pixel in the window as the result for that position, and recording the filtered result as mpts;
and querying the positions where proj = mpts, recorded as the coordinate set midx of the projected peak points; taking the average single-peak distance between the maximum and minimum coordinates in midx as the average inter-ridge distance, denoted wavelength; and finally computing the frequency freq as the reciprocal of wavelength.
6. The method for labeling fingerprint minutiae database for deep learning according to claim 1, wherein the step 5 comprises:
Generating a preset number of Gabor filters, and applying the Gabor filters with corresponding directions and frequencies to different image blocks for filtering enhancement, wherein the specific operations are as follows:
screening and counting the frequency map, selecting frequencies larger than zero, and rounding the frequencies;
when the filters are generated, the angle increment AngleInc is fixed and the number of filters is AngleNum = 180°/AngleInc; interpolation processing is then performed to obtain the corresponding Gabor filter templates, and each image block is filtered with the filter whose indexed angle and frequency correspond to it, obtaining the enhanced image $I_E$.
7. The method for labeling fingerprint minutiae database for deep learning according to claim 1, wherein the step 6 comprises: converting images with different gray levels into binary images, increasing the contrast between the ridge lines of the images and the background, neglecting the brightness of the ridge lines, unifying the gray values of the ridge lines, setting the gray value of the pixels of the ridge lines as 0, and setting the value of the pixels of the background as 255;
the step 7 comprises the following steps: the binary image is subjected to preset n times of iterative morphological processing until the image is not changed any more, and the negative influence of the false end point generated at the edge of the image is eliminated by performing truncation operation on the mask and acting on the thinned image.
8. The method for labeling fingerprint minutiae database for deep learning according to claim 1, wherein the step 8 comprises:
according to the thinned fingerprint image, counting for each pixel in the image the number of differing pairs among every two adjacent pixels of its eight-neighborhood; a value of 7 indicates that the point is an endpoint, and a value of 5 indicates that the pixel is a crossing point;
an erosion operation is applied to the mask to reduce false feature points at the edges and in the background.
9. The method for labeling fingerprint minutiae database for deep learning according to claim 1, wherein the step 9 comprises:
based on the feature point set M, deleting pseudo feature points by using the distances between different feature points: counting the distance between two feature points $p_1, p_2$ and deleting redundancy through a preset threshold thresh to obtain the filtered feature point set $M_{p\text{-}s}$;
manually editing the filtered feature point set $M_{p\text{-}s}$: correcting feature points whose labels do not meet the preset requirement by deleting existing abnormal points, moving displaced points, and adding unmarked feature points, obtaining the final feature point set $M_f$.
10. A fingerprint minutiae database annotation system for deep learning, which is characterized in that the fingerprint minutiae database annotation method for deep learning of any one of claims 1 to 9 is adopted, and comprises the following steps:
Module M1: inputting an original fingerprint image I, performing fingerprint interest region segmentation based on gray variance, and generating a mask;
module M2: normalizing the original fingerprint image I;
module M3: estimating the fingerprint direction based on a gradient method;
module M4: performing frequency estimation based on the mask, the directional window and the ridge gray projection;
module M5: fingerprint image enhancement is carried out based on Gabor filtering, the direction field and the frequency field;
module M6: carrying out binarization on the enhanced image;
module M7: thinning the binarized image, and deleting edge pixels of the lines in the binarized image to obtain a line skeleton image with the width of a unit pixel;
module M8: extracting detail points of the image after the thinning operation based on prior definition to obtain a feature point set M;
module M9: based on the feature point set M, deleting the pseudo feature points and marking the feature points which do not meet the preset requirement, and obtaining the final feature point set $M_f$.
CN202111386111.7A 2021-11-22 2021-11-22 Fingerprint detail database labeling method and system for deep learning Pending CN114677552A (en)


Publications (1)

Publication Number Publication Date
CN114677552A true CN114677552A (en) 2022-06-28



Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116823679A (en) * 2023-08-30 2023-09-29 山东龙腾控股有限公司 Full-automatic fingerprint lock fingerprint image enhancement method based on artificial intelligence
CN116823679B (en) * 2023-08-30 2023-12-05 山东龙腾控股有限公司 Full-automatic fingerprint lock fingerprint image enhancement method based on artificial intelligence


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination