WO2019021177A1 - Method of and system for matching fingerprint images - Google Patents

Method of and system for matching fingerprint images

Info

Publication number
WO2019021177A1
Authority
WO
WIPO (PCT)
Prior art keywords
fingerprint
minutiae
navigation
data relating
foreground
Prior art date
Application number
PCT/IB2018/055500
Other languages
French (fr)
Inventor
Ishmael Sbusiso MSIZA
Original Assignee
Mmapro It Solutions (Pty) Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Mmapro It Solutions (Pty) Ltd filed Critical Mmapro It Solutions (Pty) Ltd
Publication of WO2019021177A1
Priority to ZA2020/01146A (ZA202001146B)

Links

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 - Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10 - Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/12 - Fingerprints or palmprints
    • G06V40/1365 - Matching; Classification
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 - Pattern recognition
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 - Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/50 - Maintenance of biometric data or enrolment thereof

Definitions

  • The method further includes calculating the vertical distance between the point y_e (i.e. the fifteenth value) and the foreground centroid y-coordinate y_fc (i.e. the fourteenth value), and also calculating the horizontal distance between the point x_g and the point x_e.
  • The method then includes calculating an angle a as follows:
  • tan(a) = (horizontal distance between point x_g and point x_e) / (vertical distance between point y_e and point y_fc)
  • The angle a, defined between the longitudinal axis Y and the second foreground axis Fy, indicates the extent to which the point G of the foreground 38 (and essentially the foreground 38 as a whole) is skewed with respect to the second foreground axis Fy (as shown in FIG. 8); a code sketch of this angle estimation and the subsequent correction follows after this list.
  • The foreground centroid shown in FIG. 9, with coordinates (x_fc, y_fc), which is determined by the centroid location module (not shown) as described above, is preferably defined as the accurate geometric centre of the foreground 38 of the first fingerprint 30.
  • Alternatively, this fingerprint centroid may be located by finding the geometric centre of the second example image 44, which corresponds to the arithmetic mean or average position of all the points or pixels in the foreground 38 of the image 44, with respect to a reference point of the schematic representation 46 of the image 44, as described previously.
  • The first fingerprint 30 is adjusted in any one of its orientation and location by utilizing the data relating to the at least one singular point (i.e. the foreground centroid) and a rotation estimation module (not shown, but described in detail in the corresponding PCT Application that claims priority from South African Patent Application No. 2017/05008).
  • The rotation estimation module (not shown) is configured to estimate the orientation angle a between, for example, the longitudinal axis Y of the foreground, as described above, and the second foreground axis Fy.
  • The output of the orientation estimation module is a data structure comprising the angle a and a rotation direction, such as clockwise or anti-clockwise, indicating the direction in which the captured image 44 (i.e. the combination of the foreground and background thereof) must be rotated by a from the longitudinal axis Y towards the second foreground axis Fy.
  • This data structure may be stored into a database (not shown).
  • If the foreground 38 is skewed in the anti-clockwise direction, the orientation module would rotate the image in the clockwise direction.
  • Conversely, if the foreground 38 is skewed in the clockwise direction, the orientation module (not shown) would rotate the image 44 in the anti-clockwise direction.
  • The orientation module, described in detail in the corresponding PCT Application that claims priority from South African Patent Application No. 2017/05008, is configured to generate a corrected image 52 (as shown in FIG. 10) by pivoting or rotating the image 44 by the estimated orientation angle a.
  • In this example, the corrected image 52 is generated by pivoting or rotating the captured image 44 of the skewed first fingerprint 30 by the angle a in the clockwise direction, i.e. in the direction of the second foreground axis Fy.
  • The resultant/corrected image 52 will have its longitudinal axis Y taking the place of the second foreground axis Fy (i.e. being substantially upright), and the second foreground axis Fy will accordingly be rotated in the clockwise direction by the estimated angle a, as shown in FIG. 10.
  • In FIG. 10, the foreground centroid is denoted by reference numeral 50.
  • The corrected image 52 is then passed to a translation module (not shown) at 106, which is arranged to, for example, move the centroid 50 of the foreground 38, along with the foreground 38 of the corrected image 52, until the location of the centroid 50 corresponds with the centre, i.e. the (x_b, y_b) coordinates, of the captured image 44 (this translation step is also illustrated in the sketch following this list).
  • FIG. 5 illustrates a flow diagram of a second part of the method 100 of comparing or matching the first fingerprint image to a second fingerprint image, as described herein.
  • In order to verify the subject associated with the first fingerprint 30, the method 100 needs to match the registered template 110, hereinafter referred to as a pre-registered, minutiae query template 110, to a pre-registered, minutiae reference template 112 of a second fingerprint image (i.e. reference image).
  • The data relating to the pre-registered, minutiae reference template 112 would have been pre-stored on a database. It should be appreciated that before the pre-registered, minutiae reference template 112 was stored onto the database, its corresponding captured image (i.e. reference image) would have been subjected to the same steps as explained above with reference to the first fingerprint.
  • The pre-registered, minutiae query template 110 and pre-registered, minutiae reference template 112 thus serve as inputs 114 to a fingerprint matching module 116.
  • The module 116 accesses the database and collects the data relating to the pre-registered, minutiae reference template 112, also collects the data relating to the pre-registered, minutiae query template 110, and compares the two templates 110, 112 for minutiae correspondences.
  • The output 120 of the matching module 116 is thus data relating to a match score 122 based on the number of fingerprint minutiae of the adjusted first fingerprint image that match the fingerprint minutiae of the second, reference fingerprint image.
  • The matching module 116 compares each minutia entry of the pre-registered, minutiae query template 110 against each minutia entry of the pre-registered, minutiae reference template 112.
  • The newly proposed minutiae matching module 116 is made up of only one component, being minutiae correspondence 118.
  • The registration of templates is done outside the fingerprint minutiae matching module 116, a decision that significantly simplifies the matching exercise that needs to be performed by the matching module 116, reducing it to only finding minutiae correspondences 118.
  • Minutiae-based matching essentially consists of finding the alignment between the query template 110 and a reference template 112 that results in the maximum number of minutiae pairings and/or correspondences. This, therefore, implies that template registration is mandatory in order to maximize the number of matching minutiae.
  • This invention proposes that template registration should be done through fingerprint registration, on the basis of the fingerprint global structures (i.e. any one of a fingerprint core, fingerprint delta, and the centroid of a fingerprint foreground). This implies that a fingerprint should first be optimally registered (i.e. its orientation and placement must be adjusted) before extracting a minutiae template. Matching high-quality fingerprints with small intraclass variations is not difficult, and every reasonable algorithm can do it.
  • In known, local registration approaches, the ridges associated with the minutiae are used to estimate the alignment parameters. This implies that the size of the templates has to be large, otherwise the alignment will not be accurate.
  • A large minutiae template takes up a lot of storage space and, upon finding minutiae correspondences, uses up a great deal of computing resources.
  • Furthermore, a large template does not guarantee accurate registration, because of the possibility of the presence of spurious minutiae points.
  • A global process, on the other hand, is free of spurious features and is not as complex as any process based on local structures. This immediately implies that global registration has more advantages than local registration.
  • Global features of a fingerprint are those formed by the high-level ridge pattern, while the local features are those formed by the lower-level ridge pattern.
  • Fingerprint singular points are examples of global features, while fingerprint minutiae are examples of local features. From the captured fingerprint, the first step is to locate a global reference point to be used for rotation and translation correction, as described above.
  • FIG. 6 shows a high-level block diagram illustrating a system 100 for comparing or matching a first fingerprint image to a second fingerprint image.
  • The system 100 preferably comprises a fingerprint reader 124, the first database 128, and a processor 130 which is connected to the first database 128.
  • A memory (not shown) is connected to the processor 130, is configured to utilise the data relating to the fingerprint image, and has instructions, i.e. a fingerprint matching algorithm 116, stored thereon which are configured to be executed by the processor 130.
  • The minutiae reference template 112, which is described above, is arranged to be stored onto a second database 132.
  • A backend 134 for an operator may be utilised, comprising the processor 130 and the second database 132.
  • Optionally, the first database 128 and the second database 132 form a single database.
  • The invention as described hereinabove provides an approach that uses global structures, such as the singular points, in particular the centroid of a foreground of the captured image of a fingerprint, for registration (i.e. orientation and placement), and minutiae global structures for matching.
  • This approach generates a pre-registered, minutiae template for both storage and comparison, and trivializes the matching exercise because it reduces it to just finding minutiae correspondences.
  • Furthermore, it allows for the creation of a compact minutiae template, allows for accurate and seamless registration, and is free from spurious (often local) features.
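The following is a minimal, illustrative sketch of the registration steps described in the bullets above: estimating the skew angle a from the theoretical upper edge E and the actual upper edge G (via the tan(a) relation given earlier), then rotating the captured image about the foreground centroid and translating the centroid onto the background centre. The function names, the OpenCV-based implementation, and the image conventions are assumptions for illustration, not the patent's own code.

```python
import math
import cv2
import numpy as np

def skew_angle_and_direction(x_g, x_e, y_e, y_fc):
    """Angle a (degrees) between the longitudinal axis Y and the vertical
    axis Fy, from tan(a) = horizontal distance / vertical distance.
    G lying to the left of E indicates an anti-clockwise skew (FIG. 8)."""
    a = math.degrees(math.atan2(abs(x_g - x_e), abs(y_e - y_fc)))
    return a, ("anti-clockwise" if x_g < x_e else "clockwise")

def pre_register(image, angle_deg, skew_direction, centroid):
    """Rotate the image opposite to the skew, then translate it so the
    foreground centroid lands on the image centre (x_b, y_b)."""
    h, w = image.shape[:2]
    x_b, y_b = w // 2, h // 2
    # An anti-clockwise skew is corrected by a clockwise rotation; OpenCV
    # treats positive angles as anti-clockwise, hence the sign flip.
    signed = -angle_deg if skew_direction == "anti-clockwise" else angle_deg
    rot = cv2.getRotationMatrix2D((float(centroid[0]), float(centroid[1])),
                                  signed, 1.0)
    rotated = cv2.warpAffine(image, rot, (w, h))
    # Rotating about the centroid leaves it fixed, so a plain shift now
    # moves the centroid onto the background centre.
    shift = np.float32([[1, 0, x_b - centroid[0]], [0, 1, y_b - centroid[1]]])
    return cv2.warpAffine(rotated, shift, (w, h))
```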

Abstract

The invention relates to a method of matching fingerprint images, the method comprising the steps of: providing data relating to a pre-registered, minutiae query template of a first, query fingerprint image; providing data relating to a pre-registered, minutiae reference template of a second, reference fingerprint image; comparing the data relating to the pre-registered, minutiae query template and the pre-registered, minutiae reference template; and generating data relating to a match score based on the number of fingerprint minutiae of the pre-registered, minutiae query template that matches the fingerprint minutiae of the pre-registered, minutiae reference template of the second, reference fingerprint image.

Description

METHOD OF AND SYSTEM FOR MATCHING FINGERPRINT IMAGES
FIELD OF INVENTION
THIS invention is in the field of fingerprint analysis. More particularly, this invention relates to a method of and system for matching fingerprints according to their respective minutiae features.
BACKGROUND OF INVENTION
Fingerprint matching is the process whereby the degree of similarity between two fingerprint images is determined. Fingerprint matching techniques can, loosely, be ordered into three categories: (i) correlation-based matching, (ii) ridge feature-based matching, and (iii) minutiae-based matching.
In correlation-based matching, two fingerprint images are superimposed and the corresponding pixels are examined for similarities. A fingerprint image, like any other image, is a matrix of intensity values in which the image pixels correspond to the cells of a matrix. This pixel examination for similarities could be done for various displacements and rotations, or could be done only once, after optimal alignment of the two fingerprint images.
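As an illustration of the correlation-based category, the pixel-by-pixel comparison of two already-aligned images can be expressed as a normalised cross-correlation. This is a minimal sketch under assumed conventions (equally sized grayscale arrays; the function name is invented), not a production matcher.

```python
import numpy as np

def correlation_score(img_a: np.ndarray, img_b: np.ndarray) -> float:
    """Normalised cross-correlation of two equally sized, optimally aligned
    grayscale fingerprint images; a score near 1.0 indicates high similarity."""
    a = img_a.astype(float) - img_a.mean()
    b = img_b.astype(float) - img_b.mean()
    return float((a * b).sum() / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))
```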
A human fingerprint is composed of a pattern of ridges and furrows, often referred to as a ridge pattern. In ridge feature-based matching, fingerprints are compared in terms of features extracted from this ridge pattern. In minutiae-based matching, local fingerprint features, known as minutiae, are extracted from the two fingerprints. Fingerprint minutiae can typically be classified into two categories, namely, (i) ridge endings, and (ii) ridge bifurcations. A ridge ending is defined as the point where a single fingerprint ridge ends abruptly, while a ridge bifurcation is defined as the point where a single fingerprint ridge splits into two ridges.
Fingerprint minutiae could be extracted by, for example, dividing a thinned image of the fingerprint into non-overlapping 9x9 pixel blocks, and examining the pattern of neighbouring pixels from eight possible directions. The detected minutiae are then stored in a data structure known as a minutiae template. A fingerprint minutiae template typically includes data relating to the position (expressed as the x- and y- coordinate of each minutia in the x-y plane), angle (denotes the tangential direction of each minutia, measured against a reference line), and the type (typically either a ridge ending or ridge bifurcation).
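One widely used realisation of the eight-neighbour test mentioned above is the crossing-number method, sketched below. It is offered as an illustration of the general technique, not necessarily the exact procedure contemplated by the patent.

```python
def crossing_number(neighbours: list[int]) -> int:
    """Crossing number of a pixel on a thinned (one-pixel-wide) ridge.

    `neighbours` holds the 8 surrounding pixel values (0 or 1) in clockwise
    order. A crossing number of 1 indicates a ridge ending, and 3 indicates
    a ridge bifurcation.
    """
    return sum(abs(neighbours[i] - neighbours[(i + 1) % 8])
               for i in range(8)) // 2
```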
FIG. 1 shows an example minutiae template of an example fingerprint that has a total of 16 minutiae. Ten of these minutiae are ridge endings, and six of them are ridge bifurcations. Each row in the minutiae template represents a single minutia. The first and the second columns represent the position descriptor, the third column represents the angle descriptor, and the fourth column represents the type descriptor. FIG. 2 shows the detected minutiae of FIG. 1, marked on a thinned image region of interest (ROI) of the example fingerprint.
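A template with the four descriptors just described can be held in a simple record structure. The sketch below is illustrative; the field names and the two sample rows are invented, not taken from FIG. 1.

```python
from dataclasses import dataclass
from enum import Enum

class MinutiaType(Enum):
    RIDGE_ENDING = 0
    RIDGE_BIFURCATION = 1

@dataclass
class Minutia:
    x: int              # first column: x-coordinate in the x-y plane
    y: int              # second column: y-coordinate in the x-y plane
    angle: float        # third column: tangential direction against a reference line
    kind: MinutiaType   # fourth column: ridge ending or ridge bifurcation

# A minutiae template is simply a list of such rows, one per detected minutia.
template: list[Minutia] = [
    Minutia(x=112, y=87, angle=45.0, kind=MinutiaType.RIDGE_ENDING),
    Minutia(x=64, y=140, angle=180.0, kind=MinutiaType.RIDGE_BIFURCATION),
]
```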
When authenticating subjects through the use of their fingerprints, they inevitably get exposed to one of two types of transactions: (i) fingerprint verification, and (ii) fingerprint identification. With fingerprint verification, a subject first claims a particular identity by, for example, entering a unique PIN or presenting a personalized card. The recognition system then extracts the fingerprint template associated with that PIN or card, and compares it to the template generated from the fingerprint presented by the subject. It is a 1:1 comparison.
It is, however, not easy to detect the correspondences between minutiae pairs, even for a 1:1 comparison, as two templates extracted from the same finger could have large variations - known as intraclass variations. These variations could be attributed to any one or more of fingerprint rotation, fingerprint translation, skin conditions (cuts, wetness, dryness, etc.), the amount of pressure applied on the sensor surface, and elastic deformation (mapping a three-dimensional object onto a two-dimensional sensor surface).
In a fingerprint identification transaction, a subject does not claim any identity. The individual merely presents their fingerprint to the recognition system, for the system to identify the individual. The system then has to go through the entire database of stored fingerprint templates, comparing the template generated from the presented fingerprint with all the stored templates of fingerprints in the database. It is a 1:M comparison, where M is the total number of records in the database.
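The 1:M transaction can be pictured as a loop of 1:1 comparisons over the database. The sketch below assumes some pairwise scoring function score_fn (for example, the minutiae match score sketched later in this description); the dict-shaped database and all names are illustrative.

```python
def identify(query_template, database: dict, score_fn, threshold: int):
    """1:M identification: compare the query template against all M stored
    templates and return the best-scoring identity if it clears the threshold."""
    best_id, best_score = None, -1
    for record_id, reference_template in database.items():
        score = score_fn(query_template, reference_template)
        if score > best_score:
            best_id, best_score = record_id, score
    return best_id if best_score >= threshold else None
```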
It is easy to begin to believe that, for a 1:M comparison, the correspondences between minutiae pairs are, in most instances, significantly fewer. This, however, is not the case, because two templates extracted from different fingers could have large similarities - known as interclass similarities. These similarities are due to the fact that there are only three major types of fingerprint patterns (arch, loop, and circular). All of these realities suggest that the fingerprint minutiae matching exercise is a complicated and sophisticated point pattern recognition problem. FIG. 3 depicts a high-level flow diagram of a known solution 10 that could be used for solving fingerprint minutiae matching problems. The input 12 to a fingerprint minutiae matcher 14 is a set of two minutiae templates (16, 18), and the output 20 is a match score 22, whereby the system determines the user's identity by comparing the match score 22 to a threshold set by the administrator. The minutiae templates (16, 18) are generated from captured fingerprint images whose rotation and/or translation have not been corrected. As shown, the minutiae matcher 14 is made up of three components: (i) pairwise similarity 24, (ii) templates registration 26, and (iii) minutiae correspondence 28.
The ultimate goal in fingerprint minutiae matching is to be able to take two minutiae templates and determine the degree of similarity between the two. In a typical fingerprint recognition system, there is a database containing a multiplicity of pre-stored minutiae templates. When a subject presents their fingerprint to the recognition system for authentication, the system extracts the minutiae from the presented fingerprint, generates data relating to a minutiae template of the extracted minutiae, and compares it to data relating to a pre-stored minutiae template. The template extracted from the presented fingerprint is commonly referred to as the query template 16, while a pre-stored template is commonly referred to as the reference template 18.
A similar minutiae pair (one minutia from the query template and the other from a reference template) is determined through analysing each minutia relative to its neighbourhood in the same template - that is, locally. Through this local analysis, minutiae descriptors are invariant to both rotation and translation. This, essentially, is preliminary matching, based on the minutiae local structures.
Due to variations in finger placement, the minutiae from the reference and query templates must be optimally aligned before matching - that is, rotation and translation effects should be eliminated. In the fingerprint minutiae matching community, the correction of rotation and translation in a fingerprint template is often referred to as template registration. Accordingly, a template that is free from rotation and translation errors is referred to as a registered template. The similar minutiae pairs, determined from the previous process, serve as registration parameters used to bring the two templates into alignment with each other.
After registering the two templates, the minutiae matcher determines the number of pairs of matching minutiae. These are two minutia points in the different templates that have a similar location and direction. Each minutia of a particular template is hereby analysed outside its neighbourhood and against the other template - that is, globally. Two minutia points have a similar location if the Euclidean distance between them is less than a pre-specified threshold, and they have a similar direction if the arithmetic difference between their angle descriptors is, as well, less than a pre-specified threshold. Each minutia in the query template is associated with a maximum of one minutia in the reference template. This, in essence, is final matching, based on the minutiae global structures between the respective templates. The number of pairs of matching minutiae determines the match score (S) which, when compared to a pre-specified threshold (T), grants an ACCEPT (if S >= T) or REJECT (if S < T) decision.
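The final-matching rule just described can be sketched as follows, assuming both templates are already registered. The thresholds, the greedy one-to-one pairing strategy, and the tuple representation of minutiae are assumptions made for illustration.

```python
import math

def angle_difference(a: float, b: float) -> float:
    """Smallest absolute difference between two angle descriptors, in degrees."""
    d = abs(a - b) % 360.0
    return min(d, 360.0 - d)

def match_score(query, reference, dist_thresh=15.0, angle_thresh=20.0):
    """Count pairs of matching minutiae between two registered templates.

    Each template is a sequence of (x, y, angle) tuples; each query minutia
    is associated with at most one reference minutia, as described above.
    """
    used, score = set(), 0
    for qx, qy, qa in query:
        for i, (rx, ry, ra) in enumerate(reference):
            if i in used:
                continue
            if (math.dist((qx, qy), (rx, ry)) < dist_thresh
                    and angle_difference(qa, ra) < angle_thresh):
                used.add(i)
                score += 1
                break
    return score

def decide(score: int, threshold: int) -> str:
    """ACCEPT if S >= T, otherwise REJECT, as in the passage above."""
    return "ACCEPT" if score >= threshold else "REJECT"
```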
It is well known to use a combination of minutiae local structures (i.e. minutiae in the same template) for preliminary matching and registration, and minutiae global structures (i.e. corresponding minutiae between the templates), for final matching. However, since fingerprint matching is an extremely sophisticated problem, known methods, including method 10, for executing fingerprint minutiae matching are complicated and computationally expensive.
It is accordingly an object of the present invention to provide a method of and system for comparing a first fingerprint to a second fingerprint with which the applicant believes the aforementioned problems may at least be alleviated and/or which may provide a useful alternative to the known systems and/or methods.
SUMMARY OF INVENTION
According to a first aspect of the invention, there is provided a method of pre-registering a minutiae template for use in fingerprint matching, the method comprising:
providing data relating to a captured image of a fingerprint;
generating data relating to at least one singular point of the fingerprint;
adjusting either the orientation or location or both the orientation and location of the fingerprint by utilizing the data relating to the at least one singular point of the fingerprint; and
generating data relating to fingerprint minutiae of the adjusted fingerprint image.
According to a second aspect of the invention, there is provided a system of pre-registering a minutiae template for use in fingerprint matching, the system comprising:
a processor; and
a memory connected to the processor, the memory containing instructions which when executed by the processor, cause the processor to:
provide data relating to a captured image of a fingerprint; generate data relating to at least one singular point of the fingerprint; adjust either the orientation or location or both the orientation and location of the fingerprint by utilizing the data relating to the at least one singular point of the fingerprint; and generate data relating to fingerprint minutiae of the adjusted fingerprint image.
According to a third aspect of the invention, there is provided a non-transitory computer-readable device storing instructions thereon which when executed by a processor of a computing device performs the functions of:
providing data relating to a captured image of a fingerprint;
generating data relating to at least one singular point of the fingerprint;
adjusting either the orientation or location or both the orientation and location of the fingerprint by utilizing the data relating to the at least one singular point of the fingerprint; and
generating data relating to fingerprint minutiae of the adjusted fingerprint.
According to a fourth aspect of the invention, there is provided a method of matching fingerprint images, the method comprising the steps of:
providing data relating to a pre-registered, minutiae query template of a first, query fingerprint image;
providing data relating to a pre-registered, minutiae reference template of a second, reference fingerprint image;
comparing the data relating to the pre-registered, minutiae query template and pre-registered, minutiae reference template; and
generating data relating to a match score based on the number of fingerprint minutiae of the pre-registered, minutiae query template that matches the fingerprint minutiae of the pre-registered, minutiae reference template of the second, reference fingerprint image.
According to a fifth aspect of the invention, there is provided a system for matching fingerprint images, the system comprising: a processor; and
a memory that is connected to the processor, the memory containing instructions which when executed causes the processor to:
provide data that is related to a pre-registered, minutiae query template of a first, query fingerprint image and data relating to a pre-registered, minutiae reference template of a second, reference fingerprint image;
compare the pre-registered, minutiae query template of the first fingerprint image and the pre-registered, minutiae reference template of the second fingerprint image; and
generate data relating to a match score based on the number of fingerprint minutiae of the pre-registered, minutiae query template that matches the fingerprint minutiae of the pre-registered, minutiae reference template of the second fingerprint image.
According to a sixth aspect of the invention, there is provided a non-transitory computer-readable device storing instructions thereon which when executed by a processor of a computing device performs the functions of:
providing data relating to a pre-registered, minutiae query template of a first, query fingerprint image;
providing data relating to a pre-registered, minutiae reference template of a second, reference image;
comparing the data relating to the pre-registered, minutiae query template and pre-registered, minutiae reference template; and generating data relating to a match score based on the number of fingerprint minutiae of the pre-registered, minutiae query template that matches the fingerprint minutiae of the pre-registered, minutiae reference template.
BRIEF DESCRIPTION OF DRAWINGS
The objects of this invention and the manner of obtaining them will become more apparent, and the invention itself will be better understood, by reference to the following description of embodiments of the invention taken in conjunction with the accompanying diagrammatic drawings, wherein:
FIG. 1 is an example minutiae fingerprint template utilised in fingerprint matching techniques;
FIG. 2 is an example fingerprint thinned image with minutiae - a ridge ending
(denoted by a square) and a ridge bifurcation (denoted by a cross);
FIG. 3 is a high-level prior art flow diagram illustrating known steps for matching a first fingerprint to a second fingerprint;
FIG. 4 is a high-level flow diagram illustrating steps for generating a pre-registered minutiae template, in accordance with the invention;
FIG. 5 is a high-level flow diagram illustrating further steps for matching a first fingerprint to a second fingerprint, in accordance with the invention;
FIG. 6 is a high-level block diagram illustrating a system for matching a first fingerprint image to a second fingerprint image, in accordance with the invention;
FIG. 7 is an example mask image of a captured image of a fingerprint;
FIG. 8 is an example of a captured image of the first fingerprint, wherein the first fingerprint is rotated in a counter-clockwise direction;
FIG. 9 is a schematic representation showing, as a line, the interface of the foreground and background of the first fingerprint of FIG. 8; and
FIG. 10 is an adjusted or corrected image of the first fingerprint of FIG. 9.
DETAILED DESCRIPTION OF AN EXAMPLE EMBODIMENT
The following description of the invention is provided as an enabling teaching of the invention. Those skilled in the relevant art will recognise that many changes can be made to the embodiment described, while still attaining the beneficial results of the present invention. It will also be apparent that some of the desired benefits of the present invention can be attained by selecting some of the features of the present invention without utilising other features. Accordingly, those skilled in the art will recognise that modifications and adaptations to the present invention are possible and can even be desirable in certain circumstances, and are a part of the present invention. Thus, the following description is provided as illustrative of the principles of the present invention and not a limitation thereof.
Referring to FIGS. 1 to 10, in which like features are indicated by like numerals, example embodiments of a method of and a system for comparing a first fingerprint image to a second fingerprint image, or parts thereof, are generally designated by the reference numeral 100, as shown in FIGS. 4 to 6.
Referring to FIG. 4, there is shown a flow diagram of a first part of the method 100 of comparing or matching a first fingerprint image to a second fingerprint image, whereby a pre-registered minutiae template is generated. The method comprises, at 102, providing data relating to a first captured image of the first fingerprint. It will be appreciated that the first fingerprint image could be captured by way of a fingerprint reader, such as the type shown in FIG. 6 as 124, or by scanning a physical representation of a fingerprint. At 104, data relating to at least one singular point of the first fingerprint is generated. As previously stated, the singular point could include any one of a fingerprint core, fingerprint delta, and/or the centroid of a fingerprint foreground. Singular points are also referred to, and known by those skilled in the art, as global fingerprint features. In a preferred embodiment, the step of generating the singular point includes estimating a foreground centroid of the first fingerprint. The step of estimating the foreground centroid of the first fingerprint includes using a foreground separation module (not shown) as described in the corresponding PCT Application claiming priority from South African Patent Application No. 2017/05008, which is incorporated herein by reference.
Typically, the foreground separation module (not shown) is configured to separate or segment a foreground of the captured image of the first fingerprint from the background of the captured image via a variance-based technique, based on the fact that there is high variance in the foreground of the captured image, and low variance in the background thereof. More specifically, the variance in darkness of pixels in the foreground and background of the enhanced image is, for example, analysed by the foreground separation module (not shown). Typically, a heuristically determined variance threshold would be utilised to separate the foreground from the background. The input of the foreground separation module (not shown) is data relating to the enhanced image, and the output of the foreground separation module (not shown) is data relating to an exemplary mask image 42, shown in FIG. 7. The foreground 38 (as shown in FIG. 7) is, for example, assigned an intensity value of 255 (grayscale white), whereas the background 40 is assigned an intensity value of 0 (grayscale black). The foreground 38 is hence separated from the background 40 by applying a mask to the enhanced image of the fingerprint, thereby masking the background 40 to yield the foreground 38.
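A block-wise version of this variance test is sketched below. The block size and the variance threshold are placeholders (the patent describes the threshold as heuristically determined), and the function name is invented.

```python
import numpy as np

def foreground_mask(image: np.ndarray, block: int = 16,
                    var_thresh: float = 100.0) -> np.ndarray:
    """Variance-based foreground/background mask of a grayscale image.

    High-variance blocks are marked as fingerprint foreground (255, white)
    and low-variance blocks as background (0, black), as in FIG. 7.
    """
    h, w = image.shape
    mask = np.zeros((h, w), dtype=np.uint8)
    for y in range(0, h, block):
        for x in range(0, w, block):
            if image[y:y + block, x:x + block].var() > var_thresh:
                mask[y:y + block, x:x + block] = 255
    return mask
```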
FIG. 8 shows an example of a captured image 44 of the first fingerprint 30 in accordance with the present invention, but with the first fingerprint 30 being skewed, i.e. rotated in a counter-clockwise direction with respect to the vertical. In practice, it is difficult for a subject whose fingerprint is being captured to impress the finger correctly without rotating or pivoting it. The angle at which the fingerprint image is captured by a fingerprint reader (not shown), with respect to the vertical, forms part of a fingerprint quality measure. There is a need for an efficient reference point location technique that is able to process all types of fingerprints, and that preferably does so as part of the early processes of fingerprint manipulation and analysis.
Upon separating the foreground from the background of the captured image of the first fingerprint 30, a centroid location module (not shown) would be used to receive as input data relating to the mask image, similar to that of FIG. 7, which corresponds with the captured image of the first fingerprint 30 as shown in FIG. 8.
FIG. 9 is a schematic two-dimensional representation 46 of the captured image 44 of FIG. 8 illustrating the outline 48 of the foreground 38 of the first fingerprint 30. The location of the centroid (x_fc, y_fc) of the foreground 38 is estimated through two sets of both horizontal and vertical navigation in the foreground area 38, which would ultimately define four navigational coordinates, the mean of which would define the coordinates of the centroid (x_fc, y_fc). The x-coordinate, x_fc, and y-coordinate, y_fc, are determined through the process of navigating horizontally and vertically (that is, in the x-axis direction and y-axis direction of a Cartesian plane) within the foreground 38 of the captured image 44 of the skewed fingerprint, as will be described in more detail below.
The starting point of each navigation set is the centre of the fingerprint image background (x_b, y_b), where x_b and y_b are the first and second reference point coordinates of the fingerprint image background 40. Preferably, the centre of the image background 40 would fall within the foreground area 38. For an image of width w and height h, the x-coordinate of the background centre is given by:

x_b = 0.5 × w,

while the y-coordinate of the background centre is given by:

y_b = 0.5 × h.

Accordingly, the method of determining/estimating the location of the centroid (x_fc, y_fc) in accordance with the invention takes place in several interlinked stages. The first stage includes conducting a first navigation from the reference point (x_b, y_b) towards a first direction (i.e. the right), and a navigation towards a second direction (i.e. the left). The first reference point coordinate x_b defines a first reference point axis A, and the second reference point coordinate y_b defines a second reference point axis C which is transverse, i.e. orthogonal, to the first reference point axis A. A horizontal, right-ward navigation starts at the point (x_b, y_b) and increases the value of the x-coordinate in steps/increments of 1 pixel of the captured digital image along the A axis in the first direction, while the y-coordinate remains unchanged. The value of this x-coordinate is increased up to the pixel that marks the interface between the foreground 38 and the background 40, if it exists. Alternatively, it is increased up to the pixel that marks the right-most edge of the image background 40 or that of the image 44. This extreme-right x-coordinate is recorded and stored as x_r1, being a first value. Similarly, the horizontal, left-ward navigation (i.e. navigation in a second direction that is opposite the first direction) starts at the point (x_b, y_b) and decreases the value of the x-coordinate in steps of 1 pixel of the captured image along the A axis in the second direction, while the y-coordinate remains constant. The value of the x-coordinate is decreased up to the pixel that marks the interface between the foreground 38 and the background 40, if it exists. Alternatively, it is decreased up to the pixel that marks the left-most edge of the image background 40 or that of the image 44. This extreme-left x-value is recorded and stored as x_l1, being a second value. Following this two-stage horizontal navigation along the A axis in the first and second directions, it now becomes possible to compute the x-coordinate (i.e. x_f1) of a first navigational reference point (or first navigation set), being a third value. The third value is given by:

x_f1 = (x_r1 + x_l1) / 2
As can be seen in FIG. 9, the first navigation reference point (or first navigational set) is defined by first and second navigational coordinates x_f1, y_b. The first navigational coordinate x_f1 defines a first navigational axis that corresponds with the first reference point axis A (i.e. horizontal axis A), and the second navigational coordinate y_b defines a second navigational axis B (i.e. vertical axis B) which is transverse to the first reference point axis A.
The second stage includes conducting a second navigation (i.e. a vertical navigation along the B-axis) with respect to the first navigation set having first and second navigation coordinates Xf1, yo. This vertical navigation occurs in two stages, namely, the upward navigation (in a third direction along the second navigational axis B) and the downward navigation (in a fourth direction that is opposite the third direction along the second navigational axis B). The upward navigation (i.e. in the third direction along the axis B). however, starts at the point (Xf1. yb) and decreases - in steps of 1 pixel (i.e. the value of the y-coordinate). while the x-coordinate (i.e. Xf1) remains unchanged. The y-coordinate is decreased up to the pixel that marks the interface between the foreground 38 and the background 40, if it exists. Alternatively, it is decreased up to the pixel that marks the upper-most edge of the image background 40 or that of the image 44 This extreme-top y-coordinate is recorded and stored as yu1, being a fourth value. The downward navigation (in the fourth direction), also, starts at the point (Xf1. yb) and increases - in steps of 1 pixel (i.e. the value of the y-coordinate). while the x-coordinate (i.e. Xf1) remains constant. The value of the y-coordinate is increased up to the point that marks the interface between the foreground 38 and the background 40 , if it exists. Alternatively, it is increased up to the pixel that marks the lower-most edge of the image background 40 or that of the image 44. This extreme-bottom y-value is recorded and stored as yl1, being a fifth value. Following this two-stage vertical navigation, it now becomes possible to compute the y-coordinate (i.e. yti ) of a second navigation set (i.e. second navigation reference point), being a sixth value, as follows:
yf1 = (yu1 + yl1) / 2
It follows from the above description that the vertical navigation thus uses the third value (xf1) in determining the sixth value (yf1). As can be seen in FIG. 9, the second navigation reference point has the navigation coordinates xf1, yf1.
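Continuing the same sketch, the second (vertical) navigation holds xf1 fixed and scans along axis B; rounding xf1 to a whole pixel column is an implementation assumption.

```python
# Second navigation (vertical, along axis B), starting at (xf1, yb):
xi = int(round(xf1))                                  # pixel column for xf1
_, yu1 = scan_to_interface(fg_mask, xi, yb, 0, -1)    # fourth value (upward)
_, yl1 = scan_to_interface(fg_mask, xi, yb, 0, +1)    # fifth value (downward)
yf1 = (yu1 + yl1) / 2.0                               # sixth value, yf1
```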
A third navigation set is determined in the same way as above; however, it occurs in reverse, whereby it commences with the vertical navigation along the vertical axis C in a fifth (upward) direction and a sixth (downward) direction, with respect to the starting point (xb, yb). Similar to the above-described process, yu2, being a seventh value, and yl2, being an eighth value, are determined, and accordingly the y-coordinate of the third navigation set (i.e. third navigation reference point), being a ninth value, is determined as yf2 = (yu2 + yl2) / 2. As can be seen in FIG. 9, the third navigation set includes the navigation coordinates xb, yf2, which define a fourth navigation axis D that is parallel to the first reference point axis A and transverse to the vertical axis C.
Subsequently, a fourth navigation set is determined by performing the horizontal navigation along the D axis in a seventh (rightward) direction and an eighth (leftward) direction, with starting point (xb, yf2). Accordingly, similar to the above-described procedure, xr2 and xl2, which are tenth and eleventh values, are determined. The x-coordinate of the fourth navigation set, being a twelfth value, is then determined as xf2 = (xr2 + xl2) / 2.
Following the above navigations, the centroid location module (not shown - however described in detail in the corresponding PCT Application that claims priority from South African Patent Application No. 2017/05008) is able to accurately estimate the centroid location of the foreground 38 by calculating the respective averages of the aforementioned determined third and twelfth values, and sixth and ninth values. A first foreground coordinate, in the x-direction, of the centroid location is determined by:
xfc = (xf1 + xf2) / 2
Also, a second foreground coordinate, in the y-direction, of the centroid location is determined by:
yfc = (yf1 + yf2) / 2
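The four navigations can be collected into a single routine. This is a minimal sketch under the same assumptions as the earlier snippets, reusing scan_to_interface and using integer pixel indices for the intermediate scan lines.

```python
def estimate_centroid(fg_mask, xb, yb):
    """Estimate the foreground centroid (xfc, yfc) from the four
    interface-to-interface navigations about the reference point."""
    # First navigation: horizontal through (xb, yb) -> xf1 (third value).
    xr1, _ = scan_to_interface(fg_mask, xb, yb, +1, 0)
    xl1, _ = scan_to_interface(fg_mask, xb, yb, -1, 0)
    xf1 = (xr1 + xl1) // 2
    # Second navigation: vertical through (xf1, yb) -> yf1 (sixth value).
    _, yu1 = scan_to_interface(fg_mask, xf1, yb, 0, -1)
    _, yl1 = scan_to_interface(fg_mask, xf1, yb, 0, +1)
    yf1 = (yu1 + yl1) // 2
    # Third navigation: vertical through (xb, yb) -> yf2 (ninth value).
    _, yu2 = scan_to_interface(fg_mask, xb, yb, 0, -1)
    _, yl2 = scan_to_interface(fg_mask, xb, yb, 0, +1)
    yf2 = (yu2 + yl2) // 2
    # Fourth navigation: horizontal through (xb, yf2) -> xf2 (twelfth value).
    xr2, _ = scan_to_interface(fg_mask, xb, yf2, +1, 0)
    xl2, _ = scan_to_interface(fg_mask, xb, yf2, -1, 0)
    xf2 = (xr2 + xl2) // 2
    # Thirteenth and fourteenth values: the centroid estimate.
    return (xf1 + xf2) / 2.0, (yf1 + yf2) / 2.0

xfc, yfc = estimate_centroid(fg_mask, xb, yb)
```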
It should be appreciated that in further example embodiments of the invention, the coordinates of the centroid location could be determined directly as the arithmetic mean of the coordinates of all the points or pixels in the foreground 38 of the image 44.
As can be seen in FIG. 9, the first foreground coordinate, xfc, defines a first foreground axis Fx, being a first foreground axis of the location of the estimated centroid of the foreground 38. The second foreground coordinate, yfc, defines a second foreground axis Fy, being a second foreground axis of the location of the estimated centroid of the foreground 38. Again, as seen in FIG. 9, the foreground 38 defines a longitudinal axis Y which is slanted relative to the substantially vertical, second foreground axis Fy. The longitudinal axis Y passes through an upper edge G of the foreground 38 and the centroid. In an embodiment, the upper edge G is determined by navigating upwardly, in steps of 1 pixel, from the foreground centroid (xfc, yfc) (while maintaining the x-coordinate, xfc, constant) up to the point that marks the interface between the foreground 38 and the background 40. In FIG. 9, this point is marked as E, and is referred to herein as a theoretical position at which an upper edge of the foreground 38 should be located. This point E has coordinates xe, ye, and the ye coordinate has a fifteenth value. By way of example, the x-values on the left-hand side of point E would be negative, and those on the right-hand side would be positive. Now, in order to establish whether the foreground 38 is skewed with respect to the vertical, the method further includes navigating towards the right up to a point that marks the interface between the foreground 38 and the background 40, to establish a sixteenth value in the x-direction. In this example, as shown in FIG. 9, navigating horizontally to the right will immediately fall into the background 40, indicating that the foreground is not skewed in the clockwise direction. The method also includes navigating horizontally to the left in increments of 1 pixel up to a point, G, that marks the interface between the foreground 38 and the background 40. The point G, referred to herein as the actual upper edge of the skewed foreground 38, has coordinates xg, yg, and the xg coordinate has a seventeenth value.
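A sketch of this edge search, reusing the scanning helper from the earlier snippets. Taking the probe that lands on a foreground pixel as marking G is one plausible reading of the description, not its prescribed implementation.

```python
# Locate the theoretical upper edge E straight above the centroid,
# then probe horizontally from E for the actual upper edge G.
xe, ye = scan_to_interface(fg_mask, int(xfc), int(yfc), 0, -1)   # point E
xg_r, _ = scan_to_interface(fg_mask, xe, ye, +1, 0)              # rightward probe
xg_l, _ = scan_to_interface(fg_mask, xe, ye, -1, 0)              # leftward probe
if fg_mask[ye, xg_l]:        # left probe met the foreground: anticlockwise skew
    xg = xg_l
elif fg_mask[ye, xg_r]:      # right probe met the foreground: clockwise skew
    xg = xg_r
else:
    xg = xe                  # neither probe met the foreground: no skew detected
```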
The method further includes calculating the vertical length/distance between the point ye (i.e. the fifteenth value) and the foreground centroid y-coordinate, yfc (i.e. the fourteenth value), and also calculating the horizontal distance between the point xg and the point xe. Once these distances have been computed, the method includes calculating an angle a as follows:
tan(a) = (horizontal distance between point xg and point xe) / (vertical distance between point ye and point yfc)

a = arctan{(horizontal distance between point xg and point xe) / (vertical distance between point ye and point yfc)}
In general, when the resultant value of a is negative, the negative value indicates that the foreground is skewed in the anti-clockwise direction, and the fingerprint image 44 would need to be rotated in the clockwise direction by the value of a. Similarly, a positive value of a indicates that the foreground 38 is skewed in the clockwise direction and needs to be rotated in the anti-clockwise direction in order to correct the orientation of the captured fingerprint image. In general, the angle a defined between the longitudinal axis Y and the second foreground axis Fy indicates the extent to which the point G of the foreground 38 (and essentially the foreground 38 itself) is skewed with respect to the second foreground axis Fy (as shown in FIG. 8).
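In code, the angle follows directly from the two distances. The sign convention below (G to the left of E yields a negative a, i.e. anticlockwise skew) is one consistent reading of the passage above and should be checked against the coordinate system in use.

```python
import math

horizontal = xg - xe      # negative when G lies to the left of E
vertical = yfc - ye       # centroid-to-E distance (positive; y grows downward)
a_degrees = math.degrees(math.atan2(horizontal, vertical))
# a_degrees < 0: anticlockwise skew -> rotate the image clockwise by |a|.
# a_degrees > 0: clockwise skew     -> rotate the image anticlockwise by |a|.
```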
As mentioned previously, a foreground centroid, as shown in FIG. 9 with the coordinates xfc, yfc, which is determined by the centroid location module (not shown) as described above, is preferably defined as the accurate geometric centre of the foreground 38 of the first fingerprint 30. As mentioned previously, the location of this fingerprint centroid may be performed via locating a geometric centre of the second example image 44, which corresponds to the arithmetic mean or average position of all the points or pixels in the foreground 38 of the image 44 with respect to a reference point of the schematic representation 46 of the image 44, as described previously. At 106, the first fingerprint 30 is adjusted by any one of its orientation and location by utilizing the data relating to at least one singular point (i.e. the estimated centroid) of the first fingerprint. By way of example, at 106, data relating to the centroid, such as the second foreground axis Fy and the coordinates thereof, xfc, yfc, is utilised as an input for a rotation estimation module (not shown - however described in detail in the corresponding PCT Application that claims priority from South African Patent Application No. 2017/05008). The rotation estimation module (not shown) is configured to estimate the orientation angle a between, for example, the longitudinal axis Y of the foreground, as described above, and the second foreground axis Fy.
The output of the orientation estimation module (not shown) is a data structure comprising the angle a and a rotation direction, such as clockwise or anti-clockwise, as described above, indicating the direction in which the captured image 44 (i.e. the combination of the foreground and background thereof) must be rotated by a from the longitudinal axis Y in the direction of the second foreground axis Fy. This data structure may be stored in a database (not shown). As described above, when the determined angle a is negative (i.e. when the foreground axis Y lies within the second quadrant of the Cartesian plane with respect to the centroid coordinates), the orientation module (not shown) rotates the image in the clockwise direction. Equally, when the determined angle a is positive (i.e. when the foreground axis Y lies in the first quadrant of the Cartesian plane with respect to the centroid coordinates), the orientation module (not shown) rotates the image 44 in the anti-clockwise direction.
An orientation adjustment module (not shown - however described in detail in the corresponding PCT Application that claims priority from South African Patent Application No. 2017/05008) is configured to generate a corrected image 52 (as shown in FIG. 10) by pivoting or rotating the image 44 by the estimated orientation angle a. In this example, the corrected image 52 is generated by pivoting or rotating the captured image 44 of the skewed, first fingerprint 30 by the angle a in the clockwise direction, i.e. in the direction of the foreground axis Fy. The resultant/corrected image 52 will have the longitudinal axis Y thereof taking the place of the second foreground axis Fy (i.e. being substantially upright), and the second foreground axis Fy will accordingly be rotated in the clockwise direction by the estimated angle a, as shown in FIG. 10. As can be seen from FIG. 10 of the drawings, the foreground centroid, which is denoted therein by reference numeral 50, is not aligned with the centre (xb, yb) of the captured image 44, i.e. it is translated/skewed with respect to the centre of the captured image 44. In this regard, a translation module (not shown) is also provided at 106, which is arranged to, for example, move the centroid 50 of the foreground 38, along with the foreground 38 of the corrected image 52, until the location of the centroid 50 corresponds with the centre, i.e. the xb, yb coordinates, of the captured image 44.
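A minimal sketch of this rotate-then-translate correction, using SciPy's ndimage routines on a grayscale array. The rotation sign, the interpolation order, and the shortcut of reusing the pre-rotation centroid (rather than re-estimating it on the corrected image, as the description implies) are all assumptions of this sketch.

```python
from scipy import ndimage

def register_fingerprint(image, a_degrees, xfc, yfc, xb, yb):
    """Rotate the captured image by the estimated skew angle, then
    translate it so the foreground centroid lands on (xb, yb)."""
    # Rotate about the image centre; the sign of a_degrees must match
    # the clockwise/anticlockwise convention of the angle estimate.
    corrected = ndimage.rotate(image, a_degrees, reshape=False,
                               order=1, mode="constant", cval=0)
    # ndimage.shift expects (row, col) = (y, x) offsets, so this moves
    # the centroid (xfc, yfc) onto the image centre (xb, yb).
    corrected = ndimage.shift(corrected, (yb - yfc, xb - xfc),
                              order=1, mode="constant", cval=0)
    return corrected
```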
Computing processes of this kind, i.e. rotating the captured image 44 of the first fingerprint 30 and aligning the first fingerprint 30 in the centre of the captured image 44, are commonly referred to by those skilled in the art as fingerprint registration. At 108, the minutiae of the adjusted first fingerprint 30 (as shown in FIG. 10) are determined, and data relating to such extracted minutiae is generated and compiled in the form of a registered, minutiae query template at 110 (similar to the one depicted in FIG. 1).
FIG. 5 illustrates a flow diagram of a second part of the method 100 of comparing or matching the first fingerprint image to a second fingerprint image, as described herein. In accordance with the present invention, in order to verify the subject associated with the first fingerprint 30, the method 100 needs to match the registered template 110, hereinafter referred to as a pre-registered, minutiae query template 110, to a pre-registered, minutiae reference template 112 of a second fingerprint image (i.e. a reference image). Typically, the data relating to the pre-registered, minutiae reference template 112 would have been pre-stored on a database. It should be appreciated that before the pre-registered, minutiae reference template 112 was stored on the database, its corresponding captured image (i.e. the reference image) would have been subjected to the same steps as explained above with reference to the first fingerprint.
The pre-registered, minutiae query template 110 and pre-registered, minutiae reference template 112 thus serve as inputs 114 to a fingerprint matching module 116. At 118, the module 116 accesses the database and collects data relating to the pre-registered, minutiae reference template 112, and also collects the data relating to the pre-registered, minutiae query template 110, and compares the two templates 110, 112 for minutiae correspondences. The output 120 of the matching module 116 is thus data relating to a match score 122 based on the number of fingerprint minutiae of the adjusted first fingerprint image that match the fingerprint minutiae of the second, reference fingerprint image. In its comparison, the matching module 116 compares each minutia entry of the pre-registered, minutiae query template 110 against each minutia entry of the pre-registered, minutiae reference template 112. As seen in FIG. 5, the newly proposed minutiae matching module 116 is made up of only one component, being minutiae correspondence 118. The registration of templates is done outside the fingerprint minutiae matching module 116, a decision that significantly simplifies the matching exercise that needs to be performed by the matching module 116, reducing it to only finding minutiae correspondences 118.
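A sketch of such a correspondence count over two pre-registered templates follows. The template layout (x, y, direction in degrees, type), the distance/angle tolerances, and the normalisation of the score are illustrative assumptions, since the description does not fix them.

```python
import math

def match_score(query, reference, dist_tol=10.0, angle_tol=15.0):
    """Count minutiae correspondences between two pre-registered
    templates; each minutia is a (x, y, direction_degrees, type) tuple."""
    matched, used = 0, set()
    for qx, qy, qd, qt in query:
        for j, (rx, ry, rd, rt) in enumerate(reference):
            if j in used or qt != rt:
                continue
            close = math.hypot(qx - rx, qy - ry) <= dist_tol
            dd = abs(qd - rd) % 360
            aligned = min(dd, 360 - dd) <= angle_tol
            if close and aligned:
                matched += 1
                used.add(j)
                break
    # Normalise by the smaller template so the score lies in [0, 1].
    return matched / max(1, min(len(query), len(reference)))

score = match_score(
    [(100, 120, 30, "ending"), (140, 180, 90, "bifurcation")],
    [(102, 118, 28, "ending"), (300, 310, 45, "ending")],
)
print(score)   # 0.5: one of the two query minutiae found a correspondence
```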
Minutiae-based matching essentially consists of finding the alignment between the query template 110 and a reference template 112 that results in the maximum number of minutiae pairings and/or correspondences. This, therefore, implies that template registration is mandatory in order to maximize the number of matching minutiae.
Correctly registering two fingerprint templates certainly requires displacement and rotation to be corrected; however, this does not have to be done on the basis of the minutiae local structures (i.e. ridge endings, bifurcations, and the like). Nor does it have to be done on the basis of the minutiae global structures (i.e. comparing minutiae local structures of one template with minutiae local structures of another template).
This invention proposes that template registration should be done through fingerprint registration, on the basis of the fingerprint global structures (i.e. any one of a fingerprint core, a fingerprint delta, and the centroid of a fingerprint foreground). This implies that a fingerprint should first be optimally registered (i.e. its orientation and placement must be adjusted) before extracting a minutiae template. Matching high-quality fingerprints with small intra-class variations is not difficult, and every reasonable algorithm can do it.
For registration based on minutiae local structures, the ridges associated with the minutiae are used to estimate the alignment parameters. This implies that the size of the templates has to be large, otherwise the alignment will not be accurate. A large minutiae template takes up a lot of storage space and, upon finding minutiae correspondences, uses up a great deal of computing resources. A large template, however, does not guarantee accurate registration, because of the possibility of the presence of spurious minutiae points. A global process, on the other hand, is free of spurious features and is not as complex as any process based on local structures. This immediately implies that global registration has more advantages than local registration.
Global features of a fingerprint (not minutiae) are those formed by the high-level ridge pattern, while the local features are those formed by the lower-level ridge pattern. Fingerprint singular points are examples of global features, while fingerprint minutiae are examples of local features. From the captured fingerprint, the first step is to locate a global reference point to be used for rotation and translation correction, as described above.
FIG. 6 shows a high-level block diagram illustrating a system 100 for comparing or matching a first fingerprint image to a second fingerprint image. A fingerprint reader 124 captures an image relating to the first fingerprint. Data relating to the captured image of a fingerprint is then subjected to a plurality of computing processes 126, as described above with reference to the method 100, and is then stored on a first database 128. The system 100 preferably comprises the first database 128 and a processor 130 which is connected to the first database 128. A memory (not shown) is connected to the processor 130 and is configured to utilise the data relating to the fingerprint image, and has instructions, i.e. a fingerprint matching algorithm 116, stored thereon which are configured to be executed by the processor 130. In FIG. 6, the minutiae reference template 112, which is described above, is arranged to be stored on a second database 132.
It will be appreciated that a backend 134 for an operator may be utilised, comprising the processor 130 and the second database 132. However, other embodiments may be possible wherein the first database 128 and the second database 132 form a single database. The invention as described hereinabove describes an approach that uses global structures, such as the singular points, in particular the centroid of a foreground of the captured image of a fingerprint, for registration (i.e. orientation and placement), and minutiae global structures for matching. This approach generates a pre-registered, minutiae template for both storage and comparison, and trivializes the matching exercise because it reduces it to just finding minutiae correspondences. In addition, it allows for the creation of a compact minutiae template, allows for accurate and seamless registration, and is free from spurious (often local) features.
It will be appreciated that there are many variations in detail on the invention as herein defined and/or described without departing from the scope and spirit of this disclosure.

Claims

1. A method of pre-registering a minutiae template for use in matching fingerprint images, the method comprising:
providing data relating to a captured image of a fingerprint; generating data relating to at least one singular point of the fingerprint; adjusting either the orientation or location or both the orientation and location of the fingerprint by utilizing the data relating to the at least one singular point of the fingerprint; and
generating data relating to fingerprint minutiae of the adjusted fingerprint image.
2. The method of claim 1, including the step of storing, in a database, the generated data relating to the fingerprint minutiae of the adjusted fingerprint image.
3. The method of claim 2, wherein the step of generating data relating to at least one singular point of the fingerprint includes establishing a centroid of a foreground of the captured image of the fingerprint, wherein the step of establishing the centroid of the foreground of the captured image of the fingerprint comprises:
segmenting a foreground and background of the captured image; and estimating a centroid of the foreground with respect to a predefined reference point located in one of the foreground and background of the captured image of the fingerprint.
4. The method of claim 3, wherein the step of estimating the centroid of the foreground comprises determining the coordinates of the centroid of the foreground, which coordinates are the mean values of the coordinates of the points of at least the foreground with respect to the predefined reference point, the step of estimating the centroid including the steps of:
defining a first reference point coordinate and a second reference point coordinate of the predefined reference point located in one of the background or foreground of the captured image, preferably the background of the captured image, wherein the first and second reference point coordinates define a first reference point axis and a second reference point axis that is transverse to the first reference point axis;
conducting a first navigation with respect to the predefined reference point including: calculating a first value in a first direction along the first reference point axis between the first reference point coordinate and one of the interface of the background and foreground of the captured image and periphery/edge of the background of the captured image; and calculating a second value in a second direction that is opposite the first direction along the first reference point axis between the first reference point coordinate and one of the interface of the background and foreground of the captured image and periphery/edge of the background of the captured image;
calculating a third value that is the average between the first value and the second value with respect to the first reference point coordinate, to define a first navigation reference point having a first navigation coordinate defining a first navigation axis that is coaxial with the first reference point axis, and having a second navigation coordinate that corresponds with the second reference point coordinate and defines a second navigation axis that is transverse to the first navigation axis; conducting a second navigation with respect to the first navigation reference point including:
calculating a fourth value in a third direction along the second navigation axis between the second navigation coordinate and one of the interface of the background and foreground of the captured image and periphery/edge of the background of the captured image; and calculating a fifth value in a fourth direction that is opposite the third direction along the second navigation axis between the second navigation coordinate and one of the interface of the background and foreground of the captured image and periphery/edge of the background of the captured image; and
calculating a sixth value that is the average between the fourth value and the fifth value with respect to the second navigation coordinate, to define a second navigation reference point having a third navigation coordinate defining a third navigation axis that is parallel to the first navigation axis, wherein the third navigation coordinate corresponds with the first navigation coordinate, and wherein the second navigation reference point has a fourth coordinate defining a fourth navigational axis that is coaxial with the second navigation axis.
5. The method of claim 4, wherein the step of estimating the centroid further comprises:
conducting a third navigation with respect to the reference point including:
calculating a seventh value in a fifth direction along the second reference point axis between the second reference point coordinate and one of the interface of the background and foreground of the captured image and periphery/edge of the background of the captured image; and calculating an eighth value in a sixth direction that is opposite the fifth direction along the second reference point axis between the second reference point coordinate and one of the interface of the background and foreground of the captured image and periphery/edge of the background of the captured image;
calculating a ninth value that is the average between the seventh value and the eighth value with respect to the second reference point coordinate, to define a third navigation reference point having a fifth navigation co-ordinate defining a fifth navigation axis that is coaxial with the second reference point axis, and having a sixth navigation coordinate that corresponds with the first reference point coordinate and defines a sixth navigation axis that is transverse to the fifth navigation axis;
conducting a fourth navigation with respect to the third navigation reference point including:
calculating a tenth value in a seventh direction along the sixth navigation axis between the sixth navigation coordinate and one of the interface of the background and foreground of the captured image and periphery/edge of the background of the captured image; and calculating an eleventh value in an eighth direction that is opposite the seventh direction along the sixth navigation axis between the sixth navigation coordinate and one of the interface of the background and foreground of the captured image and periphery/edge of the background of the captured image; and
calculating a twelfth value that is the average between the tenth value and the eleventh value with respect to the sixth navigation coordinate, to define a fourth navigation reference point having a seventh navigation coordinate that corresponds with the fifth navigation coordinate, and wherein the fourth navigation reference point has an eighth coordinate defining an eighth navigational axis that is coaxial with the sixth navigation axis.
6. The method of claim 5, wherein the step of estimating the centroid of the foreground further comprises:
determining the average of the third value and twelfth value to establish a thirteenth value that corresponds with a first co-ordinate of the centroid; and determining the average of the sixth value and ninth value to establish a fourteenth value that corresponds with a second co-ordinate of the centroid, wherein the first coordinate and second coordinate of the centroid are x and y coordinates of the Cartesian plane.
7. The method of claim 6, wherein the step of adjusting the orientation, placement, or both the placement and orientation of the fingerprint includes estimating an angle of orientation of a predefined point of the foreground of the captured image of the fingerprint with respect to a first foreground axis of the estimated foreground centroid, wherein the step includes defining the predefined point of the foreground of the captured image as an edge of the foreground, wherein the step of defining the edge of the foreground of the captured image includes the step of determining a location of the edge of the foreground, wherein the step of determining the location of the edge of the foreground includes:
navigating in a ninth direction from the estimated centroid along the first foreground axis between the centroid and an interface of the foreground and background of the captured image to determine a fifth navigation reference point having a ninth navigation coordinate that corresponds with the value of the x-coordinate of the estimated centroid, the ninth navigation coordinate defining a ninth navigational axis that is transverse to the first foreground axis, and wherein the fifth navigation reference point has a tenth navigation coordinate having a fifteenth value;
navigating in a tenth direction along the ninth navigational axis between the ninth navigation coordinate and an interface of the foreground and background, if it exists, to determine a sixth navigational reference point having a twelfth navigation coordinate having a sixteenth value, and a thirteenth navigation coordinate that corresponds with the value of the tenth navigation coordinate; whereas if it does not exist, navigating in an eleventh direction that is opposite the tenth direction along the ninth navigational axis between the ninth navigation coordinate and an interface of the foreground and background to determine a seventh navigational reference point having a fourteenth navigation coordinate having a seventeenth value, and a fifteenth navigation coordinate that corresponds with the value of the tenth navigation coordinate, wherein either one of the sixteenth value or seventeenth value defines an upper edge of the skewed foreground; and
determining the angle of orientation defined by the arctan of the fraction of a numerator which is defined by the difference of the sixteenth value and fifteenth value or seventeenth value and fifteenth value, and the denominator defined by the difference between the fifteenth value and fourteenth value.
8. The method of claim 7, including the step of determining the rotation direction and accordingly rotating the captured image of the fingerprint in that direction such that the upper edge of the corrected image of the fingerprint is substantially upright with respect to a vertical axis.
9. The method of claim 8, wherein the step of determining the rotation direction includes establishing whether the value of the determined angle of orientation is positive or negative, wherein a negative value of the angle of orientation indicates that the foreground is skewed in the anticlockwise direction and that the rotation direction for correcting the orientation of the fingerprint is clockwise, and wherein a positive value of the angle of orientation indicates that the foreground is skewed in the clockwise direction and that the rotation direction for correcting the orientation of the fingerprint is anticlockwise.
10. The method of claim 9, including the step of moving the centroid along with the foreground to a location of the predefined reference point to correct the placement of the fingerprint in the captured image.
11. A system of pre-registering a minutiae template for use in matching fingerprint images, the system comprising:
a processor; and
a memory connected to the processor, the memory containing instructions which when executed by the processor, cause the processor to: provide data relating to a captured image of a fingerprint; generate data relating to at least one singular point of the fingerprint; adjust either the orientation or location or both the orientation and location of the fingerprint by utilizing the data relating to the at least one singular point of the fingerprint; and
generate data relating to fingerprint minutiae of the adjusted fingerprint.
12. A non-transitory computer readable device storing instructions thereon which when executed by a processor of a computing device performs the functions of: providing data relating to a captured image of a fingerprint; generating data relating to at least one singular point of the fingerprint; adjusting either the orientation or location or both the orientation and location of the fingerprint by utilizing the data relating to the at least one singular point of the fingerprint; and
generating data relating to fingerprint minutiae of the adjusted fingerprint.
13. A method of matching fingerprint images, the method comprising the steps of: providing data relating to a pre-registered, minutiae query template of a first, query fingerprint image;
providing data relating to a pre-registered, minutiae reference template of a second, reference fingerprint image;
comparing the data relating to the pre-registered, minutiae query template and the pre-registered, minutiae reference template; and
generating data relating to a match score based on the number of fingerprint minutiae of the pre-registered, minutiae query template that matches the fingerprint minutiae of the pre-registered, minutiae reference template of the second, reference fingerprint image.
14. The method of claim 13, wherein the step of providing the data relating to the pre-registered, minutiae query template of a first fingerprint image comprises: providing data relating to a first captured image of the first fingerprint; generating data relating to at least one singular point of the first fingerprint;
adjusting either the orientation or location or both the orientation and location of the first fingerprint by utilizing the data relating to the at least one singular point of the first fingerprint; and
generating data relating to fingerprint minutiae of the adjusted first fingerprint.
15. The method of claim 14, including the step of storing, in a database, the generated data relating to the fingerprint minutiae of the adjusted first fingerprint image.
16. The method of claim 15, wherein the step of providing data relating to the pre-registered, minutiae reference template includes any one or more of the following steps:
providing data relating to a second captured image of the second fingerprint for use as a reference image;
generating data relating to at least one singular point of the second fingerprint;
adjusting either the orientation or location or both the orientation and location of the fingerprint by utilizing the data relating to the at least one singular point of the fingerprint; and
generating data relating to fingerprint minutiae of the adjusted second fingerprint.
17. The method of claim 16, including the step of storing, in a database, the generated data relating to the fingerprint minutiae of the adjusted second fingerprint image.
18. The method of claim 17, including any one or more of the following steps:
capturing the first and/or second images of the first and second fingerprints, respectively;
adjusting the contrast of the first and/or second captured images;
separating or segmenting foregrounds of the first and/or second captured images from backgrounds of the first and/or second captured images, respectively;
enhancing ridges of the first and/or second fingerprints;
converting the first and/or second captured images into binary version(s); estimating the frequency(ies) of ridge(s) of the first and/or second captured images; and
reducing the thickness of ridges of the first and/or second fingerprints.
19. The method of claim 18, wherein the data relating to fingerprint minutiae of the adjusted first fingerprint includes any one of the location, direction and category of the minutiae of the first fingerprint.
20. The method of claim 19, comprising presenting the data relating to fingerprint minutiae of the first fingerprint as a pre-registered, minutiae query template.
21. The method of claim 20, wherein the data relating to fingerprint minutiae of the second fingerprint includes any one of the location, direction and category of minutiae of the second fingerprint; and presenting the data relating to fingerprint minutiae of the second fingerprint as a pre-registered, minutiae reference template.
22. A system for matching fingerprint images, the system comprising:
a processor; and
a memory that is connected to the processor, the memory containing instructions which when executed causes the processor to:
provide data relating to a pre-registered, minutiae query template of a first, query fingerprint image and data relating to a pre-registered, minutiae reference template of a second, reference fingerprint image;
compare the data relating to the pre-registered, minutiae query template of the first, query fingerprint image and the data relating to the pre-registered, minutiae reference template of the second, reference fingerprint; and generate data relating to a match score based on the number of fingerprint minutiae of the pre-registered, minutiae query template that matches the fingerprint minutiae of the pre-registered, minutiae reference template of the second, reference fingerprint image.
23. The system of claim 22, comprising a database on which is stored the pre-registered, minutiae query template of a first, query fingerprint image and a pre-registered, minutiae reference template of a second, reference fingerprint image.
24. A non-transitory computer-readable device storing instructions thereon which when executed by a processor of a computing device performs the functions of: providing data relating to a pre-registered, minutiae query template of a first, query fingerprint image;
providing data relating to a pre-registered, minutiae reference template of a second, reference fingerprint image;
comparing the data relating to the pre-registered, minutiae query template and the pre-registered, minutiae reference template; and
generating data relating to a match score based on the number of fingerprint minutiae of the pre-registered, minutiae query template that matches the fingerprint minutiae of the pre-registered, minutiae reference template of the second, reference fingerprint image.
PCT/IB2018/055500 2017-07-24 2018-07-24 Method of and system for matching fingerprint iimages WO2019021177A1 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
ZA2020/01146A ZA202001146B (en) 2017-07-24 2020-02-24 Method of and system for matching fingerprint images

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
ZA2017/05007 2017-07-24
ZA201705007 2017-07-24

Publications (1)

Publication Number Publication Date
WO2019021177A1 true WO2019021177A1 (en) 2019-01-31

Family

ID=65040434

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/IB2018/055500 WO2019021177A1 (en) 2017-07-24 2018-07-24 Method of and system for matching fingerprint iimages

Country Status (2)

Country Link
WO (1) WO2019021177A1 (en)
ZA (1) ZA202001146B (en)

Patent Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP1227429B1 (en) * 2001-01-29 2006-08-02 Nec Corporation Fingerprint identification system and method
CN103077377B (en) * 2012-12-31 2015-07-29 清华大学 Based on the fingerprint correction method of field of direction distribution

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
ELLINGSGAARD ET AL.: "Fingerprint Alteration Detection", 30 June 2013 (2013-06-30), pages FP - 116, XP055566888, Retrieved from the Internet <URL:http://www2.imm.dtu.dk/pubdb/views/edoc_download.php/6601/pdf/imm6601.pdf> [retrieved on 20181016] *
QI ET AL.: "Fingerprint matching combining the global orientation field with minutia", PATTERN RECOGNITION LETTERS, vol. 26, 17 June 2005 (2005-06-17), pages 2424 - 2430, XP005097873, Retrieved from the Internet <URL:http://www.nlpr.ia.ac.cn/2005papers/gjkw/PR%20letters-wangyangsheng.pdf> [retrieved on 20181019] *

Also Published As

Publication number Publication date
ZA202001146B (en) 2022-05-25

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 18839435

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 18839435

Country of ref document: EP

Kind code of ref document: A1
