WO1993007584A1 - Method and system for detecting features of fingerprint in gray level image - Google Patents


Publication number
WO1993007584A1
Authority
WO
WIPO (PCT)
Prior art keywords
image
point
direction
fingerprint
Prior art date
Application number
PCT/US1992/008446
Other languages
French (fr)
Inventor
Xuening Shen
Original Assignee
Cogent Systems, Inc.
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Priority to US772,393 priority Critical
Priority to US77239391A priority
Application filed by Cogent Systems, Inc. filed Critical Cogent Systems, Inc.
Publication of WO1993007584A1 publication Critical patent/WO1993007584A1/en


Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING; COUNTING
    • G06KRECOGNITION OF DATA; PRESENTATION OF DATA; RECORD CARRIERS; HANDLING RECORD CARRIERS
    • G06K9/00Methods or arrangements for reading or recognising printed or written characters or for recognising patterns, e.g. fingerprints
    • G06K9/03Detection or correction of errors, e.g. by rescanning the pattern
    • G06K9/036Evaluation of quality of acquired pattern
    • GPHYSICS
    • G06COMPUTING; CALCULATING; COUNTING
    • G06KRECOGNITION OF DATA; PRESENTATION OF DATA; RECORD CARRIERS; HANDLING RECORD CARRIERS
    • G06K9/00Methods or arrangements for reading or recognising printed or written characters or for recognising patterns, e.g. fingerprints
    • G06K9/00006Acquiring or recognising fingerprints or palmprints
    • G06K9/00067Preprocessing; Feature extraction (minutiae)
    • GPHYSICS
    • G07CHECKING-DEVICES
    • G07CTIME OR ATTENDANCE REGISTERS; REGISTERING OR INDICATING THE WORKING OF MACHINES; GENERATING RANDOM NUMBERS; VOTING OR LOTTERY APPARATUS; ARRANGEMENTS, SYSTEMS OR APPARATUS FOR CHECKING NOT PROVIDED FOR ELSEWHERE
    • G07C9/00Individual entry or exit registers
    • G07C9/00126Access control not involving the use of a pass
    • G07C9/00134Access control not involving the use of a pass in combination with an identity-check
    • G07C9/00158Access control not involving the use of a pass in combination with an identity-check by means of a personal physical data

Abstract

A method and automatic system (Fig. 1) for extracting both ordinary and unique features of a stripe pattern such as a fingerprint from a gray level image without binary processing. A precise direction array (Fig. 3) with the same number of image points as the image is generated by calculating the average direction of local ridges for every point in the image with a quick recurrent algorithm. A curvature array is also generated, in which each element represents the accuracy of the local average direction at the corresponding point in the direction array and image. A region of clear ridges is extracted from background and noise before detecting image features. The ridge trends (Figs. 8a-8e) and forkedness of a point are decided by analyzing the distribution of ridge directions on a circle around the point. For finding the cores and deltas, trend analysis is used for each singularity, that is, each maximum point on the curvature array. The coordinate axis, which is consistent for various fingerprint types, is decided by analyzing the structure of the direction array macroscopically.

Description

METHOD AND SYSTEM FOR DETECTING FEATURES OF FINGERPRINT IN GRAY LEVEL IMAGE

1. BACKGROUND OF THE INVENTION

1.1 Field Of The Invention

This invention relates to the automatic detection of both the common features (i.e. cores, deltas and minutiae) and the unique features (shape and global features) of a fingerprint by processing a gray level image of the fingerprint.

1.2 Description Of The Prior Art

The history of identifying and verifying individuality according to dermatoglyphic features is very long. Chinese people printed their palms and fingers on documents and contracts as credentials as early as the seventh century A.D. Although there are many other methods for identifying individuals today, fingerprint identification is still the most widespread and credible. However, since fingerprints became a legal identifier of persons about one hundred years ago, the number of fingerprint records has grown very quickly and manual management of the files has become very difficult. As a result, many automatic and semi-automatic systems for processing, recognizing, searching, and identifying fingerprints have been proposed.

The most widely used method for detecting features in many present automatic fingerprint identification systems is based upon binary image processing. A binary image is one in which each image element has one of only two binary values, e.g. 0 or 1. The key procedures in such processing are image enhancing, binarizing, thinning, smoothing and modifying. The minutiae of a fingerprint are detected by scanning the thinned binary image with a 3×3 window. Usually, the cores and deltas of fingerprints are also detected by scanning a binary image. A core exists where one or more lines of a fingerprint form a closed path, frequently a circle, or undergo an abrupt 180° direction change. A delta, as used herein, exists when three ridge lines meet at a common point. A delta may be more accurately referred to as a Y.

In U.S. Pat. No. 4,083,035, an apparatus is provided for detecting the position (X and Y) and orientation angle (θ) of minutiae in a binary data bit stream of a 256×256 thinned image. The minutia orientation detector obtains an 8-bit vector average of all local angles present in each of a plurality of 8x8 bit windows across the image. This vector average of all of the local angles within a given 8x8 bit window is the orientation angle θ for each minutia that is positioned within that given 8x8 bit window. There are 32×32 such windows on the image, i.e. a 32x32 ridge orientation array will be generated.

In U.S. Pat. No. 4,310,827, the minutia direction of an ending is defined as the direction of a single direction vector drawn from the ending to an arrival point, i.e. a skeleton point located by tracing a predetermined accurate length from the ending. The direction of a bifurcation is defined by a direction symmetrical to the average vector direction of three arrival points.

In U.S. Pat. No. 4,156,230, a 7x7 template scanning window is passed electronically over a 29x29 sub-array of the 32x32 ridge contour data as in U.S. Pat. No. 4,083,035 to generate a set of correlation values corresponding to each contour data element and to a plurality of reference angle vectors. The correlation values are processed for determination of peaks and valleys. The resultant data, representing the number of correlation peaks and the direction of each, provides 32 values which define the location and angular orientation of cores and deltas of a fingerprint.

In U.S. Pat. No. 4,151,512, the topological data, identifying singularity points such as tri-radii (i.e. deltas) and cores, as well as ridge flow line tracings related to those points, are extracted from a 32×32 ridge contour array as in U.S. Pat. No. 4,083,035. Subsequent to making the first cell tracing in any one direction from a tri-radius or core point, the information from the ridge contour array is used to supply additional angle data to continue each trace. Some logic circuits determine the next row and column address incremental values according to a specification chart. The maximum length of a trace is 48 cells on the ridge contour array. Based upon the number of singularities located, an initial classification can be made wherein an arch is identified if no tri-radii are located, a whorl may be identified if two tri-radii are located and a general loop type may be identified if one tri-radius is located. The loop type pattern is classified according to the direction and size of the flow tracings by comparing them with a set of prestored references.

The following publications are also of relevance to the present invention:

Shen, "Several local properties of digital picture and their applications to the extraction of descriptive information of fingerprints", Acta Scientiarum Naturalium, Universitatis Pekinensis, No. 3, 1986, pp. 38-51. (In Chinese).

Shen, "The digital pseudo-curvature and its applications", Applied Mathematics, Sept. 1988, No. 3, Vol. 3, pp. 382-391. (In Chinese).

Shen et al., "A Similarity Measurement and Classification of Fingerprint", Proc. of 4th Chinese Conf. on Pattern Recognition and Machine Intelligence, 1984. (In Chinese).

1.3 Problems In The Prior Art

The problems listed below relate to the manner of identifying, or designating, cores, deltas and the shapes of fingerprints based upon ridge directions in the patents cited above.

There are two problems in calculating the ridge direction array: (1) Use of the same ridge direction value for every point in an 8×8 window will produce serious errors when the window is in a region where ridges curve significantly. (2) There is no measurement provided for representing the accuracy of each average ridge direction.

In U.S. Pat. No. 4,310,827, the direction of a minutia depends on the arrival points. So the direction will be affected if any arrival point cannot be found or if the skeleton ridges are not smooth enough.

There are four problems in analyzing ridge trends: (1) A 7×7 window in a ridge contour array, i.e. a 56×56 window in the original image, is too large to find the cores and deltas of small whorls or loops. (2) The fixed window size is not suitable for various types of cores and deltas. (3) The angular orientation with 32 values, as well as cores and deltas with 29×29 positions, are not accurate enough. (4) As many as 841 (=29×29) elements have to be analyzed for every fingerprint.

There are three problems in ridge flow tracing: (1) Each step in ridge flow tracing passes 8 points because every element in the ridge contour array refers to an 8×8 region in the image. This is too large for tracing at regions where ridges curve significantly. (2) The errors of position and direction are not accumulated to correct the trace. (3) The next step may be wrong when a core, delta or noise region is touched in tracing.

There are three problems in classification: (1) The initial classification based upon the number of deltas may be wrong if an existing delta cannot be found. (2) The loop sub-classification by comparing the rough flow tracing is sensitive to the initial fingerprint impression. (3) There is no sub-classification for whorls.

Finally, the main factor which affects the accuracy of fingerprint features extracted by binary processing is that much of the original information in a gray level image of the fingerprint may be lost after binarizing.

2. SUMMARY OF THE INVENTION

2.1 Objects Of The Invention

It is therefore a main object of the present invention to extract cores, deltas, minutiae, and shape and global features of a fingerprint from a gray level image using as much of the original information as possible.

It is a more specific object of this invention to provide a quick algorithm for calculating the average direction of local ridges at every point of a fingerprint and for generating a precise direction array.

Another object of the invention is to provide an easily calculated measurement, termed local curvature, representing the accuracy of each local direction.

Another object of the invention is to provide a method for separating a region of clear ridges from background and noise in the image.

Another object of the invention is to provide a method for analyzing the ridge flow trends around any point in the image to decide its trend directions and forkedness.

Another object of the invention is to find the cores and deltas of a fingerprint by analyzing the trends only for each singularity of the image rather than analyzing the trends for every point of a direction array.

Another object of the invention is to locate the position of the center and central orientation of a plain arch of a fingerprint.

Another object of the invention is to establish a coordinate axis of any fingerprint that is consistent for various types and shapes of fingerprints.

Another object of the invention is to accurately trace shape lines, contour lines and normal lines of a fingerprint.

Another object of the invention is to classify fingerprints according to the structural relations among shape lines.

Another object of the invention is to extract shape features from the shape lines that are consistent for both whorls and loops, and to further classify fingerprints according to the shape features.

Another object of the invention is to extract global features of any fingerprint, including plain arch, however imperfect or partial it is and whatever type or shape it has.

Another object of the invention is to calculate the global difference between two fingerprints to finely classify and distinguish them.

Another object of the invention is to detect minutiae and their attributes from gray level images of fingerprints.

Another object of the invention is to calculate both the quality level and vector of fingerprints with regard to several aspects, for example noise level, area of clear region, position of center, number of minutiae, etc.

2.2 BRIEF DESCRIPTION OF THE DRAWING

FIG. 1 is a processing flow diagram showing the sequence of basic steps characterizing the present invention.

FIG. 2 shows a point of an image and its neighborhood used to explain the calculation of local ridge direction and curvature according to the invention.

FIG. 3 shows a point and its four adjacent points for calculating four gradient models.

FIGS. 4a-4e show a symmetric convex region and its four subsets for calculating four average gradient models.

FIGS. 5a and 5b show the neighborhoods of two adjacent points and their common area as well as parts of the neighborhoods of two adjacent points and their common points.

FIGS. 6a-6d show various octagonal regions of clear ridges in a fingerprint image.

FIGS. 7a-7d show various types of fingerprint core patterns.

FIG. 7e shows a typical fingerprint delta pattern.

FIGS. 8a-8e show the ridge trends of various singularities and analysis circles.

FIGS. 9a-9e show the difference values used in trend analysis.

FIGS. 10a-10r show the shape lines of various fingerprints for 18 classes of shapes.

FIG. 11 shows the macroscopic structure of peripheral ridges of a fingerprint.

FIG. 12 shows a vault line and normal lines for locating the center of the coordinate axes of a fingerprint.

FIG. 13 shows a manner of determining the central orientation of a plain arch.

FIG. 14 illustrates the extraction of shape features on shape lines.

FIG. 15 illustrates the extraction of global features of a fingerprint.

FIGS. 16a-16e show various minutiae.

FIGS. 17a-17d show the basic features of minutiae in terms of a ridge or valley respectively near a core or delta.

FIGS. 18a and 18b show two neighboring points in a tracing.

FIGS. 19-24 are pictorial views illustrating various tracing operations according to the invention on gray level fingerprint images, where FIGS. 19, 20 and 21 depict the tracing of lines for locating the center and center orientation of fingerprint patterns containing a whorl, a loop and an arch, respectively; and FIGS. 22, 23 and 24 depict the detection of global features for an arch, a loop and a whorl, respectively, according to the invention on the basis of the local directions of ridge lines of the fingerprint image patterns at points arranged along concentric circles.

FIG. 25 is a pictorial view illustrating extraction of minutiae from the region of an arch in a gray level fingerprint image.

2.3 GENERAL DEFINITIONS

There are many constants, parameters, variables and functions used herein. Some of these are defined in the C programming language as follows:

x[ ] means an array named by x; it can be defined as a set in which

x[ ] = {x[0], ..., x[m-1]},

where m>0.

x[ ][ ] means a two dimensional array, or matrix; it can be defined as a set in which

x[ ][ ] = {x[0][0], ..., x[m-1][n-1]},

where m>0, n>0. When each member of such a matrix is of the type x[i][j], then, for each member, i is called the line, or row, number and j is called the column number.

x = C ? y : z; means that if condition C is true, then x=y; else x=z.

sign (x) = (x <0)? -1 : 1;

int(x) means the largest integer that is not greater than x. Therefore, if x>0, then int(x+0.5) means the integer nearest to x.

x%y means the remainder, that is not smaller than 0, when x is divided by y.

(x,y) means a digital point at coordinates x and y of a plane. It can also mean, in the appropriate context, a vector from origin point (0,0) to (x,y).

dv(X) means the direction of a vector X, which direction is in the range [0, 2·π).

#A means the cardinality, or the number of members, of set A.

Σ(y(X), X, A) means the sum of the values y(X) for every member X in set A.

|x| means the modulus or absolute value of x. If x is a number, then

|x| = (x<0)? -x : x;

if x = (x1, ..., xn) is a vector, then

|x| = (x1^2 + ... + xn^2)^(1/2).

p_x means a parameter which may be predetermined or calculated for use in performing computations according to the invention. The range of each predetermined parameter that may be used in preferred embodiments of the invention is listed below.

π = 3.14159;

p_π = 252;

3. DESCRIPTION OF THE PREFERRED EMBODIMENTS

3.1 General Procedure

Referring to FIG. 1, the general procedure of this invention for extracting as many features as possible of a fingerprint from a gray level image without binary processing is shown. The significant steps in an exemplary method according to the invention are:

1) Input and Pre-processing

The input digital image has L rows and K columns of image elements, as shown in FIG. 1. The intensity, or brightness level, of each image element has from 3 to 1024, and preferably 256, gray levels, and the image element density is 500 dpi (dots, or image elements, per inch) in each coordinate direction. For further processing, the original range of gray levels of the fingerprint image is transformed into a uniform range.

2) Calculating Direction Array and Curvature Array

A direction array and a curvature array are calculated for each image point by a quick recurrent algorithm, possibly with the aid of some tables. The direction array and curvature array values for an image represent the average ridge direction and its accuracy or variance at each point of the image.

3) Cleaning Background

An equiangular, or regular, octagonal region of clear ridges of the fingerprint is segmented from the background by eight straight lines according to the curvature array, and all large connected regions of noise points within the octagon are excavated. All further processing is within this region.

4) Trend Analyzing

The ridge trends and forkedness of any point can be determined by analyzing the distribution of ridge directions around it on circles with different radii.

5) Finding Cores and Deltas

All singularities are found by scanning the curvature array with a 3×3 window to find all maximum curvature points. All cores and deltas are located by analyzing the trends for each singularity.
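As an illustrative sketch of the 3×3 maximum scan described above (hypothetical array size and function name; the p_v thresholding and noise handling of the full method are omitted):

```c
#define H 5
#define W 5

/* Scan a curvature array c[][] with a 3x3 window: a singularity is a
   point whose curvature is strictly greater than that of all eight of
   its neighbors. Up to max_out positions are stored in (out_l, out_k);
   the number of singularities found is returned. */
int find_singularities(const int c[H][W], int out_l[], int out_k[], int max_out) {
    int n = 0;
    for (int l = 1; l < H - 1; l++)
        for (int k = 1; k < W - 1; k++) {
            int is_max = 1;
            for (int dl = -1; dl <= 1 && is_max; dl++)
                for (int dk = -1; dk <= 1; dk++)
                    if ((dl || dk) && c[l + dl][k + dk] >= c[l][k]) {
                        is_max = 0;
                        break;
                    }
            if (is_max && n < max_out) {
                out_l[n] = l;
                out_k[n] = k;
                n++;
            }
        }
    return n;
}
```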

6) Line Tracing

Contour lines, shape lines and normal lines of a fingerprint are traced accurately, based on the direction array, by accumulating the errors of coordinates and directions to correct the trace point by point.

7) Deciding the Coordinate Axis

By macroscopically analyzing the structure of a direction array, the fingerprint is centered on a coordinate axis system; meanwhile the central orientation of the fingerprint is selected on the basis of the trend at the center.

8) Classification

A fingerprint is classified into one of 18 classes according to the structural relations among the shape lines.

9) Extracting Shape and Global Features

For classifying and fast searching, a few shape features are extracted from the shape lines, consistently for the various fingerprint classes except plain arch. Furthermore, a plurality of global features are extracted consistently for the various fingerprint classes with reference to the coordinate axes.

10) Detecting Minutiae

Based on the description of minutiae when the shape lines are valleys, all minutiae are detected by tracing each valley of the fingerprint in the gray level image. Each minutia is represented by its x, y coordinates and direction θ.

11) Quality Checking

A synthetic quality level and quality vector of fingerprints is presented based on the position of the center, number of minutiae, noise level, area of clear region, etc. in order to decide automatically, or to suggest to the operator, whether to accept, reject or reevaluate the fingerprint.

3.2 Direction Array and Curvature Array

All features of the fingerprint, including ordinary features (cores, deltas, minutiae) or novel features (shape and global features) are referenced to the average direction of ridges in a small region of the fingerprint. The calculation of local direction is very important for image processing and feature extraction of fingerprints. In this invention, an array whose every element represents an average direction of the textures in a small region of the image is called a direction array.

There are three features of the method provided herein for calculating a precise direction array:

First, the direction array is calculated directly from the gray level image of the fingerprint, so the original information will be used as much as possible.

Second, the direction array is calculated point by point in the image, i.e. every element in the array is a local average direction of just one point in the image, calculated on the basis of a neighborhood of the point. This is especially necessary for regions of the fingerprint where the directions of ridges change greatly or the curvatures are very high.

Third, to represent the accuracy of the local direction at each point, a local average curvature of the textures in the neighborhood of the same point is also calculated, and a curvature array of the image is generated therefrom. The usage of the local direction at a point should refer to the local curvature, as the relation between an average value and its variance. Some regions of a fingerprint, for example at a core, delta, scar or noise, will have very high curvature values, which signify that the local direction is meaningless there due to inconsistency of ridge directions and/or indistinct textures in that region. Generally, lower curvature values mean better accuracy of the direction value at a point.

For a given image area S, which may include the entire fingerprint image or any selected portion thereof, there are derived four summation gradient values vi (i=1, 2, 3, 4), each representing the sum of the absolute values of the difference, gi(X), in gray scale image point values, f(X), between each pair of points, X and X-Qi, in S for which Qi is a selected vector. Thus, as will be seen from the example to be described, depending on the value of Qi, vi is representative of the degree of change in the image across the image area in direction Qi. Moreover, vi will be particularly relevant to the point P at the center of S because there is the greatest probability that it is the gradient condition at P which is described by vi .

Table 1, below, provides an exemplary gray scale value matrix representing the gray values f(X) at respective points, X, of an image area S. Here, each point X has a horizontal coordinate k and a vertical coordinate l. As is apparent from Table 1, the origin of the coordinate system lies somewhere above and to the left of the illustrated image area.

Each point X is represented by a pair of coordinates k, l. In Table 1, the k and l coordinates at the center of S are n and m.

Table 1

  l\k  n-5 n-4 n-3 n-2 n-1  n  n+1 n+2 n+3 n+4 n+5
  m-5   13  14  15  15  15  14  12  11   9   7   6
  m-4   14  14  14  13  12  10   6   4   2   2   3
  m-3   13  11   9   6   6   4   3   3   4   6   8
  m-2   10   7   4   1   2   2   3   4   6   9  11
  m-1    5   4   3   0   2   4   6   8  11  13  14
  m      3   4   5   6   8   9  11  12  14  15  15
  m+1    4   6   9  10  12  14  15  15  15  14  12
  m+2    7   9  12  13  14  14  15  14  13  10   8
  m+3   13  14  14  14  14  13  10   8   8   6   4
  m+4   15  15  14  13  11   8   3   2   5   5   5
  m+5   14  12  10   8   7   4   0   1   5   7   8

For each vector Qi there is produced a set Si containing all points X for calculating gi(X) = |f(X) - f(X-Qi)|. The points in each set Si are obtained by using the corresponding values for Qi, i.e., S1 is obtained by using Q1, S2 by using Q2, etc. The values for gi(X) in each set Si are given in Tables 2-5, below, where the Qi have the following k,l coordinate values:

Q1 = (1,0);

Q2 = (1,1);

Q3 = (0,1);

Q4 = (-1,1).

Table 2

  S1: g1(X) = |f(X) - f(X-Q1)|
  l\k  n-4 n-3 n-2 n-1  n  n+1 n+2 n+3 n+4 n+5
  m-5    1   1   0   0   1   2   1   2   2   1
  m-4    0   0   1   1   2   4   2   2   0   1
  m-3    2   2   3   0   2   1   0   1   2   2
  m-2    3   3   3   1   0   1   1   2   3   2
  m-1    1   1   3   2   2   2   2   3   2   1
  m      1   1   1   2   1   2   1   2   1   0
  m+1    2   3   1   2   2   1   0   0   1   2
  m+2    2   3   1   1   0   1   1   1   3   2
  m+3    1   0   0   0   1   3   2   0   2   2
  m+4    0   1   1   2   3   5   1   3   0   0
  m+5    2   2   2   1   3   4   1   4   2   1

Table 3

  S2: g2(X) = |f(X) - f(X-Q2)|
  l\k  n-4 n-3 n-2 n-1  n  n+1 n+2 n+3 n+4 n+5
  m-4    1   0   2   3   5   8   8   9   7   4
  m-3    3   5   8   7   8   7   3   0   4   6
  m-2    6   7   8   4   4   1   1   3   5   5
  m-1    6   4   4   1   2   4   5   7   7   5
  m      1   1   3   8   7   7   6   6   4   2
  m+1    3   5   5   6   6   6   4   3   0   3
  m+2    5   6   4   4   2   1   1   2   5   6
  m+3    7   5   2   1   1   4   7   6   7   6
  m+4    2   0   1   3   6  10   8   3   3   1
  m+5    3   5   6   6   7   8   2   3   2   3

Table 4

  S3: g3(X) = |f(X) - f(X-Q3)|
  l\k  n-5 n-4 n-3 n-2 n-1  n  n+1 n+2 n+3 n+4 n+5
  m-4    1   0   1   2   3   4   6   7   7   5   3
  m-3    1   3   5   7   6   6   3   1   2   4   5
  m-2    3   4   5   5   4   2   0   1   2   3   3
  m-1    5   3   1   1   0   2   3   4   5   4   3
  m      2   0   2   6   6   5   5   4   3   2   1
  m+1    1   2   4   4   4   5   4   3   1   1   3
  m+2    3   3   3   3   2   0   0   1   2   4   4
  m+3    6   5   2   1   0   1   5   6   5   4   4
  m+4    2   1   0   1   3   5   7   6   3   1   1
  m+5    1   3   4   5   4   4   3   1   0   2   3

Table 5

  S4: g4(X) = |f(X) - f(X-Q4)|
  l\k  n-5 n-4 n-3 n-2 n-1  n  n+1 n+2 n+3 n+4
  m-4    0   1   1   2   2   2   5   5   5   4
  m-3    1   3   4   6   4   2   1   1   2   3
  m-2    1   2   2   5   2   1   0   0   0   1
  m-1    2   0   2   2   0   1   2   2   2   2
  m      1   1   5   4   4   3   3   1   1   1
  m+1    0   1   3   2   3   3   3   1   0   1
  m+2    1   0   2   1   0   1   0   1   1   2
  m+3    4   2   1   0   0   2   4   5   2   2
  m+4    1   1   0   1   2   2   5   6   1   1
  m+5    1   2   3   3   1   1   2   4   0   2

The number of values for gi(X) in each set Si is less than the number of points in area S, because each value of gi(X) is calculated only for a pair of values f(X) and f(X-Qi) which are both in area S. Thus, for example, in set S1 there will be no value of g1(X) for which the k coordinate of X is n-5, because the k coordinate of X-Q1 is then n-6, which is outside of area S. Then, for each set Si, there is derived a value vi, where vi = Σ gi(X) for all values of gi(X) in set Si. Where i = 1, 2, 3, 4, the values for vi in the example shown in Tables 1-5 are v1=167; v2=437; v3=337 and v4=194.

From these values for vi, there will be derived four further values ui, as follows:

u1 = max (v1, v3);

u2 = max (v2, v4);

u3 = min (v1, v3);

u4 = min (v2, v4).

In the case of the example in Tables 1-5, u1=337; u2=437; u3=167; u4=194.
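The example values can be checked directly. A sketch in C (illustrative names; the 11×11 matrix is the sample area S of Table 1, and the expected sums are those given in the text):

```c
#include <stdlib.h>

#define N 11

/* Gray values f(X) from Table 1; rows are l = m-5..m+5, columns k = n-5..n+5. */
static const int f[N][N] = {
    {13, 14, 15, 15, 15, 14, 12, 11,  9,  7,  6},
    {14, 14, 14, 13, 12, 10,  6,  4,  2,  2,  3},
    {13, 11,  9,  6,  6,  4,  3,  3,  4,  6,  8},
    {10,  7,  4,  1,  2,  2,  3,  4,  6,  9, 11},
    { 5,  4,  3,  0,  2,  4,  6,  8, 11, 13, 14},
    { 3,  4,  5,  6,  8,  9, 11, 12, 14, 15, 15},
    { 4,  6,  9, 10, 12, 14, 15, 15, 15, 14, 12},
    { 7,  9, 12, 13, 14, 14, 15, 14, 13, 10,  8},
    {13, 14, 14, 14, 14, 13, 10,  8,  8,  6,  4},
    {15, 15, 14, 13, 11,  8,  3,  2,  5,  5,  5},
    {14, 12, 10,  8,  7,  4,  0,  1,  5,  7,  8},
};

/* Q1..Q4 as (dk, dl): (1,0), (1,1), (0,1), (-1,1). */
static const int Q[4][2] = {{1, 0}, {1, 1}, {0, 1}, {-1, 1}};

/* v[i] = sum of |f(X) - f(X-Qi)| over all X with both points inside S.
   For the data above this yields v = {167, 437, 337, 194}. */
void gradient_sums(int v[4]) {
    for (int i = 0; i < 4; i++) {
        v[i] = 0;
        for (int l = 0; l < N; l++)
            for (int k = 0; k < N; k++) {
                int k2 = k - Q[i][0], l2 = l - Q[i][1];
                if (k2 >= 0 && k2 < N && l2 >= 0 && l2 < N)
                    v[i] += abs(f[l][k] - f[l2][k2]);
            }
    }
}
```

Taking the maxima and minima of (v1, v3) and (v2, v4) then reproduces u1=337, u2=437, u3=167, u4=194.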

These values are then used to derive a value e:

[Equation for e, rendered as an image (imgf000017_0001) in the original.]

For the example shown in Tables 1-5, e = 36.148865.

The derived values for v, u and e are then used to derive a local direction value, d(f, P, S), and a local curvature value, c(f, P, S).

Referring to Figures 2, 3 and 4 of the accompanying DRAWING, according to the 1986 and 1988 publications of Shen, the formulas for calculating the local direction d and curvature c at each point P at the center of a set S in image f are as follows:

d(f,P,S) = sign(v4-v2) · arctan((v1-e)/(v3-e)). (1)

[Equation (2) for the curvature c, rendered as an image (imgf000017_0002) in the original; as used below, c is a function of the ratios u3/u1 and u4/u2.]

In the example shown in Tables 1-5, which example is associated with P(n,m), d = -23.5° and c = 0.0927.

The normal values of direction d are limited between -π/2 (upward in FIG. 2) and +π/2 (downward in FIG. 2). Here d=0 represents the horizontal direction, which points toward the right in Table 1 and FIG. 2. According to Equation (2), the normal values of curvature c are all between 0 and 1.

In particular, c=0 represents a texture with a plain local curvature, and c=1 represents a texture with an abrupt local curvature. Additionally, for each background point or noisy point P, both c and d are set to the special value 255, i.e. c(P)=255 and d(P)=255.

S(P) is a neighborhood of P; it is convex and symmetric with respect to P and with respect to the directions 0° and 45°, respectively. For example, a digital square, disc and octagon with P as center are all neighborhoods of this kind. Each Si is a subset of S obtained by deleting some border points, as shown in FIG. 4, where '.' is in both S and Si and '*' is not in Si (i=1,2,3,4).

Now referring to FIGS. 5a and 5b, because there is tremendous complexity in calculating the direction array and curvature array per point, a quick recurrent algorithm with some tables is proposed to reduce the complexity on the basis of three key points:

First of all, because each gradient model |f(P)-f(P-Qi)| will be used as many times as the number of points whose neighborhood includes both P and P-Qi, four arrays of gradient models for the four directions shown in FIG. 3 are calculated first, denoted gi respectively:

gi(P) = |f(P) - f(P-Qi)|, where i=1, 2, 3, 4. (3)

Second, there are many common points in the neighborhoods of P and its adjacent point P~. So vi(P~) can be calculated recurrently from vi(P) by subtracting gi(X) for each X on the left side L(Si(P)) of Si(P) and adding gi(X) for each X on the right side R(Si(P~)) of Si(P~), as shown in FIG. 5a, where each '*' is in S(P), each '.' is in S(P~) and each 'o' is in both sets:

vi(f,P~,Si(P~)) = vi(f,P,Si(P)) - Σ(gi(X),X,L(Si(P))) + Σ(gi(X),X,R(Si(P~))). (4)

For example, after vi has been calculated for P(n,m), vi may be calculated for P(n+1,m) by subtracting the values of gi(n-4,l) and adding the values of gi(n+6,l), in each case l taking on each value from m-5 to m+5.

Furthermore, as shown in FIG. 5b (where '*' is in L, '.' is in L~ and 'o' is in both of them), Σ(gi(X),X,L~) also can be calculated recurrently:

Σ(gi(X),X,L~) = Σ(gi(X),X,L) - gi(P') + gi(P"). (5)

where P' is the top point in L and P" is the bottom point in L~, as shown in FIG. 5b.
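The recurrent update can be illustrated with a one-dimensional sketch (illustrative names and sizes; Equation (4) applies the same idea per direction i): sliding a window of 2·r+1 columns one step to the right only requires subtracting the column sum that leaves on the left and adding the one that enters on the right.

```c
#define KCOLS 32
#define R 3  /* window half-width: the window covers 2*R+1 columns */

/* Brute-force sum of the column sums g over columns [c-R, c+R]. */
static int window_sum(const int g[KCOLS], int c) {
    int s = 0;
    for (int k = c - R; k <= c + R; k++) s += g[k];
    return s;
}

/* Recurrent update in the spirit of Equation (4): when the window
   center moves from c-1 to c, subtract the column leaving on the left
   and add the column entering on the right. */
static int slide_right(int v, const int g[KCOLS], int c) {
    return v - g[c - R - 1] + g[c + R];
}
```

Each slide costs two additions instead of 2·r+1, which is the source of the speed-up.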

Third, some look-up tables are used instead of a series of calculations for direction d and curvature c.

To calculate the value of d, a table

Td = {Td[0], Td[1], ..., Td[Md]};

is created, where each term Td[k] in Td is defined as

Td[k] = arctan(k/Md), for k = 0, 1, ..., Md, (6)

where Md is a predetermined integer. So the values of the terms in Td are all between 0 and π/4. For any group of vi and e with the condition v1 ≤ v3, let

diff_d = Td[int(Md·((v1-e)/(v3-e))+0.5)] - arctan((v1-e)/(v3-e)).

Then, according to the continuity of the function arctan(), the value |diff_d| will be very small if Md is large enough. So, by the table Td, Equation (1) can be transformed approximately to:

d(P) = sign(v4-v2)·((v1≤v3) ?
    Td[int(Md·((v1-e)/(v3-e))+0.5)] :
    (π/2 - Td[int(Md·((v3-e)/(v1-e))+0.5)])). (1')

The integer Md can be selected large enough to assure sufficient accuracy for d.

To calculate the value of c, another table

Tc = {Tc[0][0], Tc[0][1], ..., Tc[Mc][Mc]};

is created, where each term Tc[i][j] in Tc is defined as

[Definition of Tc[i][j], rendered as an image (imgf000020_0001) in the original.]

where Mc is a predetermined integer. So the values of the terms in Tc are all between 0 and 1. For any group of ui, let

diff_c = Tc[int(Mc·u3/u1+0.5)][int(Mc·u4/u2+0.5)] - [the exact curvature value given by Equation (2), rendered as an image (imgf000020_0002) in the original].

Then according to the continuity of the function in Equation (2), the value |diff_c| will be very small if Mc is large enough. So that by the table Tc, Equation (2) can be transformed approximately to:

c(P) = Tc[int(Mc·u3/u1+0.5)][int(Mc·u4/u2+0.5)].

The integer Mc can be selected large enough to assure sufficient accuracy of the formula.

The look-up tables may be provided, e.g. in a non-volatile addressable memory, where each entry corresponds to a respective value of Td or Tc.

If the neighborhood for calculating the local direction and curvature is a square window with 2·r+1 points as both its length and width, and L and K represent the numbers of rows and columns, respectively, of the image array, then the algorithm is as follows:

<1> Calculate gi

g1[l][k]=|f[l][k]-f[l][k-1]|, for l=0,...,L-1, k=1,...,K-1;

g2[l][k]=|f[l][k]-f[l-1][k-1]|, for l=1,...,L-1, k=1,...,K-1;

g3[l][k]=|f[l][k]-f[l-1][k]|, for l=1,...,L-1, k=0,...,K-1;

g4[l][k]=|f[l][k]-f[l-1][k+1]|, for l=1,...,L-1, k=0,...,K-2;

goto <2>;

<2> Accumulate gi of each column k on a vertical line {(0,k), ..., (2·r,k)}:

w1[k]=∑(g1[l][k], l, {0,1,...,2·r}), for k=1,...,K-1;

w2[k]=∑(g2[l][k], l, {1,2,...,2·r}), for k=1,...,K-1;

w3[k]=∑(g3[l][k], l, {1,2,...,2·r}), for k=0,...,K-1;

w4[k]=∑(g4[l][k], l, {1,2,...,2·r}), for k=0,...,K-2;

where k is the number of a column and l is the number of a row, varying from 0 to 2·r; each wi[k] is a summation of gi over l and is indexed by k.

goto <3>;

<3> Accumulate wi (i=1, 2, 3, 4) in the current window to obtain vi:

v1=∑(w1[j], j, {1,...,2·r});

v2=∑(w2[j], j, {1,...,2·r});

v3=∑(w3[j], j, {0,...,2·r});

v4=∑(w4[j], j, {0,...,2·r-1});

goto <4>;

<4> Calculate the curvature and direction of every point (l,k) in the current window and shift the window to the right:

if (v1+v2+v3+v4<p_v) then

{c[l][k]=255; d[l][k]=255;}

else {c[l][k]=Tc[int(Mc·u3/u1+.5)][int(Mc·u4/u2+.5)];

if (v3==v1) then d[l][k]=sign(v4-v2)·π/4;

else d[l][k]=sign(v4-v2)·((v1≤v3)?

Td[int(Md·(v1-e)/(v3-e)+.5)]:

(π/2-Td[int(Md·(v3-e)/(v1-e)+.5)]));

}

k=k+1;

if(k≥K-r) goto <5>;

v1=v1+w1[k+r]-w1[k-r];

v2=v2+w2[k+r]-w2[k-r];

v3=v3+w3[k+r]-w3[k-r-1];

v4=v4+w4[k+r-1]-w4[k-r-1];

goto <4>;

<5> Put the current window at the beginning of the next line and recalculate wi:

l=1+1;

if (l≥L-r) then return;

w1[k]=w1[k]+g1[l+r][k]-g1[l-r-1][k], for k=1,...,K-1;

w2[k]=w2[k]+g2[l+r][k]-g2[l-r][k], for k=1,...,K-1;

w3[k]=w3[k]+g3[l+r][k]-g3[l-r][k], for k=0,...,K-1;

w4[k]=w4[k]+g4[l+r][k]-g4[l-r][k], for k=0,...,K-2;

p_v is a threshold which is directly proportional to r2, and the condition v1+v2+v3+v4<p_v at a point means that the contrast of gray level in its neighborhood is very low.

The directions at those points are usually ignored, and the curvatures are assigned a special value of 255.

Here each term wi[k] is associated with a column number k. For example, referring again to the values in Tables 1 and 2 above, let r=5; then

w1[n]=∑(g1[l][n], l, {m-5,m-4,...,m+5})=17, and similarly, w1[n+1]=26.

Now an example is provided in Table 6, which is derived from Table 4, for r=2. The values of w3[k] (k=n-5,n-4,...,n+5) are calculated as in Table 6, while Table 7 shows the values of v3 at the points from line m-3 to line m+3 in Table 4.

Table 6

w3    k:  n-5  n-4  n-3  n-2  n-1   n   n+1  n+2  n+3  n+4  n+5
l
m-3       10   10   12   15   13   14   12   13   16   16   14
m-2       11   10   13   19   16   15   11   10   12   13   12
m-1       11    9   12   16   14   14   12   12   11   10   10
m         11    8   10   14   12   12   12   12   11   11   11
m+1       12   10   11   14   12   11   14   14   11   11   12
m+2       12   11    9    9    9   11   16   16   11   10   12
m+3       12   12    9   10    9   10   15   14   10   11   12

For example,

w3[m][n]=w3[m-1][n]-g3[m-2][n]+g3[m+2][n]=14-2+0=12,

Table 7

v3    k:  n-3  n-2  n-1   n   n+1  n+2  n+3
l
m-3       60   64   66   67   68   71   71
m-2       69   73   74   71   64   61   58
m-1       62   65   68   68   63   59   55
m         55   56   60   62   59   58   57
m+1       59   58   62   65   62   61   62
m+2       50   49   54   61   63   64   65
m+3       52   50   53   58   58   60   62

For example,

v3[m-2][n-1]=v3[m-2][n-2]-w3[m-2][n-4]+w3[m-2][n+1]=73-10+11=74.

Thus, with the above algorithm, only four additions and subtractions are needed on average to calculate v3 at each point.
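The recurrences in steps <2> through <5> amount to sliding (box) sums applied first down columns and then across rows. A minimal one-dimensional sketch of the idea, with hypothetical names, is:

```python
def sliding_sums(g, width):
    """Sums of all windows of `width` consecutive samples of g, where each new
    window is obtained with one addition and one subtraction (cf. Equation (5))."""
    s = sum(g[:width])  # the full summation is performed only once
    sums = [s]
    for k in range(width, len(g)):
        s = s + g[k] - g[k - width]  # add the entering sample, drop the leaving one
        sums.append(s)
    return sums
```

Applying the same recurrence in both dimensions gives the stated cost of a few additions and subtractions per point, independent of the window radius r.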

3.3 Removing Background

For most fingerprint images, there are always noisy textures and other features in the background. Before extracting the salient features of the fingerprint, it is necessary to segment a region of clear ridges, or valleys, from background and noise, i.e. to decide the position and boundary of the region of clear ridges. The curvature value of a point in the background is always very high due to low contrast or noise, so the clear region can be obtained by cutting off the points with high curvature values.

Referring to FIGS. 6, the method proposed here locates an equiangular octagonal region of clear ridges of the fingerprint. The eight edges of the octagon are all straight lines with predetermined equispaced angular orientations, e.g. 0°, 45°, 90°, 135°, 180°, 225°, 270° and 315°, respectively.

Referring to Fig. 6a, the edges of an equiangular octagon are labeled E1, ..., E8 respectively, and the corners are assigned the coordinates (x1,y1), ..., (x8,y8) respectively. So each edge can be represented by the equation of a straight line as follows:

E1: x+y=x1+y1; or x+y=x2+y2;

E2: y=y2;

E3: x-y=x3-y3; or x-y=x4-y4;

E4: x=x4;

E5: x+y=x5+y5; or x+y=x6+y6;

E6: y=y6;

E7: x-y=x7-y7; or x-y=x8-y8;

E8: x=x8.

Any equiangular octagon is determined by these eight equations, i.e. by the eight parameters {x2, y2, x4, y4, x6, y6, x8, y8}. The octagon is obtained by cutting the image with eight lines sequentially according to the curvature array, and is described by only 8 bytes giving the positions of the eight lines.
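Given the eight parameters, membership of a point in the octagon can be tested directly from the edge equations. The sketch below is illustrative: the interior orientation of each 45° cut is inferred from the corner coordinates (it is not stated explicitly in the text), and the function name is hypothetical:

```python
def in_octagon(p, x2, y2, x4, y4, x6, y6, x8, y8):
    """Test whether point p=(x, y) lies inside the equiangular octagon whose
    edges satisfy the eight line equations E1..E8 above (x4 < x8, y6 < y2)."""
    x, y = p
    return (x4 <= x <= x8 and y6 <= y <= y2 and  # between E4/E8 and E6/E2
            x + y <= x2 + y2 and                 # E1 cuts the (x8, y2) corner
            x - y >= x4 - y4 and                 # E3 cuts the (x4, y2) corner
            x + y >= x6 + y6 and                 # E5 cuts the (x4, y6) corner
            x - y <= x8 - y8)                    # E7 cuts the (x8, y6) corner
```

Because every edge is axis-aligned or at 45°, the test needs only comparisons of x, y, x+y and x-y, which is why 8 bytes suffice to describe the region.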

There are several shapes of octagons shown in FIGS.6. The algorithm to locate eight edges is as follows:

<1> Set the original edges of clear region such that

x4=r; x8=L-r-1; y6=r; y2=K-r-1;

Calculate the average curvature of every row and column in the current clear region, and store them in the two arrays acl[ ] and ack[ ] respectively, i.e.

acl[y]=∑(c[y][x], x, {x4,x4+1,...,x8}),

for y=y6, y6+1, ..., y2;

ack[x]=∑(c[y][x], y, {y6,y6+1,...,y2}),

for x=x4, x4+1, ..., x8;

let n=0;

<2> if (n>p_n1) then goto <5>;

where p_n1 is a predetermined limit, i.e. the image is cut at most p_n1 times.

Let y0 be the number, or coordinate, of the row of the current array with the minimum average curvature value, i.e.

acl[y0]=min(acl[y6], ..., acl[y2]);

and similarly let x0 satisfy

ack[x0]=min(ack[x4], ..., ack[x8]).

if (ack[x0]/(y2-y6+1) ≥ acl[y0]/(x8-x4+1)) then

goto <3>;

else goto <4>;

<3> Cut the area by horizontal lines, i.e. determine edges E2 and E6. Let

p_c_y=p_c_p·∑(acl[y], y, {y6,...,y2})/(y2-y6+1);

l1=max{y | (acl[y] > p_c_y) & (y≤y0) & (y≥y6)};

l2=min{y | (acl[y] > p_c_y) & (y≥y0) & (y≤y2)};

i.e. l1 is the largest row number satisfying l1≤y0, l1≥y6 and acl[l1]>p_c_y; and l2 is the smallest row number satisfying l2≥y0, l2≤y2 and acl[l2]>p_c_y. Then recalculate y2, y6 and ack[ ] in the current clear region as follows,

y2=l2; y6=l1;

ack[x]=∑(c[y][x], y, {y6, y6+1,...,y2});

for x=x4, x4+1, ...,x8.

let n=n+1;

goto <2>;

where p_c_p is a predetermined parameter for calculating the parameters p_c_x and p_c_y.

<4> Cut area by vertical lines, i.e. determine edges E4 and

E8. Let

p_c_x=p_c_p·∑(ack[i], i, {x4,...,x8})/(x8-x4+1);

k1=max{x | ack[x]>p_c_x & x≤x0 & x≥x4};

k2=min{x | ack[x]>p_c_x & x≥x0 & x≤x8};

i.e. k1 is the largest column number satisfying k1≤x0, k1≥x4 and ack[k1]>p_c_x; and k2 is the smallest column number satisfying k2≥x0, k2≤x8 and ack[k2]>p_c_x. Then recalculate x8, x4 and acl[ ] in the current clear region as follows,

x8=k2; x4=k1;

acl[y]=∑(c[y][x], x, {x4,x4+1,...,x8}),

for y=y6, y6+1, ..., y2;

let n=n+1;

goto <2>;

<5> Cut area by hypotenuse lines, i.e. determine the edges E1, E3, E5 and E7 of the octagon enclosing the clear region or determine the numbers x2, y4, x6 and y8. Let

p_c_z=(p_c_x + p_c_y)/2;

then the four numbers can be calculated as follows,

x2=min{z | (ac3[z]>p_c_z) & (z≤x8) & (z>(x4+x8)/2)};

where for z=x8, x8-1, ..., (x4+x8)/2,

ac3[z] =∑(c[l][k], (k,l), A2(z));

where A2(z) = {(y2,z), (y2-1,z+1), ..., (y2-x8+z,x8)}.

x6 = max{z | (ac3[z]>p_c_z) & (z≥x4) & (z≤(x4+x8)/2)}; where for z=x4, x4+1, ..., (x4+x8)/2,

ac3[z]=∑(c[l][k], (k,l), A6(z));

where A6(z) = {(y6,z), (y6+1,z-1), ..., (y6-x4+z,x4)}.

y4 = min{z | (ac3[z]>p_c_z) & (z≤y2) & (z>(y2+y6)/2)};

where for z=y2, y2-1, ..., (y2+y6)/2,

ac3[z]=∑(c[l][k], (k,l), A4(z));

where A4(z) = {(y2,z), (y2-1,z-1), ..., (y6+x4-z,x4)}.

y8=max{z | (ac3[z]>p_c_z) & (z≥y6) & (z<(y2+y6)/2)};

where for z=y6, y6+1, ..., (y2+y6)/2,

ac3[z] =∑(c[l][k], (k,l), A8(z));

and A8(z)={(y6,z), (y6+1,z+1), ..., (y6+x8-z,x8)}.

<6> After the octagon is obtained, the curvatures of the points in the background are set to 255 to distinguish them from the points in the clear area.

3.4 Locating Singularities And Analyzing Trends

Now referring to FIGS. 7, there are three types of core in a fingerprint, according to the structure of the ridges around them, named 'o' (FIG. 7a), 'n' (FIG. 7b), and 'u' (FIG. 7c), respectively. An 'o' core may appear in a whorl, an 'n' may appear in a whorl, a double loop or a loop, while a 'u' may appear in a whorl, a double loop or a nodding loop. However, any core or delta is a point such that the directions of the ridges around it are very inconsistent, so its curvature is very high and may be higher than the curvatures of its neighboring points. Visually, there are always many bright points on the curvature array of a fingerprint; some of them indicate the position of a core or delta, while others indicate a scar, fold or noise.

According to the publications of Shen in 1986, a point on a digital image is called a singularity if its curvature value is a local maximum, greater than a threshold, among its 8 neighboring points, which form the corners and edge midpoints of a square centered on the point.

A point P=(k,l) is called a singularity if its curvature is not less than that of its 8 neighboring points and not less than a predetermined threshold p_c1, i.e.

1. (c(P) ≥ c(P+Qi)) & (c(P) ≥ c(P-Qi)), (i=1,2,3,4);

2. c(P) ≥ p_c1.
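A direct sketch of this singularity test on a small curvature array might read as follows; the names are illustrative and, for simplicity, border points are skipped:

```python
def singularities(c, p_c1):
    """Return points (k, l) whose curvature is >= that of all 8 neighbours
    and >= the threshold p_c1, per the singularity definition above."""
    rows, cols = len(c), len(c[0])
    out = []
    for l in range(1, rows - 1):
        for k in range(1, cols - 1):
            v = c[l][k]
            if v < p_c1:
                continue
            # the 8 neighbours are the points P +/- Qi of the definition
            neigh = [c[l + dl][k + dk]
                     for dl in (-1, 0, 1) for dk in (-1, 0, 1)
                     if (dl, dk) != (0, 0)]
            if all(v >= u for u in neigh):
                out.append((k, l))
    return out
```

Note that the comparison is non-strict (≥), as in the definition, so a plateau of equal maximal curvature can yield several adjacent singularities; the representative-selection step described later resolves such clusters.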

There will be some singularities appearing in the region near a core or delta. But usually there will also be some singularities appearing in the region near a scar, fold or noise. For the purpose of recognizing a core or a delta among singularities, it is necessary to analyze the structure around each singularity.

Now referring to FIGS. 8, the ridge flow is different around a core, a delta and an ordinary point. The distinction can be described easily by using the concept of a ridge trend. A ridge trend of a point is defined as the direction of ridges that are near the point and run off from the point. There are three ridge trends for a delta, two trends for an 'a' core or an ordinary point, one trend for an 'n' or 'u' core, and no ridge trend for an 'o' core. The number of ridge trends of a point is called the forkedness of the point. To find the ridge trends of a point Pc, a series of digital circles {Os} with Pc as the center and various radii s are used. For every point Ps on a circle Os, the difference between the local direction at Ps and the direction of the vector PcPs is calculated and stored in an array dd[ ]. The difference when PcPs extends substantially in the direction of a ridge trend will be very small, as shown by the curve minima in FIGS. 9, so the ridge trends can be found by locating all minima in dd[ ]. FIG. 9a shows the pattern of the array dd[ ] around an 'o' core, FIG. 9b around an 'n' core, FIG. 9c around an 'a' core, and FIG. 9d around a delta. FIG. 8e represents a noise point or a scar, and FIG. 9e shows the pattern of dd[ ] around the point in FIG. 8e.

A Fourier transform and its inverse are used on dd[ ] to reduce the effect of noise. The forkedness can be decided from the power spectrum, while the trends can be found on the filtered dd[ ].

The method for determining the forkedness of a point will be described with reference to the power spectrum of a Fourier transform: ω[0], ..., ω[n], where n is the order of the Fourier transform. Firstly, if ω[0] is the maximum of all ω[j] and ω[0] is not smaller than a predetermined threshold p_ω0, then the point is an 'o' core. Secondly, if ω[1] is the maximum of all ω[j] except ω[0], and ω[1]≥p_ω1, then it is an 'n' or 'u' core. Else if ω[2] is the maximum of all ω[j] except ω[0] and ω[2]>p_ω2, then it is an 'a' core. Else if ω[3] is the maximum except ω[0] and ω[3]>p_ω3, then it is a delta point. Here p_ω0, p_ω1, p_ω2 and p_ω3 are all predetermined parameters.

In the case of Fig. 8e, ω[2] may be the maximum, but the point is not an 'a' core. Whether it is an 'a' core depends on its trends: if the difference between the two trends is very large, for example greater than 2π/3, then it is not an 'a' core and is ignored.

However, it is notable that both the trend directions and the forkedness at a point depend on the radius of the analysis circle. The algorithm for analyzing the trends of a point P is as follows:

<1> Let s = Smin,

where Smin is the minimum radius of the digital circles for trend analysis and Smax is the maximum.

<2> Calculate dd[ ].

if (s > Smax) then reject and return;

else {for every point Psi on digital circle Os do:

{dd[i] = |d(Psi) - dv(Psi-P)| % π;

if (dd[i] > π/2) then dd[i] = π-dd[i];}

nz = #{X | (X on Os) & (c(X) > p_c2)};

if (nz > p_n2) then {s = s+1; goto <2>;}

else goto <3>;}

i.e. if the number of points Psi on Os with c(Psi)>p_c2 is more than p_n2, then increase the radius s and recalculate dd[ ].

<3> Derive Fourier transform on array dd[ ].

a[j] = ∑(dd[i]·cos(2·π·i·j/m), i, {0,...,m-1})/m;

b[j] = ∑(dd[i]·sin(2·π·i·j/m), i, {0,...,m-1})/m;

for j = 0, ..., n;

where m=#Os.

The power spectrum is

ω[j] = (a[j])² + (b[j])²; for j=0, ..., n;

where n is the order of the Fourier transform.

if (ω[0] > p_ω0) {P is an 'o' core, return;}

else {let k satisfy

ω[k] = max{ω[1], ω[2], ω[3]};

if (ω[k] < p_ωk) then goto <2>;

else if k is equal to 1 then

{calculate the trend direction,

td = arctan(b[1]/a[1]) (-π < td < π);

if (td > 0) then P is an 'n' core;

else P is a 'u' core;

return;

}

else make the reverse Fourier transform:

{dd[i] = ∑(a[j]·cos(2·π·i·j/m) - b[j]·sin(2·π·i·j/m), j, {0,1,...,n});

}

}

where p_ωj (j=0,...,3) are predetermined parameters.

<4> Find each minimum value dd[ij] in dd[ ], i.e. each dd[ij] that satisfies

dd[ij]<dd[(ij-1)%m] & dd[ij]<dd[(ij+1)%m] & dd[ij]<p_dd, for j=1,...,l.

if (l is not equal to k) then goto <2>;

else {the forkedness at P is k;

the trends at P are dv(Psij-P), j=1,...,k.

}

return;

where p_dd is a predetermined threshold for finding minimum points. dd[ (ij-1)%m] and dd[(ij+1)%m] are the values of dd[ ] at neighboring points of Psij on the circle.

In the equations presented above, n is a constant that is much smaller than m; the parameters Smin and Smax are predetermined; p_c2, p_n2 and p_ωj (j=0,...,3) are thresholds. More specifically, p_n2 is selected to eliminate points whose curvature values are so large that the associated direction value is unreliable. The threshold p_c2 is selected to obtain points with a suitably high curvature value.
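The low-order Fourier analysis of step <3> can be sketched as follows. The helper computes the coefficients a[j], b[j] and the power spectrum ω[j] exactly as defined above, and returns the dominant harmonic among ω[1..3], from which the forkedness decision is made; the function name and the choice of four coefficients are illustrative:

```python
import math

def forkedness(dd, n=4):
    """Low-order DFT of the direction-difference samples dd[0..m-1];
    returns the power spectrum w[0..n-1] and the dominant harmonic in 1..3."""
    m = len(dd)
    a = [sum(dd[i] * math.cos(2 * math.pi * i * j / m) for i in range(m)) / m
         for j in range(n)]
    b = [sum(dd[i] * math.sin(2 * math.pi * i * j / m) for i in range(m)) / m
         for j in range(n)]
    w = [a[j] ** 2 + b[j] ** 2 for j in range(n)]  # power spectrum
    k = max((1, 2, 3), key=lambda j: w[j])         # dominant harmonic except w[0]
    return w, k
```

A dd[ ] profile with three minima around the circle (as around a delta) concentrates its energy in the third harmonic, so k comes out as 3; a single-minimum profile ('n' or 'u' core) makes k equal to 1.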

According to preferred embodiments of the invention, Smin may be equal to 5 and Smax may be equal to 20.

The procedure for finding all cores and deltas of a fingerprint except the 'a' core is as follows:

<1> All singularities, i.e. maxima of the curvature array in the clear region, are located;

<2> Every singularity is analyzed for finding trends by the above algorithm.

<3> Select one point in every set composed of the same kind of cores or deltas located together as the representative. The criterion for selecting an 'o' core is ω[0], for an 'n' or 'u' core it is ω[1], and for a delta it is ω[3].

In section 3.7, a method will be provided for finding the 'a' core.

3.5 Line Tracing

A digital curve in a fingerprint is called a contour line if it is in keeping with the local ridge direction at every point on it. Usually, a contour line can be obtained by starting from a point on the fingerprint with a trend of the point as the initial direction and extending, or tracing, progressively according to the direction array. In particular, if the initial point is a core or delta of the fingerprint, then the extended trace is called a shape line of the fingerprint.

In the method provided below for accurately tracing contour lines, every tracing step moves by just one point; meanwhile the errors of each coordinate and direction are accumulated for correcting the tracing; furthermore the tracing will stop at the right place. Where k0 and l0 are the column and row with minimum curvature in the clear region produced as described previously, and cl, ck and cd are the current row, column and direction values, the algorithm for tracing a contour line starting from point (k0,l0) with initial direction d0 is as follows:

dl, dk and dd are the accumulated differences of coordinates and direction in tracing, respectively; tll[ ], tlk[ ] and tld[ ] are the arrays of coordinates and direction of the reference points in tracing; di is a parameter; ac is the average curvature in the segment of a line.

<1> Initializing the variables,

dl=dk=dd=0; cl=l0; ck=k0; cd=d0;

i=1; ac=0.

tll[0]=l0; tlk[0]=k0; tld[0]=d0;

goto<2>.

<2> Stepping to next point,

while (cd-tld[i-1] < -π) cd = cd+π;

while (cd-tld[i-1] > π) cd = cd-π;

if (|cd-d0| > p_d3) then goto <3>,

where p_d3 is the limit of the difference between the current direction cd and the initial direction d0.

ac = ac+c[cl][ck];

if (i ≥ p_l) then

{ac=ac-c[tll[i-di]][tlk[i-di]];

if (ac > p_ac·p_l) then goto <3>;}

where p_ac is a threshold.

dd = cd-tld[i-1];

if (|dd| > p_d2) then goto <3>;

where p_d2 is the limit of the accumulated difference of direction.

if (dd > p_d1) then

{dd = dd-p_d1;

cd = cd-dd;

}

else if (dd < -p_d1) then

{dd = dd+p_d1;

cd = cd-dd;

}

else dd = 0;

where p_d1 is the maximum value for correcting the direction. The increments of cl and ck depend on sin(cd) and cos(cd):

if (|sin(cd)| < |cos(cd)|) then

{cl = cl+sign(cos(cd)),

dk = dk+tan(cd);

if (|dk| > 1) then

{ck = ck+sign(dk),

dk = dk-sign(dk),

}

}

else

{ck = ck+sign(sin(cd)),

dl = dl+ctan(cd);

if (|dl| > 1) then

{cl = cl+sign(dl),

dl = dl-sign(dl),

}

} if point (ck, cl) is out of the clear region, then goto <3>;

else, saving the current coordinates and direction,

{tll[i]=cl, tlk[i]=ck, tld[i]=cd, i=i+1; cd = d[cl][ck];

goto <2>;

}

<3> Determining the length of the traced line,

i = i-1,

if (c[tll[i]] [tlk[i]] < p_ac) then goto <4>,

else {i = i-1;

if (i > 0) repeat <3>,

else goto <4>;

}

<4> The traced line is {tll[j], tlk[j], tld[j]}, j=0,1,...,i;

return,

where di, p_ac, p_d1, p_d2 and p_d3 are all predetermined parameters.

A contour tracing will be stopped if one or more of following conditions is true:

(1) Current point is out of the clear region.

(2) Average curvature of last several points is too high.

(3) The rotated angle from the initial direction is too large.

A similar algorithm is used to trace any normal line of a fingerprint; a normal line is defined as a digital curve on the image that is perpendicular to the local ridge

direction everywhere. The algorithm can be obtained from the one above by replacing the assignment cd=d[cl][ck] in step <2> with cd=d[cl][ck]+π/2 or cd=d[cl][ck]-π/2. The normal lines are used in a novel method, described in section 3.6 below, for macroscopically locating the center of a fingerprint.
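One advance of the tracing loop in step <2> — choosing a row step or a column step from the current direction cd and carrying the fractional coordinate error in dl or dk — might be sketched as below. The names are illustrative, and the handling of sign at exactly zero is an assumption not fixed by the text:

```python
import math

def trace_step(cl, ck, dl, dk, cd):
    """One move of the contour-tracing loop: advance exactly one point along
    direction cd, accumulating the fractional error in dl/dk as in step <2>."""
    if abs(math.sin(cd)) < abs(math.cos(cd)):
        cl += 1 if math.cos(cd) > 0 else -1   # mostly-vertical step
        dk += math.tan(cd)                    # accumulate the column error
        if abs(dk) > 1:
            ck += 1 if dk > 0 else -1
            dk -= 1 if dk > 0 else -1
    else:
        ck += 1 if math.sin(cd) > 0 else -1   # mostly-horizontal step
        dl += 1 / math.tan(cd)                # accumulate the row error (ctan)
        if abs(dl) > 1:
            cl += 1 if dl > 0 else -1
            dl -= 1 if dl > 0 else -1
    return cl, ck, dl, dk
```

Because the dominant axis is chosen from |sin(cd)| versus |cos(cd)|, the accumulated ratio (tan or ctan) always has magnitude at most about 1, so each step moves by exactly one point, as the text requires.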

Line tracing is a basic algorithm in this fingerprint processing system. It is important in locating the coordinate axes, extracting the shape features and detecting minutiae, etc., as will be described in the following sections. The line tracing operation described in this section is used to trace shape lines from the center of a delta.

3.6 Macroscopic Method For Locating Coordinate Axes

The type or shape of a fingerprint, especially if it is characterized by a small whorl or loop, may be ambiguous due to distortion and noise produced when the impression is taken. Some impressions of a small whorl look like a loop or tent arch, while some impressions of a small loop look like a plain arch. Therefore, there must be some consistency in the rules for deciding the coordinate axes, i.e. the center and central orientation, of various types of fingerprints.

In section 3.4, there was described a method for

locating cores and deltas, except the 'a' core, of

fingerprints by analyzing all singularities of the

fingerprint. However, to find an 'a' core and to determine the center and central orientation of a plain arch that are consistent with other similar fingerprints, a method which involves analyzing the macroscopic structure of fingerprint ridges is needed.

Referring to FIGS. 10 again, many fingerprint types or shapes, for example whorls, double loops, loops, plain arches, tent arches etc., are shown. In fact, the principal distinctions among them are always at the central parts of the fingerprints, while the peripheries of fingerprints are all very similar. Generally, at the central area of any fingerprint, the ridges at the upper part will form a vault, the ridges at the left and right sides will run off from the central part, and the ridges at the lower part will always be plain, as shown in FIG. 11.

Referring to FIG. 12, any core of a fingerprint, except a 'u' core, is always at the most curved region of the ridges below the vault formed by the upper ridges. Generally, for any fingerprint except a nodding loop, the normal lines will, starting from the upper part and going down, all concentrate at a central region containing the most curved ridges of the fingerprint, i.e. where there is an 'o' core or an 'n' core for a whorl or loop, or an 'a' core for a plain arch. Some of the normal lines may end at the central region, while others may curve noticeably at the central region, so the core can be located by analyzing the singularities near each end point and the most curved point on the normal lines. This method is very important for a plain arch, because it is usually difficult to determine, or locate, the center of a plain arch. The algorithm for locating macroscopically any core except a 'u' core of a fingerprint is:

Let l1 be the upper row border, l2 the lower row border, k1 the left column border and k2 the right column border.

<1> Initializing, let the initial start point (k0,l0) for tracing be

l0 = l1; k0 = (k1+k2)/2;

<2> Selecting current start point (k0, l0),

l0 = l0+1;

if (c[l0][k0] > p_c3) then goto <2>;

dk = p_k·sin(d[l0][k0]);

if (|dk| > p_d4) then {k0 = k0-dk; goto <2>;}

else, the start point is (k0, l0);

d0 = d[l0][k0];

goto <3>.

where p_c3 and p_k are parameters.

<3> Finding a vault line.

A vault line can be considered as a combination of two contour lines, i.e. a right contour line and a left contour line that both start from the middle of the print. So firstly the two contour lines should be traced. Starting from (l0,k0), the two contour lines can be obtained by tracing with directions d0 and d0+π respectively. These two contour lines are then combined into a vault line.

If the vault is not perfect, i.e. if its length is too short or its chord is too slanting, or the curvature of the vault is too high, then goto <2>.

else let {vl[j], vk[j], vd[j], j=0,...,lv} be the vault line, where vl[j], vk[j] and vd[j] are the y coordinate, the x coordinate and the local direction, respectively, at the jth point on the line, and lv is the length of the vault line. Let i=0; goto <4>;

<4> Trace a normal line by the previous algorithm with starting point (k0,l0) and direction d0, where

k0 = vk[i]; l0 = vl[i]; d0 = vd[i]+π/2;

if (i ≤ lv) then

{i = i+p_g;

goto <4>,

}

else goto <5>;

where p_d4 is a threshold limiting the local direction of the starting point, and p_g is the gap between the starting points of two adjacent normal lines.

<5> Determine the areas of concentration of the normal lines (see FIG. 12), then analyze the singularities in the area to find the 'o', 'n' and 'a' core or others by the forkedness with the algorithm described in 3.4.

In the above algorithm, p_c3 is a threshold and p_k is a constant. In the trend analysis of the above points, if the forkedness is 0 or 1, then the point is an 'o' or 'n' core, while if the forkedness is 2, then it is an 'a' core.

By singularity analysis, there may be more than one 'a' core in the central region of the fingerprint. The criterion for selecting the most representative one among them is the angle difference dd between the two trends (d1 and d2) of a singularity, i.e.

dd = min{|d2-d1|, 2·π-|d2-d1|}.

For example, if d1=π/4 and d2=3π/4, then dd=π/2; if d1=2π and d2=3π/4, then dd=3π/4.
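The angle-difference criterion can be computed directly; a minimal sketch follows, where the modulo guard is an added robustness measure, not part of the original formula:

```python
import math

def trend_diff(d1, d2):
    """dd = min(|d2-d1|, 2*pi - |d2-d1|), the angular gap between two trends."""
    a = abs(d2 - d1) % (2 * math.pi)
    return min(a, 2 * math.pi - a)
```

The two worked examples from the text serve as a check: (π/4, 3π/4) gives π/2, and (2π, 3π/4) gives 3π/4.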

The 'a' core which has the smallest angle difference will be selected as the most representative one.

Referring to FIG. 13, there are two trends of an 'a' core, one towards the left and the other towards the right. The main trend of an 'a' core is defined as the trend at the side of the core where the gap between two adjacent contour lines is wider than at the other side. By this rule, the central orientation of a plain arch is consistent with that of loops.

After the trend analysis of each singularity and the macroscopic location of the 'a' core, the center and central orientation of a fingerprint can be decided sequentially as follows:

<1> If there is an 'o' core in the central region, then the pattern must be a whorl, the position of the center is the center of the core, the central orientation is π/2, i.e. pointing downward, and 0° is horizontal to the right.

<2> If, in the central region, there is an 'n' core and a 'u' core, the pattern is a whorl; if there is an 'n' core, no 'u' core and more than one delta, the pattern is a whorl; and if there is an 'n' core, no 'u' core and not more than one delta, the pattern is a loop. The position of the center is the same as the center of the core, and the central orientation is the trend of the core.

<3> Else if there is a 'u' core, then the pattern is a nodding loop, the position of the center is the same as the center of the core, and the central orientation is the trend of the core.

<4> Else if there is an 'a' core in the central region, then the pattern is a plain arch, the position of the center is the same as the center of the core, and the central orientation is the main trend of the core.

The center and central orientation of a whorl, a loop and an arch, decided by the macroscopic method, are shown in FIGS. 19, 20 and 21, respectively. These figures depict tracing lines which have been generated to be perpendicular to the local ridge directions in the vicinity of the center of the fingerprint pattern.

3.7 Shape Features And Classification

All of the shape lines of a fingerprint can be obtained by the above algorithm starting from the center of a delta of the fingerprint with each trend as the initial direction.

Line tracing is performed as described in Section 3.5, above. Various shape lines of fingerprints, much as shown in FIGS.10, can serve to describe the shapes of fingerprints accurately. According to the structural relations of the shape lines, the fingerprints can be classified into 18 types each with a respective topological structure. A finer classification may be based on the shape features defined below.

FIG. 14 illustrates a technique for extracting the shape features of a left loop, where C is the center, Po is the delta, and sl1, sl2 and sl3 are shape lines starting from Po.

P1...P7 are points selected on sl2 or sl3. The algorithm for extracting the shape features of a loop is as follows:

<1> Determine center C=(kc,lc) by the algorithm in Section 3.6;

<2> Determine the delta centered at Po=(k0,l0) by the algorithm in Section 3.4;

<3> Trace three shape lines sl1, sl2 and sl3 starting from Po with the three trends of the delta as initial directions respectively.

<4> Select seven points Pi=(ki,li), (i=1,...,7) on sl2 and sl3, such that

dv(Pi-C) = d0+i·π/4, where d0=dv(Po-C). Because P6 is always very far from C or beyond the border of the image, no feature will be defined by referring to P6.

<5> Calculate the distances between C and Pi, i.e.

|Pi-C| = ((ki-kc)² + (li-lc)²)^(1/2), (i=1,...,5,7).

<6> Ridge counting between C and Po ,

Let {(x0,y0), ..., (xn,yn)} be the straight line from C to Po, where (x0,y0)=(kc,lc) and (xn,yn)=(k0,l0); and let

g[i] = f[yi][xi], (i=0,...,n);

then the ridge count is defined as

rc(C,Po) = #{g[i] | g[i-1]>g[i]<g[i+1], (i=1,...,n-1)}.

<7> A total of 18 shape features are defined: 7 distances |Pi-C|, one ridge count rc(C,Po), and 10 direction values referring to d0, including the central orientation, the 3 trends of Po and 6 local directions d(Pi), (i=1,...,5,7).
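Under the usual assumption that dark ridges appear as gray-level minima along the sampled line, the ridge count of step <6> is simply the number of strict local minima in the samples g[ ]; a sketch:

```python
def ridge_count(g):
    """Count strict local gray-level minima along the line from C to Po:
    rc = #{i : g[i-1] > g[i] < g[i+1]}, as in the definition above."""
    return sum(1 for i in range(1, len(g) - 1)
               if g[i - 1] > g[i] < g[i + 1])
```

Note that, by the strict inequalities, a flat-bottomed valley (two equal minimum samples in a row) contributes nothing; whether such plateaus should count is not addressed by the definition.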

The 18 shape features of a right loop are extracted in a manner similar to the left loop. A whorl or double loop can be considered as composed of two loops, i.e. one left loop formed by the left delta and center as well as one right loop formed by the right delta and center, so it has both 18 left loop shape features and 18 right loop shape features, i.e. 36 shape features in all. For a tent arch, the features referring to point P7 are not extracted because it may appear at either left or right. However, for the purpose of consistency between whorls and other shapes, a total of 36 shape features are assumed for any fingerprint. If some of the 36 shape features cannot be obtained due to noise, imperfections in, or the shape of, the fingerprint, those features are each assigned a value of -1. There is no meaningful shape feature for a plain arch; in other words, all shape features of a plain arch are equal to -1.

According to the structural relations of position and surroundings etc. among shape lines, fingerprints are classified into 18 classes, each with a respectively different topological structure, as shown in FIGS. 10, which show 11 whorls, 4 loops, one accidental, one tent arch and one plain arch. Every class of whorl, loop and tent arch can be further classified according to the shape features.

3.8 Global Features And Global Difference

The shape features for describing the pattern of a fingerprint are all based on both the center and the delta, so they may be affected by imperfections of the fingerprint or by noise which distorts the center, delta or shape lines. In particular, there is no shape feature defined for a plain arch, so for the purpose of practicality and consistency of the fingerprint system, features for describing the pattern of a partial, noisy or plain arch fingerprint should be considered.

One of the most important parts in this invention is a method for defining and extracting the global features of various fingerprints to represent their pattern naturally and consistently by referring to the local ridge directions.

Generally, the global features of a fingerprint provide a basic method for representing the ridge direction array of the fingerprint. These features must be obtainable for any kind of fingerprint, and be effective in pattern matching of fingerprints.

The simplest method for defining global features is to select some points on the direction array and take the local ridge direction at each point as a feature. If the number of points is large enough, then the accuracy of the representation will be fine enough. In particular, as shown in FIGS. 15, the points can be selected to form a circular, or polar, array or a rectangular array.

In a first method, the points are selected on several circles with a common center. Referring to FIG. 15A, C=(k0,l0) is the center of a fingerprint and d0 is the central orientation, or direction. There are n circles Oi (i=0,...,n-1) with the common center C and different radii ri (i=0,...,n-1). There are mi selected points Pij=(kij,lij) (j=0,...,mi-1) on Oi, segmenting the circle equally (i=0,...,n-1). The global features gf of a fingerprint are defined as a set:

gf={gff((kij,lij)) | j=0,...,mi-1; i=0,...,n-1};

where

kij=int(k0+ri·cos(d0+j·2·π/mi)+0.5);

lij=int(l0+ri·sin(d0+j·2·π/mi)+0.5);

gff(P) = 255, if c(P) > p_c4;

gff(P) = int(d(P)·p_π/π+0.5) % p_π, elsewhere.

i.e., gff(P) is equal to 255 when the ridge directions around P are not clear or the curvature c(P) is greater than a predetermined threshold p_c4; elsewhere gff(P) represents the direction d(P) in one byte with a value between 0 and 251; the parameter p_π is selected to transform the range of the angular value from [0,π] to [0,p_π], so that it can be stored in one byte while reserving enough accuracy.
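A sketch of sampling gff on the concentric circles follows. The names are illustrative, and p_π=251 is assumed here purely for concreteness (the text only requires a one-byte range):

```python
import math

P_PI = 251  # assumed quantisation range p_pi for [0, pi); one byte per sample

def sample_global_features(d, c, k0, l0, d0, radii, m, p_c4):
    """Sample gf[i][j] on circles about (k0, l0), as defined above:
    255 marks unclear points, otherwise the quantised local direction."""
    gf = []
    for r in radii:
        row = []
        for j in range(m):
            ang = d0 + j * 2 * math.pi / m
            k = int(k0 + r * math.cos(ang) + 0.5)   # kij
            l = int(l0 + r * math.sin(ang) + 0.5)   # lij
            if c[l][k] > p_c4:
                row.append(255)                     # background / high curvature
            else:
                row.append(int(d[l][k] * P_PI / math.pi + 0.5) % P_PI)
        gf.append(row)
    return gf
```

With the embodiment's n=9 circles of 64 points each, the returned array occupies exactly 576 bytes, matching the size stated in the text.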

In an embodiment, the parameter values are n=9 and mi=64 (i=0,...,n-1), so a total of 9·64=576 points are selected, and the global features of a fingerprint are composed of 576 bytes. If the number of points selected on each circle Oi is the same (mi=m), then gf can simply be stored in an array of bytes:

gf = {gf[i][j] (=gff(Pij)) | j=0,...,m-1; i=0,...,n-1}.
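The circular sampling and one-byte quantization above can be sketched as follows. The array names, the border handling, and the values p_pi=252 and p_c4=0.8 are illustrative assumptions, not the patent's fixed choices:

```python
import math

def global_features(d, c, center, d0, radii, m, p_pi=252, p_c4=0.8):
    """Sample local ridge directions d (radians in [0, pi)) on concentric
    circles around the fingerprint center, storing each as one byte.
    d and c are 2-D direction and curvature arrays; 255 marks an
    undefined (out-of-image or high-curvature) point."""
    k0, l0 = center
    gf = []
    for r in radii:                       # one circle per radius r_i
        row = []
        for j in range(m):                # m points spaced equally on the circle
            a = d0 + j * 2 * math.pi / m
            k = int(k0 + r * math.cos(a) + 0.5)
            l = int(l0 + r * math.sin(a) + 0.5)
            if not (0 <= l < len(d) and 0 <= k < len(d[0])) or c[l][k] > p_c4:
                row.append(255)           # unclear or high-curvature point
            else:
                # quantize direction [0, pi) into one byte [0, p_pi)
                row.append(int(d[l][k] * p_pi / math.pi + 0.5) % p_pi)
        gf.append(row)
    return gf
```

With the embodiment's n=9 circles and m=64 points per circle, the result is the 576-byte feature array described above.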

For example, the global features of an arch, a loop and a whorl are shown in FIGS. 22, 23 and 24, respectively, where the center of each pattern is at the common center of the concentric circles and the central orientation of each pattern is represented by a short line extending from the center. There are 9 circles in each image, and each circle is composed of 32 points (for clearer display than with 64 points). For each selected point, if it is not in the background and its curvature is not high, the local direction is represented by a line centered on the point; the remaining selected points are each represented by a dot, as is particularly apparent at the bottom and along the lower portions of the left-hand and right-hand edges of FIG. 22.

A difference between the shapes of two fingerprints will always be reflected in their direction arrays, and therefore also in the global features that represent those arrays. For this purpose an important measurement, called the global difference between two sets of global features, is necessary.

In the case of Equation (9), the global difference gd1 between two fingerprints with global features gf1 and gf2 is defined as:

gd1(gf1,gf2) = min{∑(f_dg(d1-gf1[i][j], d2-gf2[i][(j+r)%m]), (i,j), M1(r))/#M1(r) | r=0,...,m-1; #M1(r)>0};

where d1 and d2 are the central orientations of the two fingerprints respectively, and the set M1(r) is

M1(r) = {(i,j) | gf1[i][j]<p_π & gf2[i][(j+r)%m]<p_π};

and the function f_dg() is

f_dg(x,y) = f_i(min(|x-y|%p_π, p_π-(|x-y|%p_π)));

where f_i(z) is an increasing function of z. In the embodiment, f_i(z)=z². So the global difference between two fingerprints is calculated by matching their global features under the various rotations r to find the minimum difference.
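A minimal sketch of gd1, assuming byte-quantized features (255 = undefined), f_i(z)=z² as in the embodiment, and an illustrative p_pi=252; the quantization of d1 and d2 to the same units is also an assumption:

```python
import math

def f_dg(x, y, p_pi=252):
    # cyclic angular difference in quantized units, squared (f_i(z) = z*z)
    z = abs(x - y) % p_pi
    return min(z, p_pi - z) ** 2

def gd1(gf1, gf2, d1, d2, p_pi=252):
    """Global difference: try every cyclic shift r of the points on the
    circles and keep the minimum mean difference over the matched set M1(r)."""
    n, m = len(gf1), len(gf1[0])
    q1 = int(d1 * p_pi / math.pi + 0.5) % p_pi   # quantized central orientations
    q2 = int(d2 * p_pi / math.pi + 0.5) % p_pi
    best = None
    for r in range(m):
        total, count = 0, 0
        for i in range(n):
            for j in range(m):
                a, b = gf1[i][j], gf2[i][(j + r) % m]
                if a < p_pi and b < p_pi:        # (i, j) in M1(r)
                    total += f_dg(q1 - a, q2 - b)
                    count += 1
        if count and (best is None or total / count < best):
            best = total / count
    return best
```

Because the minimum is taken over all shifts r, two identical patterns rotated relative to one another still yield a global difference of zero.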

In the second method, the points are selected on a grid with n rows and m columns, as in FIG. 15B, so the global features can be stored in an array gf[][] such that:

gf[i][j] = d[y00+i·dy][x00+j·dx];  i=0,...,n-1; j=0,...,m-1;

where x00 and y00 are the coordinates of the upper left corner point of the array, and dy and dx are the row and column increments respectively.

The global difference gd2 between two fingerprints with global features gf1 and gf2 is defined analogously as:

gd2(gf1,gf2) = min{∑(f_dg(d1-gf1[i][j], d2-gf2[(i+l)%n][(j+k)%m]), (i,j), M2(l,k))/#M2(l,k) | l=0,...,n-1; k=0,...,m-1};

where M2(l,k) is the set of index pairs (i,j) for which both gf1[i][j] and the correspondingly offset element of gf2 define a direction.

The global difference can be used for finer classification of fingerprints in a database, or for selecting similar fingerprints from a database to reduce the difficulty of minutia matching during a search procedure.

3.9 Detecting Minutia From Gray Level Image

Minutiae are very important traditional features of a fingerprint, and are used in the final verification of the identity of two fingerprints. Usually minutiae are described with respect to the pattern of fingerprint ridges. There are many types of minutiae on a fingerprint, for example, as shown in FIGS. 16: endings (a), bifurcations (b), islands (c), eyes (d), bridges (e), etc. In brief, minutiae are singularities of ridges.

However, ridges always coexist with valleys on a fingerprint, and each feature or minutia of the ridges always corresponds to a change in the valleys, so minutiae can be described in terms of valleys, too. In general, an ending of a ridge is a bifurcation of valleys, while a bifurcation of ridges is an ending of a valley; an island of a ridge is an eye of a valley, and an eye of a ridge is an island of a valley. Referring to FIGS. 17, exceptions may appear at cores and deltas: the description, in terms of ridges, of a minutia that is exactly at a core or delta differs from its description in terms of valleys. The descriptions of minutiae should therefore be consistent, being all in terms of valleys or all in terms of ridges.

For automatic detection of minutiae, the novel method provided here is based upon tracing the valleys, rather than the ordinary method based upon binarizing, thinning and smoothing the ridges. Generally, in a fingerprint image the quality of the valleys is much better than that of the ridges, primarily for the following reasons: first, there are no sweat glands in valleys; second, the widths of valleys are more even than those of ridges; and third, the gray levels in valleys are more even than in ridges. Although some fingerprints have incipient ridges in their valleys that may affect valley tracing, all ridges of every fingerprint have sweat glands that may affect ridge tracing. So in general, the result of valley tracing should be much better than that of ridge tracing.

The algorithm for tracing a valley from an initial point (k0,l0) with initial direction d0 is similar to tracing a line, except that it uses a key technique that keeps the step points in the valley.

Let f[][] be the image array. Its element f[l][k] equals the gray scale value of the point (k,l), and is set to -1 after the point has been traced. ag is the sum of the gray scale values of the last p_l points in a tracing line. The array tlg[] is used to store the gray scale values of traced points. The definitions of the other variables are the same as for the line tracing algorithm of Section 3.5.

<1> Initialize the variables:

dl=dk=dd=0;
cl=l0; ck=k0; cd=d0;
i=1; ac=0; ag=0;
tll[0]=l0; tlk[0]=k0; tld[0]=d0;
tlg[0]=f[l0][k0];
goto <2>.

<2> Step to the next point.

Accumulate the curvatures of the points in the valley:

ac=ac+c[cl][ck];
if (i>p_l) then
{ac=ac-c[tll[i-p_l]][tlk[i-p_l]];
if (ac > p_ac·p_l) then goto <3>;
}

Accumulate the gray scale values of the points in the valley:

ag=ag+f[cl][ck];
if (i>p_l) then
{ag=ag-tlg[i-p_l];
if (ag < p_ag·p_l) then goto <3>;
}

if (cd-tld[i-1] ≤ -π) then cd=cd+π;
if (cd-tld[i-1] ≥ π) then cd=cd-π;
if (|cd-d0| > p_d3) then goto <3>;

where p_d3 limits the deviation of the current direction cd from the initial direction d0 during tracing.

where p_ac in the curvature test above is a predetermined threshold.

dd=cd-tld[i-1];
if (|dd| > p_d2) then goto <3>;

where p_d2 is the limit on the accumulated difference of direction.

if (dd > p_d1) then
{dd=dd-p_d1;
cd=cd-dd;
}
else if (dd < -p_d1) then
{dd=dd+p_d1;
cd=cd-dd;
}
else dd=0;

where p_d1 is the maximum value for correcting the direction. The increments of cl and ck depend on sin(cd) and cos(cd):

if (|sin(cd)| < |cos(cd)|) then
{cl=cl+sign(cos(cd));
dk=dk+tan(cd);
if (|dk| ≥ 1) then
{ck=ck+sign(dk);
dk=dk-sign(dk);
}
}

if any point (x,y) of {(ck,cl), (kl,ll), (kr,lr)} is out of the clear region or f[y][x]<0, then goto <3>;
else save the current coordinate, direction and gray level values:

{tll[i]=cl; tlk[i]=ck; tld[i]=cd; tlg[i]=f[cl][ck]; i=i+1; f[cl][ck]=-1;
cd=d[cl][ck];
dd=dd+(f[ll][kl]-f[lr][kr])·p_ga;
goto <2>;
}

where (kl,ll) and (kr,lr), shown in FIGS. 18, are called the left point and right point of the current point (ck,cl) respectively. Both are 4-neighboring points of the current point and 8-neighboring points of the previous point (tlk[i-1],tll[i-1]). p_ga is a predetermined parameter for modifying the direction according to the difference of gray scale values.

<3> Determine the length of the traced valley.

i=i-1;
if ((c[tll[i]][tlk[i]]<p_ac) & (tlg[i]>p_ag)) then goto <4>;
else {f[tll[i]][tlk[i]]=tlg[i];
i=i-1;
if (i>0) then repeat <3>;
else goto <4>.
}

<4> The traced valley is {(tlk[j], tll[j], tld[j]) | j=0,1,...,i};
return.

Here p_l is a constant, and p_ac, p_d1 and p_d2 are all thresholds.

Referring to FIGS. 18, both the left point (kl,ll) and the right point (kr,lr) are 4-neighboring to the current point (ck,cl) and 8-neighboring to the previous point (tlk[i-1],tll[i-1]). (ck,cl) may be replaced by its 4-neighboring point (kl,ll) or (kr,lr) according to their gray levels. This algorithm is similar to that for line tracing, with gray level as an additional factor.

In detail, the tracing first steps from the prior point to the current point tentatively; then the gray scales of the two 4-neighboring points (kl,ll) and (kr,lr) of the current point are considered. A point is selected as the valley point if its gray scale value is higher than or equal to those of the other points in the neighborhood.
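The neighborhood choice can be sketched as a hypothetical helper (not the patent's literal code): after the tentative step, keep whichever of the current, left and right points is brightest, since valleys carry the high gray values in this representation.

```python
def pick_valley_point(f, current, left, right):
    """Return whichever of the three candidate (k, l) points has the
    highest gray level in image array f; ties favor the tentative
    current point, since max() keeps the first maximum."""
    return max((current, left, right), key=lambda p: f[p[1]][p[0]])
```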

A valley tracing is stopped if one or more of the following conditions is true:

(1) The current point is out of the clear region.

(2) The average curvature ac of the last p_l points in the tracing is very high, i.e. greater than p_ac.

(3) Any previously traced valley is touched.

(4) The average gray level ag of the last p_l points in the tracing is very low, i.e. less than p_ag.
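Stopping conditions (2) and (4) both rely on the accumulate-and-subtract window maintained in step <2>: add the newest value, subtract the one p_l steps back. An equivalent sketch of that bookkeeping:

```python
from collections import deque

class RunningSum:
    """Window sum over the last p_l samples, as maintained for ac and ag."""
    def __init__(self, p_l):
        self.p_l, self.win, self.total = p_l, deque(), 0.0

    def push(self, v):
        # add the newest sample; once the window is full, drop the
        # sample p_l steps back, exactly like ac=ac+...; ac=ac-... above
        self.win.append(v)
        self.total += v
        if len(self.win) > self.p_l:
            self.total -= self.win.popleft()
        return self.total
```

The trace then stops when the curvature window sum exceeds p_ac·p_l (window average above p_ac) or the gray window sum falls below p_ag·p_l (window average below p_ag).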

The algorithm for detecting minutiae from a gray level image by valley tracing is as follows:

<1> Let gp=gap2;

<2> A start point P for valley tracing must satisfy each of the following conditions:

(1) P is a maximum point in a 3×3 neighborhood of the gray scale image F, and f(P)>p_f. A maximum point of a gray scale image is one whose gray scale value is not less than that of each of its 8 neighboring points.
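Condition (1) — a 3×3 local maximum — can be checked directly; excluding the image border is an assumption of this sketch:

```python
def is_local_max(f, k, l):
    """True if f[l][k] is not less than each of its 8 neighbors."""
    if not (0 < l < len(f) - 1 and 0 < k < len(f[0]) - 1):
        return False                      # skip image borders in this sketch
    v = f[l][k]
    return all(v >= f[l + dl][k + dk]
               for dl in (-1, 0, 1) for dk in (-1, 0, 1)
               if (dl, dk) != (0, 0))
```

Note the comparison is "not less than", so plateau points (equal neighbors) still qualify, matching the definition above.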

(2) P is in the clear region of ridges and c(P)<p_c. This means the curvature at P is smaller than p_c.

(3) There is no traced line in the directions d(P)+π/2 and d(P)-π/2 within distance gp.

For every such point P, the valley is traced in the two initial directions d(P) and d(P)+π, respectively. The minutiae are detected from the conditions that stop a trace: if the trace is stopped by stopping condition (3), a valley bifurcation has been found; if it is stopped by stopping condition (4), a valley ending has been found.

<3> Connect any two terminals of traces (a terminal being the start or end of a trace) if:

(1) The two last directions of the traces are opposite to one another;

(2) The positions of two terminals of traces are very close;

(3) The average gray level between them is higher than p_ag.

<4> gp=gp-1; if gp>gap1, then goto <2>;

The parameters p_f, p_c, gap1 and gap2 are all experimentally determined constants. Each minutia found is described by three attributes: x, y and θ. The coordinates x and y are the position of a trace terminal; the direction θ is equal to whichever of d[y][x] and d[y][x]+π is closer to the last direction in the trace.
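The θ attribute rule — take whichever of d[y][x] and d[y][x]+π lies angularly closer to the last tracing direction — can be sketched as:

```python
import math

def minutia_theta(d_val, last_dir):
    """Pick d_val or d_val + pi, whichever is angularly closer
    (modulo 2*pi) to the trace's last direction."""
    def dist(a, b):
        z = abs(a - b) % (2 * math.pi)
        return min(z, 2 * math.pi - z)
    return min((d_val, d_val + math.pi), key=lambda t: dist(t, last_dir))
```

This resolves the 180-degree ambiguity of the ridge direction array, which only stores directions in [0, π).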

An example of extracting minutiae by tracing valleys on a gray level image of a fingerprint is shown in FIG. 25, where the fingerprint is the same as in FIGS. 21 and 22 and the gray level is reversed.

3.10 Quality Level And Vector

The features of a fingerprint may be affected by many factors, for example the noise level, the effective area of the clear region, the position of the center, the number of minutiae, and so on. Sometimes these factors are due to the quality of the finger itself, while at other times they are due to the impression or the input device. A quality level q_l should be provided after image processing in order to make possible an automated or operator-controlled decision whether to accept, reject or re-input the fingerprint image, or if possible to take a new fingerprint impression. The quality level can be described in detail by a quality vector:

q_v=(q_n, q_a, q_p, q_m, q_h);

where each factor is calculated as follows:

<1> The noise level q_n refers to the average curvature over the whole clear region, i.e.

q_n = f_q_n(a_c);

where

a_c = Σ(c(X), X, {X|c(X)<1}) / #{X|c(X)<1};

and f_q_n(z) is an increasing function of z in the range 0 to 1.

In the embodiment f_q_n is defined so that

q_n = { 0, when a_c ≤ c1;
        1, when a_c > c1 & a_c ≤ c2;
        2, when a_c > c2 & a_c ≤ c3;
        3, when a_c > c3;

where c1, c2 and c3 are all predetermined experimental values. If the average curvature of an image is very small, then q_n=0, i.e. the image quality is good; if the average curvature is large, the noise in the image will affect the processing, and q_n will equal 1, 2 or 3 depending on the noise.
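The q_n thresholding reads as a simple piecewise map; the values of c1 < c2 < c3 below are illustrative stand-ins for the predetermined experimental values:

```python
def noise_level(a_c, c1=0.1, c2=0.3, c3=0.6):
    """Map average curvature a_c in [0, 1] to q_n in {0, 1, 2, 3};
    0 means a clean image, 3 a very noisy one."""
    if a_c <= c1:
        return 0
    if a_c <= c2:
        return 1
    if a_c <= c3:
        return 2
    return 3
```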

<2> The effective area q_a represents the number of global features that define a direction; in the case of Equation (9),

q_a = f_q_a(#{x | (x in gf) & (x < p_π)});

where the function f_q_a(z) maps the count z to a level of 0, 1, 2 or 3. When q_a equals 0, the quality of the image is good; otherwise, the larger the value, the worse the quality.
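Counting the defined directions in gf is straightforward; the level bounds below are hypothetical, since the patent leaves the exact f_q_a to the implementation:

```python
def effective_area_level(gf, p_pi=252, bounds=(500, 400, 300)):
    """q_a from the number of global features with a defined direction
    (byte value < p_pi, 255 meaning undefined); more defined points
    means a better (lower) level."""
    n = sum(1 for row in gf for v in row if v < p_pi)
    for level, b in enumerate(bounds):
        if n >= b:
            return level
    return 3
```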

<3> q_p depends on whether the position of the center (k0,l0) is in the central region CR of the image, i.e.

q_p = (is (k0,l0) in CR) ? 0 : 1;

In the embodiment,

CR = {(k,l) | (L/4 < l < 2·L/3) & (K/3 < k < 2·K/3)}.

<4> q_m depends on the number of minutiae nm and their average quality a_mq, i.e.

q_m = { 0, if nm < p_nm;
        f_q_m(a_mq), otherwise;

where p_nm is a predetermined threshold and f_q_m(z) is an increasing function of z. In the embodiment, p_nm=18 and f_q_m(z)=z/4.

<5> q_h is a help level that represents the reliability of the center C, based on the average curvature around C:

q_h = f_q_h(Σ(c(X), X, NC)/#NC);

where

NC = {X | |X-C| ≥ p_r1 & |X-C| < p_r2};

and f_q_h(z) is an increasing function of z.

Finally,

q_l = q_n + q_a + q_p + q_m + q_h;

It is anticipated that the invention will be implemented by means of a general purpose digital computer system programmed in accordance with the algorithms described above, provided with an appropriate graphics input device capable of scanning a fingerprint image and inputting gray level image point brightness values, and with an output device for displaying, printing and writing the results of the image processing procedures.

In an embodiment of the above method, the following parameters may have values in the ranges specified below:

Section   Parameter   Range

3.2       r           [5, 30]
          p_v         [4·r·r, 60·r·r]
3.3       p_n1        [1, 5]
          p_c_p       [0.5, 1]
3.4       p_c1        [0.6, 1]
          p_ω0        [1.5, 2.5]
          p_ω1        [0.5, 1.5]
          p_ω2        [0.5, 1.5]
          p_ω3        [0.5, 1.5]
          p_c2        [0.6, 1]
          p_n2        [1, 10]
          p_dd        [0.1, 0.6]
3.5       p_d1        [0.2, 0.8]
          p_d2        [0.5, 1.5]
          p_d3        greater than 0
          p_l         [5, 15]
          p_ac        [0.6, 1]
3.6       p_c3        [0.6, 1]
          p_k         [1, 10]
          p_d4        [10, 100]
3.8       p_c4        [0.6, 1]
3.9       p_ag        [0, 7]
          p_ga        [0.01, 0.1]
3.10      p_nm        [8, 20]
          p_l2        [5, 20]
          p_r1        [1, 10]
          p_r2        [10, 20]

The invention thus provides a method for calculating the global difference between two stripe patterns by means of their global features, used in finer classification and search.

While the description above refers to particular embodiments of the present invention, it will be understood that many modifications may be made without departing from the spirit thereof. The accompanying claims are intended to cover such modifications as would fall within the true scope and spirit of the present invention.

The presently disclosed embodiments are therefore to be considered in all respects as illustrative and not restrictive, the scope of the invention being indicated by the appended claims rather than the foregoing description, and all changes which come within the meaning and range of equivalency of the claims are therefore intended to be embraced therein.

Claims

WHAT IS CLAIMED:
1. An automatic method for identifying an image having varying brightness values constituting a pattern of stripes containing minutiae, comprising:
dividing the image into a rectangular matrix of image points and providing a representation of the gray level value of the brightness at each image point;
transforming the representations into a selected brightness range;
calculating the average direction of local texture at each point on the image with a function of gradient models in a neighborhood of the point;
calculating the curvature or the inconsistency of directions of local texture at each point on the image with a function of gradient models in a neighborhood of the point;
separating a useful portion of the image, which contains an accurate representation of the pattern of stripes, from noisy background;
extracting the global features of the image to represent the direction array by selecting some points on the image and storing the directions at these points;
finding singularities in the useful portion by comparing curvature values of selected matrix points in the useful portion;
producing representations of the shapes of selected stripes of the image based on the curvature and direction values;
locating a standardized coordinate axis system having an origin on the image by locating an intersection of lines normal to selected stripes in a selected region of the image;
producing representations of selected characteristics of the shape of one of the selected stripes on the basis of the distance between the origin of the coordinate axis system and points on the selected stripe in selected directions from the origin, and producing a representation of a global feature function of the fingerprint based on a set of values each representing a difference between the direction from the origin of the coordinate axes to a point on a stripe and the direction of the stripe pattern at that point;
determining the locations of minutiae in the image on the coordinate axis system from the transformed representations of the gray level values of the brightness at each image point; and
producing a representation of the quality level of the image.
2. The method defined in claim 1 wherein the image represents a fingerprint.
3. The method defined in claim 2 comprising employing a quick recurrent algorithm for calculating the direction and curvature arrays, with the same size as the image, by the steps of:
(1) Calculating four gradient model arrays about 0, 45, 90 and 135 degrees respectively;
(2) Calculating the average gradient models in each neighborhood recurrently;
(3) Using tables instead of arc-tangent operations, wherein the curvature is a measure of the accuracy of the direction at the same point.
4. The method defined in claim 2 comprising segmenting an octagonal clear region of fingerprint ridges from background and noise by means of eight straight lines according to the curvature array.
5. The method defined in claim 2 comprising analyzing the ridge trends around a point in the image by means of the difference between the local direction and the vector direction at each point on digital circles of various radii, and deriving a Fourier transform and inverse transform for deciding the forkedness by power spectrum and finding the trends by filtered differences.
6. The method defined in claim 2 comprising performing line tracing based on direction and curvature values by accumulating errors of coordinates and directions to correct the trace for extracting contour lines, shape lines and normal lines.
7. The method defined in claim 2 wherein the coordinate axis is located by locating the center and central orientation of the fingerprint macroscopically by means of a vault line and normal lines.
8. The method defined in claim 2 comprising locating the 'a' core of a plain arch and its main trend macroscopically, in a manner consistent with other types of fingerprints.
9. The method defined in claim 2 wherein the shape features for both loops and whorls are extracted from shape lines, the shape features of a whorl being composed of two parts of two loops referring to two deltas respectively.
10. The method defined in claim 2 comprising producing a general classification by means of relations among shape lines and fine classification by means of shape features.
11. The method defined in claim 2 comprising identifying global features by angle values at points selected from the direction array, the points being located spatially on a lattice or on several circles with a common center.
12. The method defined in claim 2 wherein the minutiae are located in terms of endings or bifurcations of valleys.
13. The method defined in claim 12 wherein valleys are traced and minutiae located on the gray level image by means of contour tracing referring to gray levels, comprising the steps of:
(1) selecting a starting point for valley tracing,
(2) connecting two terminals of traces which satisfy certain conditions,
(3) changing the gap of valleys.
14. The method defined in claim 2 wherein the quality level of the fingerprint is determined on the basis of quality vectors referring to the position of the center, the number of minutiae, the noise level, and the area of the clear region, in order to decide automatically, or to suggest to the operator, acceptance, rejection or re-input of the fingerprint image.
15. The method defined in claim 2 further comprising calculating the global difference between two fingerprints by means of their global features, used in finer classification and search.
PCT/US1992/008446 1991-10-07 1992-10-06 Method and system for detecting features of fingerprint in gray level image WO1993007584A1 (en)
