CA2671561A1 - Method and apparatus for extraction and matching of biometric detail - Google Patents


Info

Publication number
CA2671561A1
CA2671561A1 · CA002671561A · CA2671561A
Authority
CA
Canada
Prior art keywords
distance
filters
key
recited
image
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Abandoned
Application number
CA002671561A
Other languages
French (fr)
Inventor
Peter M. Meenen
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Snowflake Technologies Corp
Original Assignee
Snowflake Technologies Corporation
Peter M. Meenen
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Snowflake Technologies Corporation, Peter M. Meenen filed Critical Snowflake Technologies Corporation
Publication of CA2671561A1
Legal status: Abandoned


Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/14Vascular patterns

Abstract

A method and apparatus for identifying individuals by extracting and matching biometric detail from a subcutaneous vein infrared image. The image's Region of Interest is identified and artifacts are removed. A bank of filters, such as Symmetric Gabor Filters, Complex Gabor Filters, Log Gabor Filters, Oriented Gaussian Functions, or Wavelets, filters the image into a set of key value images that are subdivided into regions. An enrollment key, defined by ordered statistical measures of pixel intensities within the regions, is compared using a distance metric to a stored verification key. Various statistical measures may be used, such as variance, standard deviation, mean, absolute average deviation, max value, min value, max absolute value, median value, or a combination of these statistical measures. Various distance metrics may be used, such as Euclidean, Hamming, Euclidean Squared, Manhattan, Pearson Correlation, Pearson Squared Correlation, Chebychev, or Spearman Rank Correlation.

Description

[0001] Method and Apparatus for Extraction and Matching of Biometric Detail

TECHNICAL FIELD

[0015] The present invention relates, in general, to identification of individuals using biometric information and, in particular, to identification and authentication of individuals using subcutaneous vein images. From a technical point of view, the invention addresses a method of identifying a person by extracting and matching biometric detail from a subcutaneous vein infrared image of a person.

[0020] Biometrics, which refers to identification or authentication based on physical or behavioral characteristics, is being increasingly adopted to provide positive identification with a high degree of confidence, and it is often desired to identify and/or authenticate the identity of individuals using biometric information, whether by 1:1 (one to one) authentication or 1:n (one to many) matching/identification. It shall be understood that the terms "identify" and "identifying", as used herein, refer both to authentication (verification that a person is who he or she purports to be) and to identification (determining which of a set of possible individuals a person is). Prior art solutions are known that use biometric information from iris images, images of palm print creases, and fingerprint images.
[0030] Daugman, U.S. Patent 5,291,560 (issued March 1, 1994), discloses performing biometric identification using analysis of oriented textures in iris images with a Hamming Distance metric, and it is known to use fixed-length keys when performing biometric identification based upon iris images.
[0040] Zhang et al., U.S. Patent Application Publication No. 2005/0281438 (published December 22, 2005), discloses biometric identification using analysis of images of palm print creases with a neurophysiology-based Gabor Filter and an angular distance metric.
[0050] Jain, A.K.; Prabhakar, S.; Hong, L.; and Pankanti, S., "Filterbank based Fingerprint Matching", IEEE Trans. on Image Processing, pp. 846-859 (Vol. 9, No. 5, May 2000), discloses biometric identification with limited success using analysis of fingerprint images with Gabor Filters and a Euclidean distance metric.
[0060] Lee, Chih-Jen; and Wang, Sheng-De, "A Gabor Filter-Based Approach to Fingerprint Recognition", 1999 IEEE Workshop on Signal Processing Systems, pp. 371-378 (Oct. 1999), discloses using a Gabor filter-based method to do local ridge orientation, core point detection, and feature extraction for fingerprint recognition.
[0070] Jain, A.K.; Prabhakar, S.; Hong, L.; and Pankanti, S., "FingerCode: A Filterbank for Fingerprint Representation and Matching", Proc. IEEE Conf. on CVPR, pp. 187-193 (Vol. 2, June 23-25, 1999), discloses using a bank of Gabor filters to capture fingerprint details and performing fingerprint matching based on a Euclidean distance metric.
[0080] Prabhakar, S., "Fingerprint Classification and Matching Using a Filterbank", Ph.D. Dissertation, Michigan State University (2001), discloses feature extraction and filterbank-based matching of fingerprints using various algorithms.
[0090] Jain, A.K.; Prabhakar, S.; and Hong, L., "A Multichannel Approach to Fingerprint Classification", IEEE Transactions on PAMI, pp. 348-359 (Vol. 4, April 1999), discloses classifying fingerprints by filtering an image of a fingerprint by a bank of Gabor filters with a two-stage classification.
[0100] Horton, M.; Meenen, P.; Adhami, R.; and Cox, P., "The Costs and Benefits of Using Complex 2-D Gabor Filters in a Filter-Based Fingerprint Matching System", Proceedings of the Thirty-fourth Southeastern Symposium on System Theory, pp. 171-175 (March 18-19, 2002), discloses applying two-dimensional Gabor filters to fingerprint images for matching fingerprints.
[0110] Zeman et al., U.S. Patent Application Publication No. 2006/0122515 (published June 8, 2006); Zeman, U.S. Patent Application Publication No. 2004/0111030 (published June 10, 2004); and Zeman, U.S. Patent 6,556,858 (issued April 29, 2003), fully incorporated herein by reference, disclose using infrared light to view subcutaneous veins, with subsequent re-projection of the vein image onto the surface of the skin, but do not disclose identification or authentication of individuals using the vein images.
[0120] Cross, J.M.; and Smith, C.L., "Thermographic Imaging of the Subcutaneous Vascular Network of the Back of the Hand for Biometric Identification", Proc. IEEE 1995 Int'l Carnahan Conference on Security Technology, pp. 20-35 (Oct. 18-20, 1995), discloses making an infrared image of subcutaneous veins on the back of the hand and then segmenting the vein pattern to obtain a medial axis representation of the vein pattern. Contrast enhancement, filtering to remove hair and artifacts, and separation of the hand from a background is disclosed. The medial axis representations are compared against stored signatures in a database.
[0130] Im, S.; Park, H.; Kim, S.; Chung, C.; and Choi, H., "Improved Vein Pattern Extracting Algorithm and Its Implementation", Int'l Conf. on Consumer Electronics - Digest of Technical Papers, pp. 2-3 (June 13-15, 2000), discloses extracting a region of interest ("ROI") from a vein image, using a Gaussian low-pass filter on the ROI image, and using a modified median filter to remove noise in the image caused by hair, curvature, and thickness of fatty substances under the skin.
[0140] Lin, C.; and Fan, K., "Biometric Verification Using Thermal Images of Palm-Dorsa Vein Patterns", 14 IEEE Trans. on Circuits and Systems for Video Tech., pp. 199-213 (Feb. 2004), discloses obtaining thermal images of palm-dorsa vein patterns, extracting a region of interest ("ROI"), and using moment filters to extract feature information about intensity, gradient, and direction features.
[0150] Tanaka, T.; and Kubo, N., "Biometric Authentication by Hand Vein Patterns", SICE Annual Conf. in Sapporo, pp. 249-253 (Aug. 4-6, 2004), discloses obtaining near-infrared hand vein images, contrast-enhancing the images, and using phase-only correlation and template matching as a recognition algorithm.
[0160] Zhang, Z.; Wu, D.Y.; Ma, S.; and Ma, J., "Multiscale Feature Extraction of Finger-Vein Patterns Based on Wavelet and Local Interconnection Structure Neural Network", Int'l Conf. on Neural Networks and Brain, pp. 1081-1084 (Oct. 2005), discloses obtaining near-infrared images of finger veins, and using multi-scale self-adaptive enhancement transforms on the images using a wavelet analysis. A neural network is iteratively trained to perform recognition.
[0170] MacGregor, P.; and Welford, R., "Veincheck: Imaging for Security and Personnel Identification", 6 Advanced Imaging, pp. 52-56 (1991), discloses using infrared images of back-of-hand subcutaneous vein patterns whose nodes and connectivity are mapped onto a hexagonal grid as a biometric identifier, using a histogram for verification.
[0180] Current vein-based biometric systems, as, for example, disclosed in Choi, U.S. Patent 6,301,375 (issued October 9, 2001), fully included by reference herein, utilize information such as points where veins intersect or cross, or, as disclosed in Clayden, U.S. Patent 5,787,185 (issued July 28, 1998), fully included by reference herein, utilize directionally-weighted vector representations of the veins, or other so-called "point-based" techniques well-known in the prior art.
[0190] A point-based vein biometric system can be defined as a system that performs biometric identification based on a selected series of critical points from a vein structure, for example, where the veins branch or where veins have maximal points of curvature. The typical approach to finding these points involves first segmenting the vein structure from the rest of the image. The segmented vein structure is then typically reduced to a binary image and subsequently thinned to a series of single pixel lines. From this thinned version of the vein structure, vein intersection points can be easily identified. Other features, such as line curvature and line orientation, are also easily determined. The positions of these critical points along with other measures describing them (for example orientation angle or curvature value) are arranged into a vector and stored. Because these systems often miss some points or detect new points when processing different images of the same vein structure, the vectors that are constructed are of variable length, which makes quick database searches difficult.
[0200] When performing point-based matching, the input point set is first compared to a reference point set during an alignment phase. This typically occurs through the use of an affine transform, or similar method. Following the alignment of the points, a search is conducted for approximate correspondences between points from different keys. The total maximum number of corresponding points between the two key vectors is determined and from this a score is calculated. The score is compared to a threshold value and a decision is made as to whether a match has occurred.
[0210] While these point-based techniques are usable, they pose many problems. Due to sensor noise and other negative factors, there is no guarantee that the same set of points will be extracted each time an individual is authenticated/identified. Thus, such prior art approaches must be flexible and allow for missing and added point locations, which prevents them from being able to construct fixed-length keys that are always ordered in a uniform manner. As a result, the matching process is drastically complicated and it becomes difficult to quickly search large databases using approaches taught by the prior art.
[0220] It is therefore desirable to have a method and apparatus for biometric identification and authentication that extracts biometric detail from vein images to form keys of fixed size and constant order so that key comparison may be quickly and efficiently performed. It is further desirable to reduce the computational difficulty of key comparison, and to improve the speed of matching, by using key subsets to identify possible match candidates, and then only performing full key comparisons on those possible match candidates.
[0230] None of these prior art references, either singly or in combination, disclose or suggest the present invention.

[0500] The present invention uses a series of filters to extract useful information from an image containing subcutaneous vein patterns. A region of interest ("ROI") of an image containing subcutaneous vein structures, obtained from a vein imaging device, is processed using a plurality of filters that are selective in both orientation and spatial frequency. Once processed, statistical measures are taken from a plurality of regions within each of the resulting filtered images. These statistical measures are then arranged in a specific order and used as a uniquely-identifying code that can be quickly and easily matched against other codes that were previously acquired. Due to the uniform key size and constant ordering of the values, a metric as simple as a Euclidean Distance or preferably a Pearson Correlation Distance may be used to determine key similarity.
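To make the matching step concrete, the sketch below compares two fixed-length keys with the two metrics named above. It is an illustrative sketch only: the four-element keys are made up (real keys hold one statistic per region per filter), and expressing the Pearson Correlation as a distance via 1 - r is an assumption, not a detail taken from the text.

```python
import math

def euclidean_distance(a, b):
    """Euclidean distance between two fixed-length, constant-order keys."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def pearson_distance(a, b):
    """Pearson correlation distance, taken here as 1 - r, so that
    identical (perfectly correlated) keys score 0."""
    n = len(a)
    mean_a, mean_b = sum(a) / n, sum(b) / n
    cov = sum((x - mean_a) * (y - mean_b) for x, y in zip(a, b))
    norm_a = math.sqrt(sum((x - mean_a) ** 2 for x in a))
    norm_b = math.sqrt(sum((y - mean_b) ** 2 for y in b))
    return 1.0 - cov / (norm_a * norm_b)

enrollment_key = [4.0, 9.0, 2.0, 7.0]
verification_key = [4.1, 8.8, 2.2, 7.1]   # a second image of the same hand
print(euclidean_distance(enrollment_key, verification_key))  # small for a match
print(pearson_distance(enrollment_key, verification_key))    # near 0 for a match
```

Because both keys are the same length and in the same order, no alignment or point-correspondence search is needed before the distance is computed.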

[0520] The present invention extracts detail from images of subcutaneous veins. This extracted detail is used to form a fixed-length key of statistical values that are then ordered in a preselected manner. The present invention enables rapid matching and searching of databases of fixed-length biometric keys generated by the present invention. One use for the present invention is for one to one and one to many biometric comparisons of subcutaneous vein patterns.
[0540] The present invention has numerous advantages. The method of the invention produces a fixed-length biometric key based on biometric detail extracted from subcutaneous vein images. The key, being of fixed length and in a constant order, permits rapid 1:1 (one to one) matching/authentication and makes the process of 1:n (one to many) matching/identification extremely simple.
[0560] The present invention also has the advantage of being able to simultaneously capture detail information relating to not only the position of veins, but also information relating to their size and orientation. This is due primarily to the fact that the filters applied to the image can be tuned in size, spatial frequency, and orientation. In any biometric system, the more information that can be captured relating to the feature in question, the better chance the system has of performing accurate matching for identification and authentication.
[0580] In the case of 1:n matching implementations, the present invention provides many benefits. First, since the key is of fixed size and in a constant order, the matching process is simpler, and as a result, matches can be performed more quickly. This allows a brute-force comparison with an entire database of keys to execute more quickly than would be possible under prior art approaches. The present invention also allows for a more refined searching approach through a quick reduction of the size of the database that must be searched by matching on a subset of the key rather than on the full key. For example, to quickly narrow the search field down to a smaller subset of records, a comparison can be performed using a smaller key generated from a subset of the filtered images such as, for example, a few strategically chosen filters. This results in fewer calculations than would have to be performed against all the records in the database. The full key can then be compared against the remaining records. In addition, by indexing the database of keys based upon specific features of the various sub-keys, comparisons against key values known to be substantially different from the key in question can be skipped.
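The two-stage 1:n search just described might be sketched as follows. The function names, the sub-key index set, and the screening threshold are all invented for illustration; the text does not fix them.

```python
import math

def euclidean(a, b):
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def identify(probe_key, database, subkey_idx, subkey_threshold):
    """Stage 1: screen the whole database with a cheap sub-key distance.
    Stage 2: full-key comparison against the surviving candidates only."""
    probe_sub = [probe_key[i] for i in subkey_idx]
    candidates = []
    for record_id, key in database.items():
        stored_sub = [key[i] for i in subkey_idx]
        if euclidean(probe_sub, stored_sub) <= subkey_threshold:
            candidates.append(record_id)
    # full-key comparison only on records that survived the screen
    return min(candidates,
               key=lambda r: euclidean(probe_key, database[r]),
               default=None)   # None: nothing passed the sub-key screen
```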
[0600] It is an object of the present invention to provide an apparatus and method for identifying a person by extracting and matching biometric detail from a subcutaneous vein image of the person. It is a further object of the present invention that the identification be rapid and efficient.

[1000] Fig. 1 is a schematic diagram of a preferred embodiment of the apparatus of the present invention, showing imaging of veins on a hand.
PCT/US2006/043091

[1010] Fig. 2 is a view of a region of interest ("ROI") of veins in an image.
[1020] Fig. 3 is a flowchart showing steps in the preferred embodiment of the method of the present invention.
[1030] Fig. 4 is a flowchart showing steps in the image preprocessing of Fig. 3.
[1040] Fig. 5 is a flowchart showing steps in the contrast enhancement of Fig. 4.
[1050] Fig. 6 is a graph of the one-dimensional Mexican Hat Wavelet for t = -32 to +32 and σ = 8.
[1060] Fig. 7 is a two-dimensional directional (oriented) filter constructed from the Wavelet shown in Fig. 6.
[1070] Fig. 8 shows a representative pre-processed image before filtering.
[1080] Fig. 9 shows the image of Fig. 8 after filtering with an Even-Symmetric Gabor Filter.
[1090] Fig. 10 shows the image of Fig. 8 after filtering with a two-dimensional Oriented Mexican Hat Wavelet Filter.
[1100] Fig. 11 shows how an image is processed into a key using the method of the present invention.
[1110] Fig. 12 is a flowchart showing steps in the key matching/verification.
[1120] Fig. 13A shows an image and Fig. 13B shows the resulting key produced by the image of Fig. 13A using the method of the present invention.
[1130] Fig. 14A shows another image from the same person as Fig. 13A but taken from a slightly different view, and Fig. 14B shows the resulting key produced by the image of Fig. 14A, showing how similar images (Figs. 13A and 14A) produce similar keys (Figs. 13B and 14B).
[1140] Figs. 15A and 15B are the same as Figs. 13A and 13B, and are for comparison purposes with the different image of Fig. 16A that produces the different key of Fig. 16B, showing how dissimilar vein patterns generate dissimilar keys.
[1150] Figs. 17A, 18A, 19A, and 20A are different images with respective keys 17B, 18B, 19B, and 20B, for purposes of showing how similar images generate similar keys. The images of Figs. 17A and 18A are somewhat similar, while the images of Figs. 19A and 20A are very different from each other and from the images of Figs. 17A and 18A.
[1160] Figs. 21, 22, 23, and 24 show the match scores for the keys of Figs. 17B, 18B, 19B, and 20B using various distance metrics. All match scores except those for the Pearson Correlation (Fig. 24) are normalized (divided) by the length of the key.
[1170] Fig. 25 shows comparison of two key subsets ("sub-keys") using the keys shown in Figs. 17B and 18B.
[1180] Fig. 26 shows comparison of one key subset against key subsets in a database of stored keys.
[1190] Fig. 27 shows rapid evaluation of one key subset against a database of stored keys indexed by subset key value counts to determine eligible keys for subsequent distance comparison.
[1200] Fig. 28 shows the combination of the techniques of Figs. 26 and 27 for rapid evaluation of one key subset against a database of stored keys indexed by subset key value counts, which determines subsets eligible for subsequent key subset distance comparison, which determines keys eligible for full key distance comparison.

[2000] It is known in the prior art that skin and some other body tissues reflect infrared light in the near-infrared range of about 700 to 900 nanometers, while blood absorbs radiation in this range. Thus, in video images of body tissue taken under infrared illumination, blood vessels appear as dark lines against a lighter background of surrounding flesh. However, due to the reflective nature of subcutaneous fat, blood vessels that are disposed below significant deposits of such fat can be difficult or impossible to see when illuminated by direct light, that is, light that arrives generally from a single direction.
[2010] When an area of body tissue having a significant deposit of subcutaneous fat is imaged in the near-infrared range under illumination of highly diffuse infrared light, there is significantly higher contrast between the blood vessels and surrounding flesh than when the tissue is viewed under direct infrared illumination. It appears that most of the diffuse infrared light reflected by the subcutaneous fat is directed away from the viewing direction. Thus, when highly diffuse infrared light is used to illuminate the tissue, the desired visual contrast between the blood vessels and the surrounding flesh is maintained. It should be noted that the infrared illumination can be reflective or transmitted, and that equivalent results can be achieved by illuminating the tissue with broad-spectrum light and then filtering out light that is outside the infrared before capturing an image of the illuminated tissue.
[2020] Briefly, before the details are fully explained, the method of the preferred embodiment of the invention has the steps shown in Fig. 3, and the apparatus of the present invention also operates according to the flow chart shown in Fig. 3. The steps include an input image 24 for which a region of interest ("ROI") 30, preferably 400 x 400 pixels, has been identified and cropped; subdivision of the ROI into a tessellation pattern (such as preferably a 20 pixel x 20 pixel square grid region, but polar sector regions, triangular regions, rectangular regions, or hexagonal regions, etc., may also be used) so as to form a plurality of regions; a bank of filters 40 (which can be selected from a wide variety of compatible filter types including, as described herein, Symmetric Gabor Filters, Complex Gabor Filters, Log Gabor Filters, Oriented Gaussian Functions, Adapted Wavelets (as that term is defined and used herein), etc.); and a statistical measure formed for each region in the tessellation pattern (preferably, statistical variance of the pixel intensities in the region, but there are numerous other statistical measures that could be used, including standard deviation, mean, absolute average deviation, max value, min value, max absolute value, and median value). It should also be noted that filters using the same orientations but different frequency values can be used to bring out details of different sizes, i.e., larger and smaller veins. Also, more than one of these statistical measures could be used to increase the key size and to improve accuracy. For example, variance and mean values could be taken for each area and together arranged into a key. Finally, a comparison metric is used to compare the distance between an enrollment key and a stored verification key.
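The tessellation-and-statistics step above can be sketched as follows, here with NumPy, using the preferred 20 x 20 square grid and statistical variance. The region size, choice of statistic, and function names are parameters of the illustration, not fixed by the invention.

```python
import numpy as np

def region_statistics(filtered, region=20, stat=np.var):
    """Tessellate one filtered image into region x region squares and
    take a statistical measure (variance by default) of the pixel
    intensities in each square, in raster order."""
    h, w = filtered.shape
    values = []
    for r in range(0, h - h % region, region):
        for c in range(0, w - w % region, region):
            values.append(stat(filtered[r:r + region, c:c + region]))
    return values

def build_key(filter_outputs, region=20):
    """Concatenate region statistics from every filtered image, always in
    the same order, forming the fixed-length biometric key."""
    key = []
    for filtered in filter_outputs:   # e.g. the eight Gabor filter outputs
        key.extend(region_statistics(filtered, region))
    return np.array(key)
```

With a 400 x 400 ROI, 20 x 20 regions, and eight filters, this yields a fixed-length key of 8 x 400 = 3200 values, always in the same order.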
[2030] Referring to the figures of the drawings and especially to Figs. 1-5, apparatus 20 for identifying a person is seen to include means 22, such as a well-known infrared camera, for capturing a subcutaneous vein infrared image 24 of the person. It shall be understood that the term "veins", as used herein, is used to generically refer to blood vessels such as capillaries, veins, and arteries. A portion of the person, such as a hand 26, is illuminated by infrared light and then the image is captured by a camera 22 or other well-known sensing device. Alternatively and equivalently, broad-spectrum light (such as room light) could be used for illumination, and a suitable infrared filter could be placed in front of camera 22 to cause only the infrared image to be seen by camera 22. Camera 22 may include, as is well-known to those skilled in the art, charge-coupled device ("CCD") elements, as are often found in well-known CCD / CMOS cameras, to capture the image and pass it on to a well-known computer 28.
[2040] While the present disclosure uses the example of vein patterns on the back of the hand for purposes of illustration, it should be understood that the present invention is easily adapted to work on other parts of the body where subcutaneous veins may be viewed.
[2050] After the image 24 has been captured, preprocessing 35 is preferably performed on the image, and the preferred preprocessing steps of Fig. 3 are shown in greater detail in the flowchart of Fig. 4. In the preprocessing phase 35, a region of interest ("ROI") 30 of the image 24 is identified for which processing will be performed. Preferably, this is done by segmenting the person's body part, e.g., hand 26, from any background image. Then, certain landmarks or feature locations, such as fingertip points 32 or preferably points 34 on the inter-finger webbing, are located. Fingertip points 32 are less desirable for landmarks than are inter-finger webbing points 34 because finger spread will cause greater variability of the location of fingertip points 32 with respect to ROI 30 than of the location of inter-finger points 34.
[2060] Based on the locations of these landmarks, the image is adjusted to a pre-defined location and orientation, preferably with adjustments for scaling, horizontal and vertical translation, and rotation, so that the ROI 30 will present a uniform image area of interest for evaluation. Once the image 24 has been thus scaled, translated, and oriented, the fixed ROI area 30 is defined, extracted from the image, and the remainder of the image 24 is discarded.
[2070] More specifically, the ROI 30 is identified as follows: First, the raw image 24 is received from the image capture means 22. Preferably a dark background is placed below the imaged hand 26 so that background pixels will be darker than foreground pixels of the hand. A histogram of the image 24 is taken, and the histogram is analyzed to find two peaks, one peak near zero (the background) and one peak in the higher-intensity range (the hand), and an appropriate threshold is determined that will discriminate the two peaks. If more than two peaks are found, the bin count is reduced by 5 until a two-peak histogram is achieved. If two peaks still cannot be found, a default threshold of 10 greater than the minimum pixel intensity (i.e., somewhat above the background intensity) is used. When two peaks are found, the threshold is set to the minimum value between these two peaks, and a binary mask is created with pixels having an intensity above the threshold being set to 1 and those equal or below the threshold set to 0.
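A simplified sketch of this two-peak thresholding follows. The peak detector used here (a bin strictly higher than both neighbours) is an assumption, since the text does not pin down how peaks are detected; the bin-count reduction by 5 and the minimum-plus-10 fallback are taken from the text.

```python
import numpy as np

def segment_hand(image, bins=256):
    """Binary hand mask from a dark-background infrared image, using the
    valley between two histogram peaks when exactly two can be found,
    otherwise the fallback threshold of minimum intensity + 10."""
    while bins > 2:
        hist, edges = np.histogram(image, bins=bins)
        peaks = [i for i in range(1, len(hist) - 1)
                 if hist[i] > hist[i - 1] and hist[i] > hist[i + 1]]
        if len(peaks) == 2:
            # threshold at the minimum (valley) between the two peaks
            valley = peaks[0] + int(np.argmin(hist[peaks[0]:peaks[1] + 1]))
            threshold = edges[valley]
            break
        bins -= 5   # coarsen the histogram until only two peaks remain
    else:
        threshold = image.min() + 10   # fallback described in the text
    return (image > threshold).astype(np.uint8)
```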
[2080] The inter-finger points 34 are then located by tracing along the edge of the binary hand outline while noting local maximum and minimum elevations, where a "high" elevation is defined as being closer to the top of the image (using an orientation for the image capture means 22 such that the fingertips 32 are generally oriented toward the "top" of the image). A minimum elevation (closer to the "bottom" of the image) between two maximum elevations thus indicates an inter-finger point 34 on the inter-finger webbing. Once the inter-finger points 34 are located, an affine transform is used to rotate and scale the image 24 to match a set of pre-determined standardized points, with transform coefficients being determined using a least-squares mapping between the points on the imaged hand and the pre-determined standardized points.
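The least-squares mapping can be written directly as a linear system. This sketch (NumPy, with invented function names) solves for the six affine coefficients from corresponding landmark points.

```python
import numpy as np

def fit_affine(src, dst):
    """Least-squares affine transform mapping landmark points src (e.g.
    detected inter-finger points) onto standardized points dst.
    Returns a 2 x 3 matrix A such that [x', y'] = A @ [x, y, 1]."""
    src, dst = np.asarray(src, float), np.asarray(dst, float)
    X = np.hstack([src, np.ones((len(src), 1))])      # n x 3 design matrix
    coeffs, *_ = np.linalg.lstsq(X, dst, rcond=None)  # 3 x 2 solution
    return coeffs.T                                   # 2 x 3

def apply_affine(A, pts):
    pts = np.asarray(pts, float)
    return (A @ np.hstack([pts, np.ones((len(pts), 1))]).T).T
```

At least three non-collinear point pairs are needed; with more, the least-squares fit averages out landmark detection noise.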
[2090] The region of interest (ROI) 30 is then determined to be a 400 x 400 pixel region anchored at the inter-finger point 34 furthest from the thumb, noting that the thumb's inter-finger point has the lowest elevation in the y direction. To ensure an extra image border for padding in future filtering steps, a band of 100 pixels is preserved around the outside of the ROI when such a band of bordering pixels is available on the image. If there are fewer than 100 bordering pixels outside the ROI, existing pixels within the ROI are reflected to fill in these gaps of missing pixels.
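The 100-pixel reflected border can be sketched with NumPy's reflect-mode padding. The anchor handling below is a simplification (the top-left-corner placement is an assumption; the text does not specify exactly how the ROI sits relative to the anchor point):

```python
import numpy as np

def extract_padded_roi(image, anchor_row, anchor_col, size=400, pad=100):
    """Crop a size x size ROI whose top-left corner is at (anchor_row,
    anchor_col), keeping a pad-pixel border for later filtering; where
    the image runs out of pixels, the border is filled by reflection."""
    top, left = anchor_row - pad, anchor_col - pad
    bottom, right = anchor_row + size + pad, anchor_col + size + pad
    # how far the padded window overhangs the image on each side
    before_r, after_r = max(0, -top), max(0, bottom - image.shape[0])
    before_c, after_c = max(0, -left), max(0, right - image.shape[1])
    padded = np.pad(image, ((before_r, after_r), (before_c, after_c)),
                    mode="reflect")
    return padded[top + before_r: bottom + before_r,
                  left + before_c: right + before_c]
```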
[2100] Apparatus 20 is thus seen to include means 36 for identifying a region-of-interest.
[2110] Optionally, but preferably, the ROI image 30 may also be preprocessed (filtered) to remove artifacts in the image such as hair, as by well-known artifact removal means 38, to create an artifact-removed image 24'. This is not only to provide a more stable image for comparison, but also to prevent attempts by individuals to avoid identification by shaving the hair off of parts of their body. There are many ways to do artifact removal, but an adequate approach has been found to be by use of a simple and well-known 10 x 10 median filter. While this causes loss of some vein detail, the veins on the back of the hand (where hair removal is most needed) are large enough to easily survive a filter of size 10 x 10.
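A naive median filter is enough to show the effect. This sketch is pure NumPy and O(size²) per pixel, so a real implementation would use an optimized routine (e.g. scipy.ndimage.median_filter):

```python
import numpy as np

def median_filter(image, size=10):
    """size x size median filter: suppresses thin artifacts such as hair
    while the wider veins on the back of the hand survive."""
    pad = size // 2
    padded = np.pad(image, pad, mode="edge")
    out = np.empty(image.shape, dtype=float)
    h, w = image.shape
    for r in range(h):
        for c in range(w):
            out[r, c] = np.median(padded[r:r + size, c:c + size])
    return out
```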
[2120] As a part of this preprocessing to remove artifacts, adaptive contrast enhancement 39 is then preferably performed on the image 24' using steps as shown in detail in Fig. 5. An algorithm is used that first applies a blur filter 41 to the image to create a blurred version of the image, and then this blurred image is subtracted from the original image to create an unsharp version 24a of the image. The absolute value 43 is then taken of this unsharp image 24a, the absolute value processed image is then blurred 46, and the original unsharp image is divided 48 (point by point) by this blurred absolute value image, producing a contrast-enhanced image 24". Additional image smoothing may also be used to clean up image artifacts produced by the hair removal. The "Pre-Processed Image" shown in Fig. 11 is an example of an initial image, shown at the top of Fig. 11, to which only contrast enhancement preprocessing has been done without removal of hair artifacts. Image pre-processing, while preferred, is not essential to the present invention.
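The blur / subtract / absolute-value / blur / divide pipeline of Fig. 5 can be sketched as below. The box blur and its window size stand in for whatever blur filter 41 the implementation actually uses, and the small epsilon guarding against division by zero in flat regions is an added assumption.

```python
import numpy as np

def box_blur(image, size=15):
    """Simple box blur; a stand-in for the unspecified blur filter 41."""
    pad = size // 2
    padded = np.pad(image.astype(float), pad, mode="edge")
    out = np.empty(image.shape, dtype=float)
    h, w = image.shape
    for r in range(h):
        for c in range(w):
            out[r, c] = padded[r:r + size, c:c + size].mean()
    return out

def adaptive_contrast_enhance(image, size=15, eps=1e-6):
    """Blur and subtract to get the unsharp image, then divide it
    point by point by a blurred copy of its absolute value."""
    unsharp = image.astype(float) - box_blur(image, size)
    local_scale = box_blur(np.abs(unsharp), size)
    return unsharp / (local_scale + eps)   # eps guards flat regions
```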
[2130] It has been observed, however, that the approach of the present invention is sensitive to changes in the positions of vein detail in the image. In other words, the images that are compared by the present invention must be aligned very closely in order to allow for optimal matching. A good alignment method in the pre-processing stage is essential so that the ROI 30 location will be consistent. Proper alignment and location of the ROI 30 has been determined to be straightforward to accomplish on areas of the body that have a good set of landmark features, for example, the face, hands, etc., and feature matching is well-known to those skilled in the art.
[2140] With the pre-processing stage 39 completed, the ROI portion 30 of the image is ready for the application of a first plurality of enhancement filters 40. It shall be understood that this plurality of filters 40 may be implemented serially, at the expense of greater elapsed filtering time, or in parallel, at the expense of greater concurrent processing requirements. The preferred embodiment, for each of the filters 40, uses an Even Symmetric Gabor filter, which is of the form:

$$g(x,y) = e^{-\frac{1}{2}\left[\frac{(x\sin\theta + y\cos\theta)^2}{S_x^2} + \frac{(x\cos\theta - y\sin\theta)^2}{S_y^2}\right]} \cos\!\bigl(2\pi f\,(x\sin\theta + y\cos\theta)\bigr)$$

where $g(x, y)$ is the spatial representation of the Gabor filter, $\theta$ is the orientation angle of the desired filter, $f$ is the desired spatial frequency, and $S_x$ and $S_y$ represent the standard deviations of the Gaussian envelope in the x and y directions respectively.
[2150] In the preferred embodiment, eight Gabor filters of size 65 pixels x 65 pixels are employed with the following filter parameters:

$$f = 40,\quad S_x = 16,\quad S_y = 16,\quad \theta = \{0°, 22.5°, 45°, 67.5°, 90°, 112.5°, 135°, 157.5°\}$$

These eight filters are then independently applied to the pre-processed image 24" to yield eight separate filter outputs 50 as shown in Fig. 11.
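The eight-filter bank of the preferred embodiment can be sketched as follows, using the even-symmetric Gabor equation given above. Treating f = 40 as a spatial period in pixels is an interpretive assumption (the specification does not state the units of f), and the FFT circular convolution is used only as a compact stand-in for ordinary 2-D filtering.

```python
import numpy as np

def even_gabor(size=65, f=40.0, sx=16.0, sy=16.0, theta=0.0):
    """Even-symmetric Gabor filter per [2140]; f is treated here as the
    spatial period in pixels (an assumption about units)."""
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    u = x * np.sin(theta) + y * np.cos(theta)   # coordinate along theta
    v = x * np.cos(theta) - y * np.sin(theta)   # coordinate across theta
    envelope = np.exp(-0.5 * ((u / sx) ** 2 + (v / sy) ** 2))
    return envelope * np.cos(2 * np.pi * u / f)

# The eight orientations of the preferred embodiment.
angles = np.deg2rad([0, 22.5, 45, 67.5, 90, 112.5, 135, 157.5])
bank = [even_gabor(theta=a) for a in angles]

def filter_outputs(img, bank):
    """Apply each filter independently (circular FFT convolution used as a
    compact substitute for ordinary 2-D spatial filtering)."""
    F = np.fft.fft2(img)
    return [np.real(np.fft.ifft2(F * np.fft.fft2(k, img.shape))) for k in bank]
```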
[2160] Filters other than Even Symmetric Gabor filters may be substituted for one or all of the filters 40, such as, for example, Complex Gabor Filters, Log Gabor filters, Oriented Gaussian Filters, and, as hereinafter explained in greater detail, filters constructed from Wavelets. Preferably all of the filters 40 are of the same class of filters but differing in orientation and/or frequency, and it shall be understood that other filters may be substituted that enhance image features for a given orientation angle. The well-known equations for several exemplary filters other than Even Symmetric Gabor filters will now be given.
[2170] The Complex Gabor Filter has the form:

$$g(x,y) = e^{-\frac{1}{2}\left[\frac{(x\sin\theta + y\cos\theta)^2}{S_x^2} + \frac{(x\cos\theta - y\sin\theta)^2}{S_y^2}\right]}\, e^{\,j 2\pi f\,(x\sin\theta + y\cos\theta)}$$

where $g(x, y)$ is the spatial representation of the Gabor filter, $\theta$ is the orientation angle of the desired filter, $f$ is the desired spatial frequency, and $S_x$ and $S_y$ represent the standard deviations of the Gaussian envelope in the x and y directions respectively.
[2180] The Log-Gabor filter is typically defined in the frequency domain. If spatial filtering is performed, the spatial filter is determined via inverse FFT. The frequency-domain representation of a 2-D Log-Gabor filter is:

$$LG(u,v) = e^{-\frac{\ln\!\left(\frac{r(u,v)}{\omega}\right)^{2}}{2\,\ln\!\left(\frac{\sigma_1}{\omega}\right)^{2}}}\; e^{-\frac{\theta(u,v)^{2}}{2\sigma_2^{2}}}$$

where $LG(u, v)$ is the frequency domain representation of the Log-Gabor filter, $\omega$ is the desired angular frequency, $r(u, v)$ represents the radius of a given point in the filter from the filter's center, $\theta(u, v)$ represents the desired orientation angle of the filter, $\sigma_1$ represents the spatial frequency bandwidth of the filter, and $\sigma_2$ represents the orientation bandwidth of the filter.
[2190] The Oriented Gaussian Filter has the form:

$$og(x,y) = e^{-\frac{1}{2}\left[\frac{(x\sin\theta + y\cos\theta)^2}{S_x^2} + \frac{(x\cos\theta - y\sin\theta)^2}{S_y^2}\right]}$$

where $og(x, y)$ is the spatial representation of the oriented Gaussian filter, $\theta$ is the orientation angle of the desired filter, $S_x$ is the filter bandwidth in the x direction, and $S_y$ is the filter bandwidth in the y direction.
[2200] A Gabor filter is a preferable implementation of the key generation filter for the present invention because it is very effective at providing directionally selective enhancement. Most common two-dimensional wavelets are not extremely useful as directionally-enhancing filters because they typically provide little to no ability to easily select an orientation direction. There are, however, several one-dimensional wavelets that can, as hereinafter described, be adapted in such a way as to make them useful as directional key generation filters for the present invention. The identities of some of these wavelets and the strategy that can be employed to adapt them for use with the present invention will now be explained. The term "Adapted Wavelets", as used herein, shall be understood to refer to one-dimensional wavelets adapted in accordance with the present invention, in a manner that will now be described in detail.

[2210] To be useful as key generation filters for the present invention, a wavelet filter must be capable of being oriented to a specific angle and must be scalable so that it can detect objects of varying size. By their very nature, wavelets are scalable and thus readily adaptable for detecting objects of varying size. Adaptation of a wavelet to be angularly selective for use with the present invention can be done in a manner that will now be described.
[2220] First, a one-dimensional wavelet is selected that has desired properties. For example, a Mexican Hat Wavelet has the following equation:

$$\psi(t) = \frac{2}{\sqrt{3\sigma}\,\pi^{1/4}}\left(1 - \frac{t^2}{\sigma^2}\right) e^{-\frac{t^2}{2\sigma^2}}$$

where $t$ is time and $\sigma$ is the standard deviation. This one-dimensional wavelet has a graph as shown in Fig. 6 for $t = -32$ to $32$ and $\sigma = 8$.
[2230] Then a two dimensional directional filter is created for an angle of zero degrees by repeating the one dimensional wavelet on every column of the two dimensional filter matrix as shown in Fig. 7.
[2240] Next, a rotation operator is applied to the filter to orient it in the desired angle. An example of such a rotation operator utilizes a basic rotation matrix, which is defined as:

$$\begin{bmatrix} x' \\ y' \end{bmatrix} = \begin{bmatrix} \cos\theta & -\sin\theta \\ \sin\theta & \cos\theta \end{bmatrix} \begin{bmatrix} x \\ y \end{bmatrix}$$

where $x'$ and $y'$ represent the new filter coordinates, $x$ and $y$ represent the current filter coordinates, and $\theta$ is the angle of rotation. By performing this rotation for each desired angle of orientation, a series of directionally-enhancing filters is thus constructed. When applied to the image, these filters have a result similar to that of the Gabor filter used in the preferred embodiment of the invention. For example, the following demonstrates a 135 degree filter constructed as just described. The result of filtering an image (shown in Fig. 8) using this filter is shown as Fig. 10 next to, for comparison, the results shown in Fig. 9 of using a similarly-oriented Gabor filter.
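The wavelet adaptation just described can be sketched as follows. Rather than literally resampling the zero-degree filter matrix with the rotation matrix, this sketch evaluates the wavelet along the rotated coordinate, which yields the same oriented filter without interpolation; the filter size and sigma defaults are illustrative, and the normalization constant is omitted since it does not affect the directional response.

```python
import numpy as np

def mexican_hat(t, sigma=8.0):
    """One-dimensional Mexican Hat wavelet (normalization omitted)."""
    return (1 - (t / sigma) ** 2) * np.exp(-t ** 2 / (2 * sigma ** 2))

def oriented_wavelet_filter(size=65, theta=0.0, sigma=8.0):
    """Directional 2-D filter per [2230]-[2240]: at theta = 0 the 1-D
    wavelet is repeated down every column of the filter matrix; rotation
    is realized by evaluating the wavelet along the rotated coordinate,
    equivalent to applying the basic rotation matrix to the filter grid."""
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    t = x * np.sin(theta) + y * np.cos(theta)  # rotated coordinate; = y at 0 degrees
    return mexican_hat(t, sigma)
```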
[2250] There are several one dimensional wavelets that will work with the previously described method of oriented two-dimensional wavelet filter generation from a one-dimensional wavelet. Some of these include:
[2260] The Mexican Hat Wavelet, which has an equation of the form:

$$\psi(t) = \frac{2}{\sqrt{3\sigma}\,\pi^{1/4}}\left(1 - \frac{t^2}{\sigma^2}\right) e^{-\frac{t^2}{2\sigma^2}}$$

where $t$ is time and $\sigma$ is the standard deviation.
[2270] The Difference of Gaussians Wavelet (which can be used to approximate the Mexican Hat Wavelet), which has an equation of the form:

$$\psi(t) = \frac{1}{\sigma_1\sqrt{2\pi}}\, e^{-\frac{(t-\mu_1)^2}{2\sigma_1^2}} - \frac{1}{\sigma_2\sqrt{2\pi}}\, e^{-\frac{(t-\mu_2)^2}{2\sigma_2^2}}$$

where $\sigma_1$ and $\sigma_2$ are standard deviations and $\mu_1$ and $\mu_2$ are mean values.
[2280] The Morlet Wavelet, which has an equation of the form:

$$\psi(t) = \left(1 + e^{-\sigma^2} - 2e^{-\frac{3\sigma^2}{4}}\right)^{-\frac{1}{2}} \pi^{-\frac{1}{4}}\, e^{-\frac{t^2}{2}} \left(e^{i\sigma t} - e^{-\frac{\sigma^2}{2}}\right)$$

where $t$ is time and $\sigma$ is the standard deviation.
[2290] Hermitian Wavelets, which are a family of wavelets of which the Mexican Hat is a member. The n-th Hermitian wavelet is simply the n-th derivative of a Gaussian, and has an equation of the form:

$$\psi_n(t) = (2n)^{-\frac{n}{2}}\, c_n\, H_n\!\left(\frac{t}{\sqrt{n}}\right) e^{-\frac{t^2}{2n}}$$

where $H_n$ represents the n-th Hermite polynomial and $c_n$ is given by:

$$c_n = \left(n^{\frac{1}{2}-n}\,\Gamma\!\left(n + \tfrac{1}{2}\right)\right)^{-\frac{1}{2}}$$
[2300] Additionally, discrete one-dimensional wavelets, such as the well-known Haar, Daubechies, Coiflet, and Symmlet wavelets, may also and equivalently be used. These wavelets are typically defined as a series of discrete values for which tables are well known to those skilled in the art.
[2310] The adapted wavelets heretofore described are intended to be examples of one-dimensional wavelets that can be adapted in accordance with the present invention to perform directional filtering enhancement of images in the manner heretofore described. Other one-dimensional wavelets having similar characteristics could be used in the manner heretofore described without departing from the spirit and scope of the present invention.
[2320] Now that the filters have been applied to the region of interest, an enrollment or first key 42 is generated.

[2330] The output of each filter is divided into a plurality of regions (20 x 20 pixels in the preferred embodiment), with each region having at least one pixel therewithin, and with each pixel having a pixel intensity. It should be understood that this region block size will change depending on the size of the features to be extracted.
[2340] For each region of each subdivided filter output, a statistical measure, preferably the statistical variance, of the pixel intensity values within the region is calculated. Note that, while the statistical measure used in the preferred embodiment is the statistical variance, many other statistical measures can be used, including standard deviation, mean, absolute average deviation, etc. In fact, it is possible to construct an enrollment key by using several of these measures together, yielding several statistical measures for each region. The important feature of the statistical measure is that areas of the image with high variance represent areas that were enhanced by the filter while areas of low variance were not. Thus, areas of high variance are statistically likely to represent the presence of a vein in an orientation similar to that of the filter. The magnitude of the variance is also an indicator of how closely the angle in which the vein is running matches the angle of the filter. Veins that run at an angle reasonably close to that of the filter will still show some response, but veins running at exactly the same angle will show a much larger response.
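The key generation just described can be sketched as follows, using the preferred 20 x 20 regions and the statistical variance; the function names and the row-major ordering of the regions are illustrative assumptions.

```python
import numpy as np

def key_from_outputs(outputs, block=20):
    """Enrollment key per [2330]-[2340]: each filter output is divided into
    block x block regions, the variance of the pixel intensities within each
    region is computed, and the measures are concatenated in a fixed order."""
    values = []
    for out in outputs:
        h, w = out.shape
        for i in range(0, h - block + 1, block):
            for j in range(0, w - block + 1, block):
                values.append(np.var(out[i:i + block, j:j + block]))
    return np.array(values)

def scale_to_byte(key):
    """Optional storage reduction of [2350]: scale so the largest value
    is 255 and the smallest is zero, one byte per value."""
    lo, hi = key.min(), key.max()
    return np.round(255 * (key - lo) / (hi - lo + 1e-12)).astype(np.uint8)
```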
[2350] The statistical measures of the regions are then ordered so as to define an enrollment key vector 42, as by storing them in an array in a pre-set order. For meaningful comparison, it is essential that the ordering of the statistical measures of the enrollment key match the ordering of the statistical measures in the stored verification keys. If key size is of concern, the variance values may be scaled so that the largest is equal to 255 and the smallest is equal to zero, thereby allowing each value to occupy only one byte of storage space. These regions are visually represented in the drawings as patches of varying intensity, with black patches having a value of zero and white patches having a value of 255, within the eight key subsets that together comprise the key vector shown in Fig. 11. For additional storage space reduction, the keys can be reduced to a binary representation by applying a threshold value. In this binary version of the key, each block representation occupies only one bit, and thus, a large reduction in storage space is achieved. This is done at the cost of a reduction in matching accuracy, however.
[2360] The enrollment key 42 may then be stored to a disk 44 (joining a database of verification or second keys) or matched against an existing verification key to perform a verification or identification function.
[2370] The process of matching two keys (i.e., enrollment and verification keys) is straightforward, and there are multiple ways that key matching can be performed. In one embodiment, a simple Euclidean distance calculation is performed on the keys as a whole. In other words, if the first (or enrollment) key is represented as:

$$key_1 = \{x_1, x_2, \ldots, x_n\}$$

and the second (or verification) key is represented as:

$$key_2 = \{y_1, y_2, \ldots, y_n\}$$

the Euclidean Distance is determined as:

$$d = \sqrt{\sum_{i=1}^{n} (x_i - y_i)^2}$$
[2380] A flowchart of the matching steps performed by the present invention is shown in Fig. 12. The generated enrollment, or first, key is compared to a stored verification, or second, key using the chosen distance metric 52 as described above, and the determination of whether the two keys match is made using a preselected threshold distance comparison 54. If the distance between the two keys is larger than the preselected threshold distance, they do not match. If the calculated distance is below the threshold, the keys do match.
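The matching steps just described can be sketched minimally as follows; the threshold value passed in must be tuned for the chosen metric, as the specification notes.

```python
import numpy as np

def euclidean_distance(key1, key2):
    """Distance metric of [2370]: the keys treated as points in n-space."""
    return float(np.sqrt(np.sum((np.asarray(key1) - np.asarray(key2)) ** 2)))

def keys_match(key1, key2, threshold):
    """Threshold comparison 54 of [2380]: below the preselected threshold
    distance the keys match; above it they do not."""
    return euclidean_distance(key1, key2) < threshold
```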
[2390] In practice, threshold values are set after running a tuning subset of vein images through the apparatus / method of the present invention and evaluating the resulting scores. A threshold score is then chosen to best reflect chosen security goals for false positive (acceptance/match) and false negative (missed match). For example, a preferred implementation using a Pearson Correlation, described in greater detail hereinbelow, and which has the advantage of being a normalized metric, utilizes a threshold score of 0.5. Anything below this distance (score) is a match, and anything above this distance is a non-match. Typical scores for matching keys have been found to range from about 0.15 to 0.3 and typical scores for non-matching keys have been found to range from about 0.6 to 1.0. Each of the distance metrics (scoring methods) described herein produces an output with a slightly different numerical range, and it is necessary that a particular implementation of the present invention determine acceptable match and non-match score thresholds that reflect the desired security goals of the implementation.
[2400] The matching step can be repeated across the database for a one to many match, or, for more sophisticated matching for one to many approaches, some of the methods of database indexing using properties of the key could be employed. An example of keys generated from similar but non-identical vein images can be seen by comparison of Fig. 13A with Fig. 14A and of Fig. 13B with Fig. 14B. Fig. 13A shows an image and Fig. 13B shows the resulting key produced by the image of Fig. 13A using the method of the present invention. It shall be understood that all of the keys shown in the drawings are pictorial representations of the key vectors themselves, normalized to range from 0 (black) to 255 (white) for ease of visual comparison between keys. Fig. 14A shows another image from the same person as Fig. 13A but taken from a slightly different view, and Fig. 14B shows the resulting key produced by the image of Fig. 14A, showing how similar images (Figs. 13A and 14A) produce similar keys (Figs. 13B and 14B).
[2410] Likewise, an example of keys generated from dissimilar vein images can be seen by comparison of Fig. 15A with Fig. 16A and of Fig. 15B with Fig. 16B. Figs. 15A and 15B are the same as Figs. 13A and 13B, and are for comparison purposes with the different image of Fig. 16A that produces the different key of Fig. 16B, showing how dissimilar vein patterns generate dissimilar keys. Visible differences in key values can be noted between Figs. 15B and 16B.
[2420] While the preferred embodiment uses the Euclidean distance of the points defined by the key as a whole as a comparison metric 54, there are a wide variety of other possibilities, including the Hamming Distance, the Euclidean Squared Distance, the Manhattan Distance, the Pearson Correlation Distance, the Pearson Squared Correlation Distance, the Chebychev Distance, the Spearman Rank Correlation Distance, etc.
[2430] The equations for these other distance metrics are well known, and, for example, other well-known distance metrics may be used instead of the Euclidean distance.

[2440] Well-known equations for some of these other distance metrics that may be used in accordance with the present invention for the distance between the keys $key_1$ and $key_2$ will now be given.
[2450] The Euclidean Squared Distance has the form:

$$d = \sum_{i=1}^{n} (x_i - y_i)^2$$

[2460] The Manhattan Distance (or Block Distance) has the form:

$$d = \sum_{i=1}^{n} |x_i - y_i|$$

[2470] The Pearson Correlation Distance has the form:
$$d = 1 - r$$

where

$$r = \frac{1}{n}\sum Z(x)\,Z(y), \qquad Z(x) = \frac{x - \mu_x}{\sigma_x}, \qquad Z(y) = \frac{y - \mu_y}{\sigma_y},$$

$\mu$ is the mean, $\sigma$ is the standard deviation, and $n$ is the number of values in the sequences $x$ and $y$. The Pearson Correlation, being a normalized distance, is a particularly preferable distance metric for practice of the present invention.
[2480] The Pearson Squared Correlation Distance has the form:

$$d = 1 - r^2$$

with the same definitions as for the Pearson Correlation Distance.
[2490] The Chebychev Distance (or Maximum Single-Dimensional Distance) has the form:

$$d = \max_i |x_i - y_i|$$

[2500] The Spearman Rank Correlation Distance has the form:

$$d = 1 - \frac{6\displaystyle\sum_{i=1}^{n}\left(\mathrm{rank}(x_i) - \mathrm{rank}(y_i)\right)^2}{n\left(n^2 - 1\right)}$$

[2510] Referring to Figs. 17A, 17B, 18A, 18B, 19A, 19B, 20A, 20B, and Figs. 21-24, the performance of various distance metrics with different images can be seen.
All keys were generated using the preferred embodiment of eight Even-Symmetric Gabor Filters of size 65 pixels x 65 pixels with the following filter parameters:

$$f = 40,\quad S_x = 16,\quad S_y = 16,\quad \theta = \{0°, 22.5°, 45°, 67.5°, 90°, 112.5°, 135°, 157.5°\}$$

These eight filters were independently applied to each respective image (Figs. 17A, 18A, 19A, and 20A) to yield eight respective separate filter outputs (Figs. 17B, 18B, 19B, and 20B). Figs.
17A and 18A are similar views from the same hand and Figs. 19A and 20A are from two different hands. Figs. 21-24 list the scores generated by matching different combinations of these images with different distance metrics, and the threshold for each comparison is shown below each respective table. All match scores in Figs. 21-23 are normalized (divided) by the length of the key; it is not necessary to normalize the scores for the Pearson Correlation (Fig. 24) because the Pearson Correlation produces normalized scores. The tables of Figs. 21-24 show that the comparison between two images from the same individual is successful while the others do not match. This also shows that the Pearson Correlation, which is the preferred distance metric, provides the best separation in scores. The reason for this improved performance of the Pearson Correlation distance metric is that it is a normalized metric. Because the input images were not normalized in the pre-processing, the other scoring methods have greater difficulty.
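The preferred Pearson Correlation Distance can be sketched as follows; because the z-scoring normalizes each key internally, no division by the key length is needed.

```python
import numpy as np

def pearson_distance(key1, key2):
    """Pearson Correlation Distance of [2470]: d = 1 - r, where r is the
    mean product of the z-scored key values."""
    x, y = np.asarray(key1, float), np.asarray(key2, float)
    zx = (x - x.mean()) / x.std()
    zy = (y - y.mean()) / y.std()
    return float(1 - np.mean(zx * zy))
```

Identically-shaped keys score 0, anti-correlated keys score 2, and unrelated keys score near 1, consistent with the 0.5 match threshold discussed above.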
[2520] A possible concern with the present invention is its computational complexity. The filters required to perform the feature extraction for an image containing vein patterns are fairly large, and, depending upon the number of filters used (number of orientations and frequencies), the time required to perform the filtering could become problematic. This difficulty can be easily overcome through the application of additional computing power and by using a hardware, rather than purely software, implementation of the computation steps of the present invention. The independent application of multiple filters to an image can easily be implemented in parallel, and thus, lends itself to parallel processing applications. With the multi-core and multi-processor computing platforms currently available in today's computer technology, sufficient computing resources are not an impediment to practice of the invention.
[2530] For more rapid key identification/matching, key subsets ("sub-keys") taken from the filter outputs can be compared separately, or even regions within each sub-key can be compared instead of looking at the key as a whole, thereby reducing the computational burden. The results from comparing these key subsets can be used by the matching system independently or recombined to form a single matching score.
[2540] Figs. 25-28 show preferred embodiments of how partial key matching may be used in accordance with the present invention in order to reduce the computational burden.
[2550] Referring to Figs. 25 and 26, the first key 70, simply for purposes of explanation, is the same as the key shown in Fig. 17B, and the second key 72 is the same as the key shown in Fig. 18B.
[2560] Partial key matching is performed as follows. First, a key subset portion 74 of the input key is selected. In the example shown in Figs. 25 and 26, the key subset ("sub-key") 74 generated from the seventh filter output (the filter whose angular orientation is 135 degrees) is selected. In this example, sub-key 74 contains 400 values, but, in practice, any subset portion of key 70 of a reasonable size can be used. Once selected, this subset portion 74 of the key 70 is compared 54 to the corresponding portion 76 of the other keys, e.g., key 72, in the key database by using a distance metric, as heretofore described, and the resulting distance scores are noted. For comparison with the earlier-described full-key matching, the example of Figs. 25 and 26 uses the preferred distance metric of a Pearson Correlation to compare the key subsets 74, 76, 80. If the comparison between a pair of sub-keys results in a distance below a pre-determined first threshold 58 (the example of Figs. 25 and 26 uses a first threshold of 0.5), then the full keys containing those sub-keys are compared, it being thus determined that a key match is likely and therefore worthy of the computational effort required to do a full key comparison by using a distance metric. This second full-key comparison (performed as heretofore described and shown, for example, in connection with the discussion of Figs. 21-24) could use the same distance metric as used for the sub-key comparison or a different one, but, in the example of Figs. 25 and 26, the Pearson Correlation is also used for the second (full key) comparison. If the full key comparison distance is below a second threshold distance (in this case, 0.5), then a match is declared; otherwise the next set of sub-keys is compared. Optionally, instead of comparing the full keys immediately, a list of candidates may be formed and either further reduced by additional partial key matching or processed for full key matching to find the best match. If desired, a list of close matches can be provided to a human operator for further investigation. The use of sub-keys to filter down the set of keys on which to perform full matching provides a substantial savings in computation when employed in large database environments. This form of preliminary comparison can also be used with other final comparison systems. For example, the sub-key matching of the present invention can be used to limit the search space and then a point-based matching algorithm could perform the final comparisons, as explained more fully hereinafter, for greater accuracy.
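The sub-key cascade just described can be sketched as follows. The slice 2400:2800 used in the test, selecting the seventh filter's 400-value sub-key, assumes the eight sub-keys are stored consecutively in the key vector, which is an assumption about key layout rather than something the specification mandates; both thresholds default to the 0.5 of the example.

```python
import numpy as np

def pearson_distance(x, y):
    """Pearson Correlation Distance, d = 1 - r (the preferred metric)."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    zx, zy = (x - x.mean()) / x.std(), (y - y.mean()) / y.std()
    return float(1 - np.mean(zx * zy))

def partial_key_search(input_key, database, sub_slice,
                       sub_threshold=0.5, full_threshold=0.5):
    """Cascade of [2560]: compare one sub-key first, and spend the full-key
    comparison only on candidates whose sub-key distance clears the first
    threshold. Returns indices of database keys declared matches."""
    sub_in = input_key[sub_slice]
    matches = []
    for idx, stored in enumerate(database):
        if pearson_distance(sub_in, stored[sub_slice]) < sub_threshold:
            if pearson_distance(input_key, stored) < full_threshold:
                matches.append(idx)
    return matches
```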
[2570] A major benefit of the present invention, as compared to the prior art, is an approach to indexing large databases using a fixed-length key. For example, to reduce the number of computations required to search the database, the key subset values ("sub-keys") may be pre-indexed to permit the ignoring of keys that are known to have substantially different properties than the current enrollment key, thereby avoiding the computational expense of comparison with these ineligible keys. It should be understood that the pre-indexing of the database to permit rapid ignoring of keys that have substantially different properties than the current enrollment key is equally applicable when the full key is used for the key subset ("sub-key"), such that the database indexing is done based on features of the full key rather than on a proper subset of the full key. However, for purposes of illustration, the examples of indexing are shown using key subsets ("sub-keys") that are proper subsets of the full keys rather than the full keys themselves. The examples shown in Figs. 27 and 28 are for a key subset database indexed by decreasing non-zero counts within the key subset cells. As heretofore described, the statistical measure "counts" for each element of the key vector

$$key_1 = \{x_1, x_2, \ldots, x_n\}$$

may preferably be a statistical variance, and the key indexing examples shown in Figs. 27 and 28 show that this indexing may be based upon the statistical measures of individual key subsets or even parts of those key subsets. For example, if the maximum variance for a key in question is large in the sub-key related to the filter taken at 45 degrees, comparisons with keys that have little to no variance in that sub-key can be ignored. Other measures can also be used to build indexes to help limit computations such as, for example, the total number of key or sub-key values above or below a given threshold value (the "feature threshold value"), the distance from the zero point, and the current areas of the key containing high or low response values, such that it may be determined whether a group of keys or sub-keys have similar features. It shall be understood that the phrase "having similar features," when used herein to describe keys and sub-keys, means that the keys / sub-keys have a common measurable characteristic, such as the number of non-zero key values, as a quantifier of biometric information, and that the value of the measured "feature" is similar. As an example of the areas of a key containing high or low response values, the upper right quadrant of a key associated with a filter angle of 45 degrees might contain no normalized key value larger than 25 (out of a range of 0 to 255). This would indicate that there is little to no vein presence at that filter orientation in the upper right quadrant of the image. Providing a series of indices constructed from several key features of the keys makes it possible to quickly focus the key matching on the correct subset of keys for full comparison (i.e., those having similar features), thereby drastically reducing search times for large databases of key values.
[2580] As specific examples of partial key indexing being used to speed up large database matches, the examples of Figs. 27 and 28 will now be explained. In the example of Fig. 27, a subset of the full key is examined and features of this key are used to form an index. For comparison with the approach shown in Figs. 25 and 26, the sub-key portion associated with the seventh filter output is once again used in the examples of Figs. 27 and 28. By using an index into the key database, key matching can be limited to a subset of the database with keys that have similar features. In the example shown in Figs. 27 and 28, the number of non-zero key values was used as an indexing measurement and the distance metric was chosen as the Pearson Correlation, which is the preferred distance metric, for comparison with the example of Figs. 25 and 26. This is done because the number of non-zero key values is representative of the magnitude of the filter response which, in this example using the seventh filter output, is an indicator of the amount of vein detail that runs at an angle of 135 degrees. The example shows that the input image's sub-key 74 has 56 non-zero values. Because the example database is indexed by the number of non-zero sub-key values, it is only necessary to compare the input sub-key 74 to sub-keys with similar properties. If, as the example shows, the range of keys to inspect is limited to those with a non-zero key value count that is within 20 of the input sub-key's count of 56 non-zero values (i.e., within the range 36 to 76), only the key subset 76 for Key 2 and the key subset 78 for Key 3 need to be considered, and the key subset 80 for Key 4 can be ignored because it is not within the selected count distance of 20 of key subset 74 for Key 1. It should be noted that a threshold of zero (i.e., a tally of non-zero statistical measure key value counts) is used in the examples of Figs. 27 and 28 for the indexing of the key database. However, in practice, any value between the maximum and minimum statistical measure key value counts could be used as, for example, if the database were to be indexed by statistical measure key value counts greater than a threshold of 10 or 20 to require significant filter response before a key cell region is considered meaningful.
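The non-zero-count indexing just described can be sketched as follows, mirroring the example of an input sub-key with 56 non-zero values and a count distance of 20; the helper names and the (key id, pre-computed count) database layout are illustrative assumptions.

```python
import numpy as np

def nonzero_count(sub_key, feature_threshold=0):
    """Indexing measurement of [2580]: tally of sub-key values above the
    feature threshold (zero in the examples of Figs. 27 and 28)."""
    return int(np.sum(np.asarray(sub_key) > feature_threshold))

def candidates_by_index(input_sub, indexed_db, count_distance=20):
    """Limit the search to keys whose pre-computed non-zero count lies
    within count_distance of the input sub-key's count; all other keys are
    ignored without performing any distance computation."""
    target = nonzero_count(input_sub)
    return [key_id for key_id, count in indexed_db
            if abs(count - target) <= count_distance]
```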
[2590] While not used in the examples previously discussed, it should be noted that using indexes on multiple sub-keys within the full key, or using multiple measures (for example, non-zero value count and mean value, or values above a given threshold and variance) for the determination of whether keys or sub-keys have similar features, is still within the scope of the present invention, and these techniques can be employed to further limit the search space. Any applicable statistical measure such as, for example, max value, min value, mean value, median value, variance, standard deviation, etc., may be used on a sub-key either alone or in combination as a metric of similar features to limit the search space in a database. Once the search space has been narrowed by database indexes, the field can either be further narrowed using sub-key matching or the remaining keys can be fully compared for a final result. Both of these options are illustrated in the examples shown in Figs. 25-28.
[2600] It should be noted that any number of sub-keys may be extracted and used to narrow the set of candidate keys for final matching. For example, the group of candidates for a full key match could be first narrowed by comparing the portions of the keys that are generated by the 135 degree orientation filter output (as heretofore explained in connection with the example of Fig. 25). The match scores generated from this set of comparisons are then compared to a threshold value, and keys that score outside the threshold are excluded from further consideration. Depending on the size of the remaining candidate set eligible for full-key comparison, it may be beneficial to compare these remaining keys using a different sub-key such as, for example, the portion of the key generated from the 45 degree orientation filter output. This second-level sub-key comparison and subsequent score threshold will result in a further reduction of the candidate set eligible for full-key comparison. This process can then be repeated with additional sub-keys until the candidate set is reduced to a reasonable population for full-key matching.
[2610] Along similar lines, indexes generated from sub-keys, as previously described, can be combined to better limit the candidate subset of the database used for full-key matching. For example, an initial first-level index (index one) may be based upon the non-zero element count in a specific sub-key, denoted as sub-key one. By filtering the database to only look at keys with an index value within a certain range of the index generated for a specific candidate key, the search space may be limited. However, if additional indexes are available within the database (index two, three, etc.), these additional indexes may be used to further limit the candidate search set. These additional indexes may be generated using the same feature from different sub-keys (for example, where index two is the non-zero element count for sub-key two), different features for the same sub-key (for example, where index two is the mean value of sub-key one), or a combination of the two (for example, where index two is the non-zero element count of sub-key two and index three is the mean value of sub-key one, etc.). Each additional index within the database can thus serve to provide an additional limitation or reduction of the search space.
11 [2620] These search methods can also be combined. For exainple, a number of different 12 index values may first be compared to provide a quick candidate search space limitation. Then 13 several partial key matches may be performed to further limit the candidate space eligible for 14 full-key matching. The remaining candidates are then compared with full-key matching. The order in which the index comparisons and sub-key matches occur may be mixed (as, for 16 example, first an index search, then a sub-key match, then another index search, etc.).
However, because the index searches involve single-value comparisons, they tend to be faster and less computationally involved than the sub-key matches, and thus it is usually advantageous to perform the index search comparisons first, before the sub-key matches (distance calculations) are done.
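The combined index-then-sub-key-then-full-key pipeline can be sketched as below. The record layout, parameter names, and use of Euclidean distance are assumptions for illustration only.

```python
import numpy as np

def search(probe_key, probe_index, db, index_tol,
           subkey_slice, sub_thresh, full_thresh):
    """Three-stage search: cheap scalar index comparison first, then a
    partial-key distance over one slice of the key, then full-key
    matching on whatever remains."""
    # 1. index pass: single-value comparisons, fastest
    stage1 = [k for k, rec in db.items()
              if abs(rec["index"] - probe_index) <= index_tol]
    # 2. sub-key pass: distance over a slice of the fixed-length key
    stage2 = [k for k in stage1
              if np.linalg.norm(db[k]["key"][subkey_slice]
                                - probe_key[subkey_slice]) <= sub_thresh]
    # 3. full-key matching on the surviving candidates
    return [k for k in stage2
            if np.linalg.norm(db[k]["key"] - probe_key) <= full_thresh]
```

Ordering the stages from cheapest to most expensive is what makes the overall one-to-many search fast: most of the database is eliminated by the scalar comparisons before any distance is computed.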
[2630] Another benefit of the present invention is greater accuracy and speed than heretofore possible in the prior art, when the present invention is used in conjunction with a prior art point-based vein matching system. In such an approach, the fixed-length key generated using the present invention is used to quickly limit the search space (as heretofore described, by key subset matching and/or partial key indexing) while the point-based information is used to match the remaining eligible candidates. This permits the present invention to be a valuable tool for one-to-many matches to augment existing 1:1 matching approaches, whereby the present invention is used to quickly select eligible key candidates for comparison, and other (slower) prior art approaches are used to make the final biometric matching determination. As compared to prior art approaches that only use point-based information, the present invention adds additional information to the information available to a point-based approach, leading to more accurate matching. When used as a pure index, the number of filters used can also be reduced to lessen computation time.
[2640] There are benefits to be gained by this combination of using the fixed-length keys that are produced by the present invention in conjunction with the information provided by a prior art point-based approach. First, the method used by the present invention for narrowing database searches will allow quicker matching by point-based vein approaches on the resulting eligible candidates, and secondly, the use of the two approaches in combination provides additional biometric detail to the matching process. The present invention's fixed-length keys will quickly match/distinguish based on general texture and flow information while the prior art point-based system will contribute data relating to specific critical points within the image.
The result is improved accuracy over either method alone, with the speed benefits provided by the present invention's fixed-length key matching.

[2650] This invention can be used for extracting and matching biometric detail from subcutaneous vein infrared images. The image can be used for biometric identification purposes.
[5000] Although the present invention has been described and illustrated with respect to a preferred embodiment and a preferred use therefor, it is not to be so limited since modifications and changes can be made therein which are within the full intended scope of the invention.

Claims (25)

I claim:
1. A method of identifying a person by extracting and matching biometric detail from a subcutaneous vein infrared image of the person, said method comprising the steps of:
(a) filtering said vein image with a first plurality of filters to produce a like first plurality of filtered images;
(b) subdividing each filtered image into a second plurality of regions; each said region having at least one pixel therewithin, each said pixel having an intensity;
(c) for each said region, forming a statistical measure of the pixel intensities therewithin;
(d) ordering said statistical measures of said regions to define an enrollment key;
(e) comparing said enrollment key to a stored verification key to identify said person by calculating a distance between said enrollment key and said stored verification key and comparing said calculated distance to a threshold distance.
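For illustration only, and not as a limiting implementation of the claim, steps (a) through (e) might be sketched in Python with NumPy as follows. The FFT-based convolution, the 4x4 region grid, and the use of variance as the statistical measure are particular choices made for this sketch.

```python
import numpy as np

def make_key(image, filters, grid=(4, 4)):
    """Steps (a)-(d): filter, subdivide, per-region variance, order."""
    key = []
    for f in filters:
        # (a) filter the vein image (circular convolution via the FFT)
        filtered = np.real(np.fft.ifft2(np.fft.fft2(image) *
                                        np.fft.fft2(f, s=image.shape)))
        # (b) subdivide each filtered image into regions, and
        # (c) form the statistical variance of each region's intensities
        for rows in np.array_split(filtered, grid[0], axis=0):
            for region in np.array_split(rows, grid[1], axis=1):
                key.append(region.var())
    # (d) the ordered measures form a fixed-length key
    return np.asarray(key)

def matches(enroll_key, stored_key, threshold):
    # (e) Euclidean distance between keys, compared against a threshold
    return np.linalg.norm(enroll_key - stored_key) <= threshold
```

With two filters and a 4x4 grid, every image yields a 32-element key regardless of image content, which is what makes the fast fixed-length comparisons of the description possible.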
2. The method as recited in claim 1, said method further comprising, prior to filtering said vein image with said first plurality of filters, preprocessing said image to remove artifacts.
3. The method as recited in claim 2, said method further comprising the step of identifying a region of interest of said vein image.
4. The method as recited in claim 1, said method further comprising the step of identifying a region of interest of said vein image.
5. The method as recited in any one of the preceding claims, in which at least one said statistical measure is a statistical variance.
6. The method as recited in any one of the preceding claims, in which at least one said statistical measure is selected from the group consisting of a statistical variance, a standard deviation, a mean, and an absolute average deviation.
7. The method as recited in any one of the preceding claims, in which said statistical measure comprises a combination of a first measure and a second measure, both selected from the group consisting of a statistical variance, a standard deviation, a mean, an absolute average deviation, a max value, a min value, a max absolute value, and a median value.
8. The method as recited in any one of the preceding claims, in which said plurality of filters are Even Symmetric Gabor Filters having differing orientation angles.
9. The method as recited in any one of the preceding claims, in which said plurality of filters are Even Symmetric Gabor Filters having differing spatial frequencies.
10. The method as recited in any one of the preceding claims, in which said plurality of filters are selected from the group consisting of:
(a) Even Symmetric Gabor Filters having differing orientation angles;
(b) Even Symmetric Gabor Filters having differing spatial frequencies;
(c) Complex Gabor Filters;
(d) Log Gabor Filters;
(e) Oriented Gaussian filters; and
(f) Adapted Wavelets.
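As a non-limiting sketch, an even-symmetric Gabor filter of the kind listed in (a) and (b) can be generated as a Gaussian envelope multiplied by a cosine carrier. The parameterization below is one common convention, not necessarily the one used in the specification.

```python
import numpy as np

def even_gabor(size, theta, freq, sigma):
    """Even-symmetric Gabor kernel: Gaussian envelope times a cosine
    carrier, with coordinates rotated to orientation theta (radians)."""
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    xr = x * np.cos(theta) + y * np.sin(theta)    # rotated coordinates
    yr = -x * np.sin(theta) + y * np.cos(theta)
    return (np.exp(-(xr**2 + yr**2) / (2 * sigma**2))
            * np.cos(2 * np.pi * freq * xr))

# a filter bank at differing orientation angles (0, 45, 90, 135 degrees)
bank = [even_gabor(15, np.deg2rad(a), 0.1, 3.0) for a in (0, 45, 90, 135)]
```

Varying `theta` across the bank gives the differing orientation angles of (a); varying `freq` instead gives the differing spatial frequencies of (b).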
11. The method as recited in any one of the preceding claims, in which said distance is a Euclidean distance.
12. The method as recited in any one of the preceding claims, in which said distance is selected from the group consisting of:
(a) a Euclidean Distance;
(b) a Hamming Distance;
(c) a Euclidean Squared Distance;
(d) a Manhattan Distance;
(e) a Pearson Correlation Distance;
(f) a Pearson Squared Correlation Distance;
(g) a Chebychev Distance; and
(h) a Spearman Rank Correlation Distance.
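Several of the listed distances can be sketched directly; Pearson correlation distance is taken here as 1 - r, one common convention (the remaining listed distances follow the same pattern of comparing two equal-length keys element by element):

```python
import numpy as np

def euclidean(a, b):
    return float(np.linalg.norm(a - b))          # (a)

def euclid_sq(a, b):
    return float(np.sum((a - b) ** 2))           # (c)

def manhattan(a, b):
    return float(np.sum(np.abs(a - b)))          # (d)

def chebychev(a, b):
    return float(np.max(np.abs(a - b)))          # (g)

def hamming(a, b):
    return int(np.count_nonzero(a != b))         # (b), for discretized keys

def pearson_dist(a, b):
    # (e): 1 - r, so perfectly correlated keys give distance 0
    return float(1.0 - np.corrcoef(a, b)[0, 1])
```

Because every key has the same fixed length, any of these measures can be swapped into the threshold comparison of claim 1 without changing the rest of the pipeline.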
13. A method of identifying a person by extracting and matching biometric detail from a subcutaneous vein infrared image of the person, said method comprising the steps of:
(a) filtering said vein image with a first plurality of filters to produce a like first plurality of filtered images, said filters being Even Symmetric Gabor Filters having differing orientation angles;
(b) subdividing each filtered image into a second plurality of regions; each said region having at least one pixel therewithin, each said pixel having an intensity;

(c) for each said region, forming a statistical measure of the pixel intensities therewithin, said statistical measure being a statistical variance;
(d) ordering said statistical measures of said regions to define an enrollment key;
(e) comparing said enrollment key to a stored verification key to identify said person by calculating a Euclidean distance between said enrollment key and said stored verification key and comparing said calculated distance to a threshold distance.
14. An apparatus for identifying a person, said apparatus comprising:
(a) means for capturing a subcutaneous vein infrared image of the person;
(b) a first plurality of filters applied to said vein image to produce a like first plurality of filtered images;
(c) means for subdividing each filtered image into a second plurality of regions; each said region having at least one pixel therewithin, each said pixel having an intensity;
(d) means for forming a statistical measure for each region of the pixel intensities therewithin;
(e) means for identifying said person by comparing a first ordering of said statistical measures for each region to a stored second ordering of statistical measures by calculating a distance between said first ordering and said stored second ordering and comparing said calculated distance to a threshold distance.
15. The apparatus as recited in claim 14, said apparatus further comprising, prior to said first plurality of filters, means for preprocessing said image to remove artifacts.
16. The apparatus as recited in claim 15, said apparatus further comprising means for identifying a region of interest of said vein image.
17. The apparatus as recited in claim 14, said apparatus further comprising means for identifying a region of interest of said vein image.
18. The apparatus as recited in any one of claims 14 to 17, in which at least one said statistical measure is a statistical variance.
19. The apparatus as recited in any one of claims 14 to 18, in which at least one said statistical measure is selected from the group consisting of a statistical variance, a standard deviation, a mean, and an absolute average deviation.
20. The apparatus as recited in any one of claims 14 to 19, in which said statistical measure comprises a combination of a first measure and a second measure, both selected from the group consisting of a statistical variance, a standard deviation, a mean, an absolute average deviation, a max value, a min value, a max absolute value, and a median value.
21. The apparatus as recited in any one of claims 14 to 20, in which said plurality of filters are Even Symmetric Gabor Filters having differing orientation angles.
22. The apparatus as recited in any one of claims 14 to 21, in which said plurality of filters are Even Symmetric Gabor Filters having differing spatial frequencies.
23. The apparatus as recited in any one of claims 14 to 22, in which said plurality of filters are selected from the group consisting of:
(a) Even Symmetric Gabor Filters having differing orientation angles;
(b) Even Symmetric Gabor Filters having differing spatial frequencies;
(c) Complex Gabor Filters;
(d) Log Gabor Filters;
(e) Oriented Gaussian filters; and
(f) Adapted Wavelets.
24. The apparatus as recited in any one of claims 14 to 23, in which said distance is a Euclidean distance.
25. The apparatus as recited in any one of claims 14 to 24, in which said distance is selected from the group consisting of:
(a) a Euclidean Distance;
(b) a Hamming Distance;
(c) a Euclidean Squared Distance;
(d) a Manhattan Distance;
(e) a Pearson Correlation Distance;
(f) a Pearson Squared Correlation Distance;
(g) a Chebychev Distance; and
(h) a Spearman Rank Correlation Distance.
CA002671561A 2006-11-03 2006-11-03 Method and apparatus for extraction and matching of biometric detail Abandoned CA2671561A1 (en)

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
PCT/US2006/043091 WO2008054396A1 (en) 2006-11-03 2006-11-03 Method and apparatus for extraction and matching of biometric detail

Publications (1)

Publication Number Publication Date
CA2671561A1 true CA2671561A1 (en) 2008-05-08

Family

ID=39344577


Country Status (8)

Country Link
EP (1) EP2092460A1 (en)
JP (1) JP2010509672A (en)
KR (1) KR20090087895A (en)
CN (1) CN101595493A (en)
AU (1) AU2006350242A1 (en)
CA (1) CA2671561A1 (en)
MX (1) MX2009004749A (en)
WO (1) WO2008054396A1 (en)




Legal Events

Code: FZDE (Discontinued)