WO2007103698A2 - Invariant radial iris segmentation - Google Patents

Invariant radial iris segmentation

Info

Publication number
WO2007103698A2
WO2007103698A2 (PCT/US2007/063019)
Authority
WO
WIPO (PCT)
Prior art keywords
peaks
iris
pupil
peak
subject
Prior art date
Application number
PCT/US2007/063019
Other languages
English (en)
Other versions
WO2007103698A3 (fr)
Inventor
Rida M. Hamza
Original Assignee
Honeywell International Inc.
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Priority claimed from US11/372,854 external-priority patent/US8442276B2/en
Priority claimed from US11/382,373 external-priority patent/US8064647B2/en
Application filed by Honeywell International Inc. filed Critical Honeywell International Inc.
Priority to JP2008558461A priority Critical patent/JP4805359B2/ja
Priority to GB0815933A priority patent/GB2450027B/en
Priority to KR1020087022043A priority patent/KR101423153B1/ko
Publication of WO2007103698A2 publication Critical patent/WO2007103698A2/fr
Publication of WO2007103698A3 publication Critical patent/WO2007103698A3/fr

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 18/00 Pattern recognition
    • A HUMAN NECESSITIES
    • A61 MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B 5/00 Measuring for diagnostic purposes; Identification of persons
    • A61B 5/117 Identification of persons
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 10/00 Arrangements for image or video recognition or understanding
    • G06V 10/70 Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V 10/74 Image or video pattern matching; Proximity measures in feature spaces
    • G06V 10/75 Organisation of the matching processes, e.g. simultaneous or sequential comparisons of image or video features; Coarse-fine approaches, e.g. multi-scale approaches; using context analysis; Selection of dictionaries
    • G06V 40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V 40/10 Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V 40/18 Eye characteristics, e.g. of the iris
    • G06V 40/193 Preprocessing; Feature extraction

Definitions

  • the invention is directed toward biometric recognition, and specifically toward an improved approach to radial iris segmentation.
  • Biometrics is the study of automated methods for uniquely recognizing humans based upon one or more intrinsic physical or behavioral traits.
  • Biometric authentication refers to technologies that measure and analyze human physical characteristics for authentication purposes. Examples of physical characteristics include fingerprints, eye retinas and irises, facial patterns, and hand measurements.
  • a leading concern with existing biometric systems is that the individual features that distinguish one human from another can easily be missed, whether through inaccurate acquisition of the biometric data or through deviations in operational conditions. Iris recognition has been seen as a low-error, high-success method of retrieving biometric data; however, iris scanning and image processing have been costly and time consuming, while fingerprinting, facial patterns, and hand measurements have afforded cheaper, quicker solutions. During the past few years, iris recognition has matured sufficiently to compete economically with these other biometric methods. However, inconsistency in the acquisition conditions of iris images has led to rejecting valid subjects or validating imposters, especially when the scan is done under uncontrolled environmental conditions.
  • iris recognition has proven to be very effective. This is because iris recognition systems rely on more distinct features than other biometric techniques, such as facial patterns and hand measurements, and therefore provide a reliable solution by offering a much more discriminating biometric data set.
  • Iris segmentation is the process of locating and isolating the iris from the other parts of the eye, and it is essential to the system's operation. Computing iris features requires a high-quality segmentation process that focuses on the subject's iris and properly extracts its borders. Such an acquisition process is sensitive to the acquisition conditions and has proven to be a very challenging problem. Current systems try to maximize segmentation accuracy by constraining the operating conditions; constraints may be placed on lighting levels, the position of the scanned eye, and the environmental temperature. These constraints can lead to more accurate iris acquisition, but they are not practical in all real-time operations.
  • a new feature extraction technique is presented along with a new encoding scheme resulting in an improved biometric algorithm.
  • This new extraction technique is based on a simplified polar segmentation (POSE).
  • the new encoding scheme utilizes the new extraction technique to extract actual local iris features using a process with low computational load.
  • the encoding scheme does not rely on accurate segmentation of the outer bounds of the iris region, which is essential to prior art techniques. Rather, it relies on the identification of peaks and valleys in the iris (i.e., the noticeable points of change in color intensity in the iris).
  • the encoding scheme does not rely on the exact location of the occurrence of peaks detected in the iris, but rather relies on the magnitude of detected peaks relative to a referenced first peak. Since this algorithm does not rely on the exact location of pattern peaks/valleys, it does not require accurate segmentation of the outer boundary of the iris, which in turn eliminates the need for a normalization process.
  • the overall function of the present invention can be summarized as follows.
  • the iris is preprocessed and then localized using an enhanced segmentation process based on a POSE approach, herein referred to as invariant radial POSE segmentation.
  • all obscurant parts (i.e., the pupil, eyelid, eyelashes, sclera, and other non-essential parts of the eye) are removed from the analysis.
  • Lighting correction and contrast improvement are processed to compensate for differences in image lighting and reflective conditions.
  • the captured iris image is unwrapped into several radial segments and each segment is analyzed to generate a one dimensional dataset representing the peak and/or valley data for that segment.
  • the peak and/or valley data is one dimensional in the sense that peaks and/or valleys are ordered in accordance with their position along a straight line directed radially outward from the center of the iris.
  • the iris image is unwrapped into a one-dimensional polar representation of the iris signature, in which the data for only a single peak per radial segment is stored.
  • the magnitude of the outermost peak from the pupil-iris border per segment is stored.
  • the magnitude of the largest peak in the segment is stored.
  • the data for a plurality of peaks and/or valleys is stored per radial segment.
  • each peak and/or valley is recorded as a one bit value indicating its magnitude relative to another peak and/or valley in the segment, such as the immediately preceding peak/valley along the one dimensional direction.
  • the data for all of the radial segments is concatenated into a template representing the data for the entire iris scan. That template can be compared to stored templates to find a match.
  • Figure 1 illustrates a scanned iris image based on existing techniques.
  • Figure 2a illustrates a scanned iris image utilizing the principles of the present invention.
  • Figure 2b illustrates the scanned iris image of figure 2a mapped into a one dimensional iris map.
  • Figure 3 illustrates a flow chart showing one embodiment of the present invention.
  • Figure 4a illustrates a mapping of the iris segmentation process according to the principles of the present invention.
  • Figure 4b illustrates an enhanced mapping of the iris scan according to principles of the present invention.
  • Figure 5a illustrates a first encoding scheme according to principles of the present invention.
  • Figure 5b illustrates a second encoding scheme according to principles of the present invention.
  • a leading concern of existing biometric systems is that individual features that identify humans from others can be easily missed due to the lack of accurate data acquisition or due to deviations in operational conditions.
  • iris recognition has matured to a point that allows it to compete with more common biometric means, such as fingerprinting.
  • biometric means such as fingerprinting.
  • inconsistencies in acquisition conditions of iris images often lead to rejecting valid subjects or validating imposters, especially under uncontrolled operational environments, such as environments where the lighting is not closely controlled.
  • iris recognition has proven to be very effective. This is so because iris recognition systems rely on more distinct features than other common biometric means, providing a reliable solution by offering a more discriminating biometric.
  • Fig. 1 shows a scanned eye image with the borders identified according to conventional prior art segmentation techniques.
  • iris 105 is defined by outer iris border 110.
  • outer iris border 110 is obstructed by the eyelid at 107, so a true border cannot be determined.
  • the system must estimate the missing portion of the outer iris border 110.
  • Computing iris features requires a high-quality segmentation process that focuses on the subject's iris and properly extracts its borders. Such a process is sensitive to the acquisition conditions and has proven to be a challenging problem (especially for uncooperative subjects that are captured at a distance). By constraining operational conditions, such as carefully controlling lighting and the position of a subject's eye, current systems attempt to resolve segmentation problems, but these approaches are not always practical.
  • Figure 2A shows an eye image scanned in the same manner as in Figure 1.
  • POSE: simplified polar segmentation
  • This enhanced POSE technique, or invariant radial POSE, focuses on detecting the peaks and valleys of the iris, i.e., the significant discontinuities in color intensity between the pupil and the sclera within defined radial segments of the iris.
  • a peak is a point where color intensity on either side of that point (in the selected direction) is less than the color intensity at that point (and the discontinuity exceeds some predetermined threshold so as to prevent every little discontinuity from being registered as a recorded peak).
  • a valley is a point where color intensity on either side of that point in the selected direction is greater than the color intensity at that point (with the same qualifications).
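The peak/valley criterion just described can be sketched in a few lines. The following is a minimal illustration rather than the patent's actual implementation; the function name, the symmetric neighbor test, and the way the threshold is applied to the discontinuity are all assumptions:

```python
def find_peaks_and_valleys(signal, threshold):
    """Scan a 1-D color-intensity profile (sampled radially outward) and
    return the indices of peaks and valleys whose discontinuity exceeds
    `threshold`, so that minor fluctuations are not registered."""
    peaks, valleys = [], []
    for i in range(1, len(signal) - 1):
        left, here, right = signal[i - 1], signal[i], signal[i + 1]
        if here > left and here > right and min(here - left, here - right) >= threshold:
            peaks.append(i)      # intensity on either side is lower
        elif here < left and here < right and min(left - here, right - here) >= threshold:
            valleys.append(i)    # intensity on either side is higher
    return peaks, valleys
```

For example, with `threshold=2` the profile `[0, 5, 0, 2, 3, 2, 0]` yields one peak (index 1) and one valley (index 2); the small bump at index 4 is suppressed by the threshold.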
  • This technique is referred to as one dimensional because, rather than collecting two dimensional image data per radial segment as in the prior art, the collected iris data per radial segment has only one signal dimension. This eliminates the need to estimate an obstructed outer boundary of the iris, to segment the outer bound of the iris, or to calculate the exact parameters of circles, ellipses, or any other shapes needed to estimate a missing portion of the outer boundary.
  • Iris 205 is scanned utilizing the invariant radial POSE process.
  • the invariant radial POSE process locates and identifies the peaks and valleys present in the scanned iris and creates an iris map.
  • Figure 2A helps illustrate one form of iris map that can represent the peak and/or valley data in an iris scan.
  • the data for only one peak is stored per radial segment.
  • To construct an iris map in accordance with this embodiment of the invention, the iris is first segmented into a set number of radial segments, for example 200 segments. Each segment thus represents a 1.8 degree slice of a complete 360 degree scan of the iris. After each of the 200 segments is analyzed, the data for one characteristic peak in the segment is stored.
  • the peak selected for representation in each radial segment is the peak 210 that is outermost from the pupil-iris border.
  • the selected peak may be the greatest peak (other than the peak at the pupil-iris border), the sharpest peak, or the innermost peak. If the criterion is the outermost peak, it is preferable to use the outermost peak within a predefined distance of the pupil-iris border since, as one gets closer to the iris-sclera border, the peaks and valleys tend to become less distinct and, therefore, less reliable as a criterion for identifying subjects.
  • the data corresponding to valleys instead of peaks may be recorded.
  • the recorded data need not necessarily even be a peak or valley, but may be any other readily identifiable color or contrast characteristic.
  • the distance from the center of the pupil of whichever peak or valley (or other characteristic) is selected for representation is stored.
  • the radial distance is reported as a value relative to the radial distance of a reference peak from the center of the pupil. In this manner, no normalization of the iris scan is required to compensate for changes to the iris due to environmental conditions (e.g., pupil dilation, ambient light).
  • the reference peak is the peak at the pupil-iris border in that segment, which usually, if not always, will be the greatest peak in the segment.
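As a sketch of the selection and relative-distance idea above, the function below picks the outermost peak within a predefined window of the pupil-iris border and reports its radial distance relative to the reference peak. The function name, the window test, and the use of a ratio as the "relative value" are illustrative assumptions; the patent does not fix the exact arithmetic form:

```python
def segment_signature(peak_radii, window):
    """Given the radii (distances from the pupil center) of the peaks in one
    radial segment, select the outermost peak within `window` of the
    pupil-iris border and express its radius relative to the reference peak
    (the innermost peak, at the pupil-iris border)."""
    reference = min(peak_radii)               # the pupil-iris border peak
    candidates = [r for r in peak_radii
                  if r != reference and r - reference <= window]
    selected = max(candidates)                # outermost peak inside the window
    return selected / reference               # relative radial distance
```

With peak radii `[10, 13, 16, 25]` and a window of 8 pixels, the peak at radius 25 is excluded, the one at radius 16 is selected, and the relative distance is 1.6.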
  • Figure 2B shows the scanned iris mapped into a one dimensional iris map.
  • the iris is segmented into a predetermined number of radial segments, for example 200 segments, each segment representing 1.8 degrees of a complete 360 degree scan of the iris.
  • a reference peak is selected in each segment, the reference peak being the peak at the pupil-iris border in the analyzed radial segment (which usually, if not always, will be the greatest peak in the segment).
  • the iris is unwrapped to create the graph shown in Figure 2B, with each point 215 representing the aforementioned relative radial distance of the corresponding peak for each of the radial segments.
  • the conversion of the peaks and valleys data into the graph shown in Figure 2B may be an "unwrapping" of the iris about the normal of the pupil-iris border (i.e., perpendicular to the border).
  • the pupil-iris border is essentially a circular border.
  • one can think of the border as a string; unwrapping that string into a straight line leaves the reference peak from each radial segment represented as a discrete point 215, as shown in Figure 2B.
  • this one dimensional iris representation will be unchanged with respect to the relative location of the reference peaks in each angular segment, but may result in the shifting of the entire curve 215 upwards or downwards. While pupil dilation and other factors may affect the absolute locations of the peaks or valleys (i.e., their actual distances from the pupil border), they will not affect the relative locations of the peaks and valleys in the iris relative to the reference peaks (or valleys).
  • Figure 4A helps illustrate the formation of an alternative and more robust representation of the scanned iris image data in which the data for multiple peaks, rather than just one characteristic peak, is recorded per radial segment.
  • the center of the pupil is indicated by cross 405.
  • the horizontal or x-axis represents the radial distance from the pupil-iris border (i.e., perpendicular to the pupil-iris border), and the vertical or y-axis represents the derivative of the color intensity.
  • the peak at the pupil-iris border is indicated at 411. All other peaks and valleys in the segment are represented graphically relative to the reference peak so that no data normalization will be necessary.
  • each radial segment usually will be several pixels wide at the pupil border 410, and becomes wider as the distance from the pupil-iris border increases. Therefore, in order to generate the one dimensional data represented in the graph of Fig. 4A, the color intensity derivative data represented by the y-axis should be averaged or interpolated over the width of the segment. This interpolated data is shown as line 415, in which each significant data peak is marked by reference numeral 420.
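The averaging step can be illustrated as follows. This sketch samples an intensity-derivative image along several rays spanning the segment's angular width and averages them into a one dimensional profile like the one in Fig. 4A; the function name, the number of rays, and nearest-pixel sampling are assumptions:

```python
import math

def segment_profile(deriv, cx, cy, theta0, theta1, r_max, n_rays=8):
    """Average a 2-D intensity-derivative image over the angular width of one
    radial segment (theta0..theta1 radians, centered at cx, cy), producing a
    1-D profile of derivative magnitude versus radial distance."""
    profile = []
    for r in range(1, r_max):
        samples = []
        for k in range(n_rays):
            theta = theta0 + (theta1 - theta0) * k / (n_rays - 1)
            x = int(round(cx + r * math.cos(theta)))   # nearest-pixel sample
            y = int(round(cy + r * math.sin(theta)))
            if 0 <= y < len(deriv) and 0 <= x < len(deriv[0]):
                samples.append(deriv[y][x])
        profile.append(sum(samples) / len(samples) if samples else 0.0)
    return profile
```

On a uniform derivative image the profile is flat, which is a quick sanity check that the angular averaging does not distort magnitudes.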
  • Figure 4B helps illustrate a still further embodiment.
  • Graph 425 shows a graphical representation of the iris, such as the one illustrated in Figure 4A.
  • each individual peak is isolated and recorded with respect to the reference peak.
  • enhancement curve 430 is removed from the one dimensional iris representation.
  • Enhancement curve 430 is the component of the graph that can be removed without affecting the magnitude of each peak relative to the next, resulting in a normalized data set focused solely on the relative peak magnitudes.
  • the enhancement curve can be calculated as the approximate component (DC component) of the decomposition of the graph of Figure 4A.
  • DC component: the approximate component
  • graph 435 results, where each peak is represented as a point 437 on the graph.
  • graph 425 is now normalized based on peak occurrence.
  • the peak data will be encoded very efficiently by encoding each peak relative to an adjacent peak using as few as one or two bits per peak. Accordingly, the removed enhancement curve simplifies the processing while preserving all needed information.
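One way to remove such a slowly varying component is to estimate it with a moving average and subtract it; what remains is the peak structure with its relative magnitudes intact. This is an illustrative stand-in for the "approximate (DC) component of the decomposition" described above, which could equally be obtained with, say, a wavelet approximation; the function name and window size are assumptions:

```python
def remove_enhancement_curve(profile, window=5):
    """Subtract a moving-average estimate of the slowly varying enhancement
    curve from a 1-D iris profile, leaving a detrended signal in which only
    the peaks' relative magnitudes matter."""
    n, half = len(profile), window // 2
    detrended = []
    for i in range(n):
        lo, hi = max(0, i - half), min(n, i + half + 1)
        baseline = sum(profile[lo:hi]) / (hi - lo)   # local DC estimate
        detrended.append(profile[i] - baseline)
    return detrended
```

A flat profile detrends to all zeros, confirming that only deviations from the slow baseline survive.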
  • Figure 3 illustrates a flow chart showing an embodiment of the present invention.
  • In Step 305, a preprocessing step takes place.
  • the preprocessing may be essentially conventional.
  • texture enhancements are performed on the scanned image. Obscurant parts of the image, such as pupils, eyelids, eyelashes, sclera and other non-essential parts of the eye are dropped out of the analysis.
  • the system preprocesses the image using a local radial texture pattern (LRTP).
  • LRTP: local radial texture pattern
  • the image is preprocessed using local radial texture pattern similar to, but revised over that proposed in Y. Du, R. Ives, D. Etter, T. Welch, C-I. Chang, "A one-dimensional approach for iris identification", EE Dept, US Naval Academy, Annapolis, MD, 2004.
  • I(x, y): the color intensity of the pixel located at the two dimensional coordinate (x, y);
  • the curve that determines the neighboring points of the pixel x, y;
  • In Step 310, the Invariant Radial POSE segmentation process is performed. This approach differs from traditional techniques in that it does not require iris segmentation at the outer border of the iris, i.e., the iris-sclera border.
  • the process first roughly determines the iris center in the original image, and then refines the center estimate and extracts the edges of the pupil.
  • a technique for locating the center of the pupil is disclosed in the aforementioned U.S. Patent Application No. 11/043,366, which is incorporated by reference and need not be discussed further.
  • Techniques for locating the pupil-iris border also are disclosed in the aforementioned patent application and need not be discussed further.
  • the segmentation process begins.
  • the radial scan of the iris is done in radial segments, e.g., 200 segments of 1.8 degrees each.
  • In Step 315, the actual feature extraction occurs, based on the segmented image obtained in Step 310.
  • the feature extraction process can be performed, for example, in accordance with any of the three embodiments previously described in connection with Figures 2A and 2B, 4A, and 4B, respectively, which detect changes in the graphical representation of the iris while not relying on the absolute locations at which those changes occur. In particular, the absolute locations change as a function of the natural dilation and contraction of the human iris when exposed to variations in environmental light conditions. Therefore, the feature extraction process relies on detecting relative variations in peak and valley magnitude, and their relative locations, rather than focusing on absolute magnitudes or locations.
  • a key advantage of this approach is that it does not require a normalization procedure of the iris scan to compensate for changes to the iris due to environmental conditions; such a normalization procedure is crucial to prior art iris recognition techniques.
  • In Step 320, the resulting peak data represented in graph 435 is encoded into a template so that it can later be efficiently compared with stored templates of iris data for known persons.
  • Two encoding alternatives are discussed below in connection with Figures 5A and 5B, respectively. These two are shown only for example and are not meant to limit the scope of the present invention.
  • FIGS 5A and 5B help illustrate the encoding of the peak/valley data set for one radial segment of a scanned iris in accordance with two embodiments of the invention, respectively.
  • each template will comprise a plurality of such data sets, the number of such sets in a template being equal to the number of radial segments. Thus, for instance, if each segment is 1.8°, each template will comprise 200 such data sets.
  • Figure 5A illustrates a first encoding scheme which focuses on relative peak amplitude versus the amplitude of the immediately previous peak.
  • Figure 5A illustrates encoding of the peak data for a single radial segment and shows a data set for that segment.
  • Each data set comprises I×K bits, where K is the number of peaks per radial segment for which data is to be recorded and I is the number of bits used to encode each peak.
  • proceeding through the data set, successive bits represent peaks progressively farther radially outward from the pupil-iris border (i.e., along the x axis in Figures 2B, 4A, and 4B, which represents distance from the pupil-iris border). If the magnitude of a peak is greater than the magnitude of the previous peak in a graph such as graph 435, the bits representing that peak are set to 11. Otherwise, the bits are set to a second value, e.g., 00.
  • the second I bits are essentially guaranteed to be 00 since, in this example, the reference peak is essentially guaranteed to have the greatest magnitude in the segment and will thus always be larger than the next peak. Therefore, in this encoding scheme, the first four bits of each data set are irrelevant to, and will not be considered during, matching since they will always be identical, namely 1100.
  • the end of the data set is filled with one or more bit sets of a third value, e.g., 10 or 01, that will eventually be masked in matching step 325.
  • the radial segment has more than K peaks, only the K peaks closest to the pupil-iris border are encoded.
  • the sequence representing the peak/valley information for this segment of the iris is 1100110011001010.
  • the first two bits represent the magnitude of the reference peak 501 and are always 11
  • the second two bits represent the magnitude of the first peak 503 in the segment and are essentially guaranteed to be 00 because that peak will always be smaller than the reference peak
  • the fifth and sixth bits are 11 because the next peak 505 is greater than the preceding peak 503
  • the seventh and eighth bits are 00 because the next peak 507 is less than the immediately preceding peak 505
  • the ninth and tenth bits are 11 because the next peak 509 is greater than the preceding peak 507
  • the eleventh and twelfth bits are 00 because the next peak 511 is less than the immediately preceding peak 509
  • the last four bits are 1010 corresponding to two sets of unknowns because this segment has only five peaks (and the reference peak is the sixth peak represented in the data set).
  • the sequence representing the peak/valley information for this segment of the iris is 1100000011101010: the first two bits represent the magnitude of the reference peak 501 and are always 11; the next two bits represent the magnitude of the first peak 513 in the segment and are 00 because it is smaller than the reference peak; the next two bits are 00 because the next peak 515 is less than the preceding peak 513; the next two bits are 00 because the next peak 517 is less than the immediately preceding peak 515; the next two bits are 11 because the next peak 519 is greater than the preceding peak 517; and the last six bits are 101010 because this segment has only five peaks (including the reference peak).
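The worked examples above follow directly from a simple rule: emit 11 for the reference peak, then 11 or 00 for each later peak according to whether it exceeds its predecessor, then pad unused slots with the maskable pattern 10. A minimal sketch (the function name and the slot count K=8 are assumptions):

```python
def encode_segment(peak_magnitudes, k_peaks=8):
    """Encode one radial segment's peak magnitudes (reference peak first,
    then peaks ordered outward) at 2 bits per peak: '11' if a peak exceeds
    the preceding one, '00' otherwise.  Unused slots are padded with the
    'unknown' pattern '10', which is masked during matching."""
    bits = "11"                                       # reference peak
    for prev, cur in zip(peak_magnitudes, peak_magnitudes[1:]):
        bits += "11" if cur > prev else "00"
    bits += "10" * (k_peaks - len(peak_magnitudes))   # pad unknown slots
    return bits[: 2 * k_peaks]
```

Magnitudes shaped like the first example of Figure 5A (reference largest, then alternating smaller/larger peaks) reproduce the sequence 1100110011001010, and magnitudes shaped like the second example reproduce 1100000011101010.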
  • FIG. 5B illustrates a second exemplary encoding scheme according to principles of the present invention.
  • This second encoding scheme also is based on a 2-bit quantization of the magnitude of the filtered peaks, but the peak magnitudes are quantized into three magnitude levels, namely, Low (L), High (H), and Medium (M).
  • Low level magnitude L is assigned 2-bit pattern 00
  • High level magnitude H is assigned 2-bit pattern 11.
  • the levels are structured so that moving from one quantization level to an adjacent level requires only a one-bit change. Per this constraint, the scheme has two combinations to represent the medium level, i.e., 01 and 10.
  • the bits corresponding to unknown peaks may be identified by any reasonable means, such as appending a flag to the end of the data set indicating the number of bits that correspond to unknown peaks.
  • the levels may instead be encoded with a three-bit quantization in order to provide additional bit combinations for representing unknowns. Alternatively, only one value, e.g., 10, can be assigned to the Medium level, leaving the two-bit combination 01 to represent unknowns. The unknown bits will be masked during matching, as discussed below. Likewise, if the number of peaks in a radial segment exceeds the number of peaks needed to fill the data set, the peaks farthest from the pupil are dropped.
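The single-Medium-value variant just described can be sketched as follows; the threshold parameters, function name, and slot count are assumptions for illustration:

```python
def quantize_segment(peak_magnitudes, low_cut, high_cut, k_peaks=8):
    """Quantize filtered peak magnitudes into three levels -- Low '00',
    Medium '10', High '11' -- so that adjacent levels differ by a single bit.
    Fixing Medium to '10' leaves '01' free to mark unknown (empty) slots,
    which are masked during matching; peaks beyond k_peaks slots (those
    farthest from the pupil) are dropped."""
    codes = []
    for m in peak_magnitudes[:k_peaks]:          # keep peaks nearest the pupil
        if m < low_cut:
            codes.append("00")                   # Low
        elif m > high_cut:
            codes.append("11")                   # High
        else:
            codes.append("10")                   # Medium
    codes += ["01"] * (k_peaks - len(codes))     # unknown slots
    return "".join(codes)
```

For instance, magnitudes 1, 5, and 9 with cutoffs 3 and 7 and five slots quantize to Low, Medium, High followed by two unknown slots.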
  • In Step 325, a template is constructed by concatenating all of the data sets corresponding to all of the radial segments in the iris scan.
  • In Step 330, the process determines whether a scanned iris template matches a stored iris template by comparing the similarity between the corresponding bit-templates.
  • a weighted Hamming distance can be used as a metric for recognition to execute the bit-wise comparisons.
  • the comparison algorithm can incorporate a noise mask to mask out the unknown bits so that only significant bits are used in calculating the information measure distance (e.g. Hamming distance).
  • the algorithm reports a value based on the comparison; a higher value reflects fewer similarities between the templates, so the lowest value is considered the best matching score between two templates.
  • a weighting mechanism can be used in connection with the above mentioned matching.
  • the bits representing the peaks closest to the pupillary region (the pupil border) are the most reliable/distinct data points and may be weighted more heavily, as they represent more accurate data. All unknown bits, whether present in the template to be matched or in the stored templates, are weighted zero in the matching. This may be done using any reasonable technique.
  • the bit positions corresponding to unknown bits of one of the two templates are always filled in with bits that match the corresponding bits of the other template.
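The matching step can be sketched as a masked, weighted Hamming distance over 2-bit slots. Here unknown slots (pattern 01, as in the single-Medium variant above) in either template are skipped, and each remaining slot contributes its weight when the patterns differ; the function name and the normalization by total unmasked weight are assumptions:

```python
def masked_hamming(template_a, template_b, weights, unknown="01"):
    """Weighted Hamming distance between two bit-string templates, compared
    2-bit slot by 2-bit slot.  Slots marked `unknown` in either template are
    masked out; the result is normalized by the total unmasked weight, and
    lower scores indicate better matches."""
    total = norm = 0.0
    for i in range(0, len(template_a), 2):
        slot_a, slot_b = template_a[i:i + 2], template_b[i:i + 2]
        if slot_a == unknown or slot_b == unknown:
            continue                      # noise mask: skip unknown slots
        w = weights[i // 2]               # e.g. heavier near the pupil border
        norm += w
        if slot_a != slot_b:
            total += w
    return total / norm if norm else 0.0
```

Identical templates score 0.0, and any slot that is unknown in either template contributes nothing to the score, mirroring the zero weighting of unknown bits described above.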

Abstract

A method and computer product for identifying a subject through biometric analysis of an eye. First, an image of the iris of the subject to be identified is acquired. Texture enhancement of the image may optionally be performed, but is not required. Next, the iris image is radially segmented into a set number of angular segments, for example 200 segments, each segment representing 1.8° of the iris image. After this segmentation, each angular segment is analyzed and the peaks and valleys of color intensity within that angular iris segment are detected. The detected peaks and valleys are mathematically transformed into a data set used to create a template. The template, representing the scanned and analyzed iris of the subject, is built from each transformed data set from each of the angular segments. Once constructed, the template can be stored in a database, or used for comparison if the subject is already enrolled in the database.
PCT/US2007/063019 2006-03-03 2007-03-01 Invariant radial iris segmentation WO2007103698A2 (fr)

Priority Applications (3)

Application Number Priority Date Filing Date Title
JP2008558461A JP4805359B2 (ja) 2006-03-03 2007-03-01 Invariant radial iris segmentation
GB0815933A GB2450027B (en) 2006-03-03 2007-03-01 Invariant radial iris segmentation
KR1020087022043A KR101423153B1 (ko) 2006-03-03 2007-03-01 Invariant radial iris segmentation

Applications Claiming Priority (6)

Application Number Priority Date Filing Date Title
US77877006P 2006-03-03 2006-03-03
US60/778,770 2006-03-03
US11/372,854 US8442276B2 (en) 2006-03-03 2006-03-10 Invariant radial iris segmentation
US11/372,854 2006-03-10
US11/382,373 US8064647B2 (en) 2006-03-03 2006-05-09 System for iris detection tracking and recognition at a distance
US11/382,373 2006-05-09

Publications (2)

Publication Number Publication Date
WO2007103698A2 true WO2007103698A2 (fr) 2007-09-13
WO2007103698A3 WO2007103698A3 (fr) 2007-11-22

Family

ID=38353648

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/US2007/063019 WO2007103698A2 (fr) 2006-03-03 2007-03-01 Invariant radial iris segmentation

Country Status (4)

Country Link
JP (1) JP4805359B2 (fr)
KR (1) KR101423153B1 (fr)
GB (1) GB2450027B (fr)
WO (1) WO2007103698A2 (fr)


Families Citing this family (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
GB2468380B (en) * 2009-03-02 2011-05-04 Honeywell Int Inc A feature-based method and system for blur estimation in eye images
US8873810B2 (en) 2009-03-02 2014-10-28 Honeywell International Inc. Feature-based method and system for blur estimation in eye images
US8948467B2 (en) * 2010-08-06 2015-02-03 Honeywell International Inc. Ocular and iris processing system and method
KR101601564B1 (ko) * 2014-12-30 2016-03-09 Catholic University Industry-Academic Cooperation Foundation Face detection method and apparatus using circular blocking of the face


Family Cites Families (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2001195594A (ja) * 1999-04-09 2001-07-19 Iritech Inc Iris identification system and method of identifying a person by iris recognition
WO2004090814A1 (fr) * 2003-04-02 2004-10-21 Matsushita Electric Industrial Co. Ltd. Image processing method, image processor, photographing device, image output unit, and iris verification unit

Patent Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2000062239A1 (fr) * 1999-04-09 2000-10-19 Iritech Inc. Iris recognition system and method of identifying a person by iris recognition
US20050207614A1 (en) * 2004-03-22 2005-09-22 Microsoft Corporation Iris-based biometric identification

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
MA L ET AL: "Local intensity variation analysis for iris recognition" PATTERN RECOGNITION, ELSEVIER, KIDLINGTON, GB, vol. 37, no. 6, June 2004 (2004-06), pages 1287-1298, XP004505327 ISSN: 0031-3203 *

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
FR2979727A1 (fr) * 2011-09-06 2013-03-08 Morpho Identification by iris recognition
WO2013034654A1 (fr) * 2011-09-06 2013-03-14 Morpho Identification by iris recognition
CN103843009A (zh) * 2011-09-06 2014-06-04 Morpho Identification by iris recognition
US9183440B2 (en) 2011-09-06 2015-11-10 Morpho Identification by iris recognition

Also Published As

Publication number Publication date
WO2007103698A3 (fr) 2007-11-22
JP4805359B2 (ja) 2011-11-02
GB2450027B (en) 2011-05-18
GB2450027A (en) 2008-12-10
KR101423153B1 (ko) 2014-07-25
JP2009529195A (ja) 2009-08-13
KR20080100256A (ko) 2008-11-14
GB0815933D0 (en) 2008-10-08

Similar Documents

Publication Publication Date Title
US8442276B2 (en) Invariant radial iris segmentation
Chen et al. Iris recognition based on human-interpretable features
Raja Fingerprint recognition using minutia score matching
KR102501209B1 (ko) Method for identifying and/or authenticating an individual by iris recognition
WO2007103698A2 (fr) Invariant radial iris segmentation
Chirchi et al. Feature extraction and pupil detection algorithm used for iris biometric authentication system
Podder et al. An efficient iris segmentation model based on eyelids and eyelashes detection in iris recognition system
Xu et al. Improving the performance of iris recognition system using eyelids and eyelashes detection and iris image enhancement
CN112926516B (zh) A robust method for extracting the region of interest from finger vein images
WO2007097510A1 (fr) Method for identifying an iris by elastic deformation
Gupta et al. Iris recognition system using biometric template matching technology
Jung et al. Fingerprint classification using the stochastic approach of ridge direction information
Tahir et al. An accurate and fast method for eyelid detection
KR100794361B1 (ko) Eyelid detection and eyelash interpolation method for improving iris recognition performance
Kulshrestha et al. Finger print recognition: survey of minutiae and gabor filtering approach
Arora et al. Human identification based on iris recognition for distant images
Khan et al. Fast and efficient iris segmentation approach based on morphology and geometry operation
Sahmoud Enhancing iris recognition
George et al. A survey on prominent iris recognition systems
Gavrilescu Shape Variance-Based Feature Extraction for Biometric Fingerprint Analysis
Poonguzhali et al. Iris indexing techniques: A review
US7333640B2 (en) Extraction of minutiae from a fingerprint image
Wibowo et al. Real-time iris recognition system using a proposed method
Sojan et al. Fingerprint Image Enhancement and Extraction of Minutiae and Orientation
Tian et al. A practical iris recognition algorithm

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application
ENP Entry into the national phase

Ref document number: 0815933

Country of ref document: GB

Kind code of ref document: A

Free format text: PCT FILING DATE = 20070301

WWE Wipo information: entry into national phase

Ref document number: 0815933.7

Country of ref document: GB

WWE Wipo information: entry into national phase

Ref document number: 2008558461

Country of ref document: JP

NENP Non-entry into the national phase

Ref country code: DE

WWE Wipo information: entry into national phase

Ref document number: 1020087022043

Country of ref document: KR

122 Ep: pct application non-entry in european phase

Ref document number: 07757674

Country of ref document: EP

Kind code of ref document: A2