WO2007097510A1 - Deformation-resilient iris recognition methods - Google Patents

Deformation-resilient iris recognition methods

Info

Publication number
WO2007097510A1
WO2007097510A1 (PCT/KR2006/004630)
Authority
WO
WIPO (PCT)
Prior art keywords
subregions
iris
image information
subregion
registration
Prior art date
Application number
PCT/KR2006/004630
Other languages
French (fr)
Inventor
Daehoon Kim
Nam-Sook Wee
Sung Jin Lee
Song-Hwa Kwon
Hyeong In Choi
Jung Kyo Sohn
Du Seop Jung
Seungmin Paik
Original Assignee
Iritech Inc.
Priority date
Filing date
Publication date
Application filed by Iritech Inc. filed Critical Iritech Inc.
Publication of WO2007097510A1 publication Critical patent/WO2007097510A1/en

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/18Eye characteristics, e.g. of the iris
    • G06V40/197Matching; Classification
    • GPHYSICS
    • G08SIGNALLING
    • G08GTRAFFIC CONTROL SYSTEMS
    • G08G1/00Traffic control systems for road vehicles
    • G08G1/09Arrangements for giving variable traffic instructions
    • G08G1/095Traffic lights
    • FMECHANICAL ENGINEERING; LIGHTING; HEATING; WEAPONS; BLASTING
    • F21LIGHTING
    • F21VFUNCTIONAL FEATURES OR DETAILS OF LIGHTING DEVICES OR SYSTEMS THEREOF; STRUCTURAL COMBINATIONS OF LIGHTING DEVICES WITH OTHER ARTICLES, NOT OTHERWISE PROVIDED FOR
    • F21V15/00Protecting lighting devices from damage

Definitions

  • the present invention relates, in general, to an iris recognition method, and, more particularly, to a deformation-resilient iris recognition method that is capable of preventing a recognition rate from being reduced due to an error occurring during the setting of a boundary between an iris region and another region (pupil or sclera) or an error occurring because of the deformation of an iris in the iris recognition method.
  • biometric recognition technology, or biometrics, for verifying identity using the physical or behavioral characteristics of individuals, has attracted attention.
  • Biometrics' chief advantage as an ID is that it is immune to loss, theft, forgetfulness and reproduction, and has high reliability.
  • biometric recognition technology examples include fingerprint recognition, face recognition, retina identification, and iris recognition.
  • Iris recognition is a technology for verifying identity using an iris pattern that exists between a pupil at the center of the eye and a sclera.
  • a human's iris is the colored portion of the eye having a diameter of about 11 mm, and refers to the portion outside the pupil.
  • the iris is composed of muscles that control the size of the pupil. That is, as shown in Fig. 1, the iris functions as a diaphragm that controls the amount of light that is incident into an eyeball from the outside.
  • such an iris recognition method includes the step of acquiring an image of an eye, including an iris, using a camera, the step of extracting the region of the iris from the image of the eye, the step of searching the extracted iris region for the unique characteristics of an individual, and the matching step of determining the similarity between the characteristics of two compared irises.
  • the image acquisition step is the step of acquiring an image of the eye, including an iris, using an image acquisition device that includes a Charge-Coupled Device (CCD) camera or video camera for acquiring images in real time, an illumination source for acquiring clear iris patterns, and a frame grabber for converting analog images into digital images.
  • Fig. 2 shows an image of the eye and the gray data thereof in three-dimensional (3-D) graphic form.
  • the iris region extraction step is the step of separating the iris region from the image of the eye acquired via the image acquisition device, and accurate iris region extraction is necessary to achieve consistent iris characteristic extraction. In general, an iris region is extracted through the determination of the center of a pupil and an iris and the distances from the center.
  • a method using a circular boundary detector (the J. G. Daugman method), a method using the Hough transform (the R. P. Wildes method), and a method using a template are currently being used. These methods are based on the assumption that the shape of an iris boundary is circular and the fact that a pupil is darker than the surrounding regions.
  • a method of converting an iris image based on the center of a pupil or an iris and the distances from the center to the points of the iris image, determined using the method, into the polar coordinate system is widely being used.
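The polar-coordinate conversion described above ("rubber-sheet" unwrapping about the pupil center) can be sketched as follows. This is a minimal illustration under assumptions the text does not fix: nearest-neighbour sampling, a shared center for pupil and iris, and the 256×64 output layout used later in the text. The function and parameter names are hypothetical.

```python
import numpy as np

def to_polar(eye, center, r_pupil, r_iris, n_theta=256, n_r=64):
    """Unwrap the annular iris region into an n_r x n_theta polar image.

    Samples along rays from the pupil boundary to the iris boundary;
    the lateral axis is the angle and the vertical axis the normalized
    radius, matching the 256x64 layout described in the text.
    """
    cx, cy = center
    thetas = np.linspace(0, 2 * np.pi, n_theta, endpoint=False)
    radii = np.linspace(r_pupil, r_iris, n_r)
    polar = np.zeros((n_r, n_theta), dtype=eye.dtype)
    for j, t in enumerate(thetas):
        for i, r in enumerate(radii):
            x = int(round(cx + r * np.cos(t)))
            y = int(round(cy + r * np.sin(t)))
            # clamp to the image bounds (nearest-neighbour sampling)
            x = min(max(x, 0), eye.shape[1] - 1)
            y = min(max(y, 0), eye.shape[0] - 1)
            polar[i, j] = eye[y, x]
    return polar
```

In practice bilinear interpolation and separately fitted pupil/iris boundaries would replace the nearest-neighbour, shared-center simplification used here.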
  • the iris characteristic extraction step is the step of dividing an iris region, converted into the polar coordinate system, into unit cells and extracting the characteristics of patterns from the respective cells.
  • the extracted characteristics of each pattern are represented using values that reflect variations in the gray value of the iris pattern.
  • the effective iris characteristics may be encoded in specific form, which is referred to as iris code generation.
  • a wavelet transform analysis method including Gabor transform is chiefly used as a method for such iris characteristic extraction.
  • the matching step is the step of comparing previously registered iris characteristics or a previously registered iris code with iris characteristics or an iris code, extracted from an input iris image, so as to verify identity.
  • a person under consideration is determined to be the person himself (accept) or not to be the person himself (reject) depending on the similarity between compared characteristics or compared codes.
  • a generally used method is a method of measuring Hamming distances.
  • a set of bit values assigned for respective dimensions is compared with another set of bit values; a new set of bit values is formed through conversion into 0 in the case of coincidence and into 1 in the case of non-coincidence, and the sum of the resulting bit values is then divided by the number of bit values. Accordingly, in the case where the input data is identical to the registered data, the results of comparison of all bits are 0, so a final resulting value close to 0 indicates that the input data is an authentic person's data.
  • if an appropriate threshold value is set, it can serve as the boundary that distinguishes an authentic person's data from non-authentic persons' data. Accordingly, a person under consideration is determined to be an authentic person if the similarity value between two compared irises is greater than the threshold value, and is determined not to be an authentic person if the similarity value is less than the threshold value.
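The Hamming-distance comparison described above can be sketched as follows. This is a minimal illustration, not the patent's implementation: the 0.32 threshold is a hypothetical value, and the text's "similarity" is expressed here as a normalized distance, where smaller means more similar.

```python
import numpy as np

def hamming_distance(code_a: np.ndarray, code_b: np.ndarray) -> float:
    """Fraction of disagreeing bits between two binary iris codes.

    XOR yields 1 where bits disagree (non-coincidence) and 0 where they
    coincide; dividing the sum by the code length normalizes the
    distance to [0, 1], so identical codes score exactly 0.
    """
    diff = np.bitwise_xor(code_a, code_b)
    return diff.sum() / diff.size

def verify(registered: np.ndarray, probe: np.ndarray, threshold: float = 0.32) -> bool:
    """Accept when the normalized Hamming distance falls below the threshold."""
    return hamming_distance(registered, probe) < threshold
```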
  • a representative index for evaluating biometric recognition techniques is a recognition rate.
  • in iris recognition, an error occurs in the case where the iris image of a person different from a registered user is determined to be the registered user's iris image although the iris image of the different person is input, and in the case where the iris image of a person identical to a registered user is determined to be another person's image although the iris image of the identical person is input.
  • the recognition rate increases in inverse proportion to the frequency of such former cases and the frequency of such latter cases, that is, a False Acceptance Rate (hereinafter referred to as a 'FAR') and a False Rejection Rate (hereinafter referred to as a 'FRR').
  • the FAR and the FRR may be illustrated using two distributions, as shown in Fig. 3.
  • the left distribution is the distribution obtained when two pieces of iris data of a registered person are compared with each other (the function indicating this distribution is referred to as 'F(x)'), and the right distribution is the distribution obtained when the iris data of a registered person is compared with another person's iris data (the function indicating this distribution is referred to as 'G(x)').
  • a value that is obtained by calculating an area between G(x) and the X axis in a range below the threshold value is a FAR
  • a value that is obtained by calculating an area between F(x) and the X axis in a range above the threshold value is a FRR.
  • the FAR and the FRR vary with the threshold value, and may be differently set depending on the application field.
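The relation between the threshold and the two error rates can be sketched empirically: the area of G(x) below the threshold becomes the fraction of impostor comparisons wrongly accepted, and the area of F(x) above it the fraction of genuine comparisons wrongly rejected. The function below is an illustrative sketch over sampled distance values, not part of the patent.

```python
import numpy as np

def far_frr(genuine_scores, impostor_scores, threshold):
    """Estimate FAR and FRR from distance samples.

    genuine_scores  ~ F(x): distances between two images of the same iris
    impostor_scores ~ G(x): distances between different persons' irises
    FAR: fraction of impostor distances below the threshold (wrongly accepted)
    FRR: fraction of genuine distances above the threshold (wrongly rejected)
    """
    genuine = np.asarray(genuine_scores)
    impostor = np.asarray(impostor_scores)
    far = np.mean(impostor < threshold)
    frr = np.mean(genuine >= threshold)
    return far, frr
```

Sweeping the threshold over such samples traces the FAR/FRR trade-off that an application then sets according to its field of use.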
  • the left distribution of Fig. 3, that is, F(x), is a distribution that has somewhat high variance due to various factors that will be described below. High variance causes the FAR and the FRR to increase, thus decreasing the recognition rate of iris recognition.
  • a factor having the potential to increase the FAR and the FRR in the iris recognition may be the deformation of an iris itself, and other various factors exist at the various steps of the iris recognition method.
  • a human's iris has a characteristic shape that moves differently for each person while contracting or expanding according to the size of a pupil, which is a cause of the above-described error. That is, the deformation of an iris varies with the person, and even the same person's iris does not have uniform deformation.
  • a representative one of the errors that may occur at various steps of the iris recognition method is an error that occurs during the setting of an iris region at the step of extracting the iris region. That is, an iris region does not exist at the same location in an acquired image due to variation in surrounding illumination, variation in the distance between a camera and a face, and the misalignment between the optical axis of a camera and the eye, and the same characteristics are not extracted due to occlusion caused by an eyebrow or eyelid, thus causing an erroneous recognition rate, that is, the FAR or the FRR, to increase.
  • an object of the present invention is to provide a deformation-resilient iris recognition method that is capable of preventing a recognition rate from being reduced due to an error occurring during the setting of a boundary between an iris region and another region (pupil or sclera) or an error occurring because of the deformation of an iris in the iris recognition method.
  • Another object of the present invention is to provide a deformation-resilient iris recognition method that is provided with an overlap region division method so as to prevent the loss of image information existing in the boundaries between subregions when an iris region is divided into subregions using a conventional region division method; additional information can therefore be obtained using the overlap region division method, thereby providing improved reliability.
  • a further object of the present invention is to provide a deformation-resilient iris recognition method that is capable of preventing a recognition rate from being reduced due to the distortion of information that occurs when occlusion exists in an iris region.
  • the present invention provides a deformation-resilient iris recognition method including the step of acquiring an image of an eye, including an iris, using a camera so as to acquire a registration or authentication image for verification of identity, the step of extracting a region of the iris from the image of the eye, the step of dividing the extracted iris region into a plurality of subregions and searching for iris characteristics of respective subregions, and the matching step of determining similarity between registration iris characteristics and authentication iris characteristics, wherein the matching step uses a vibration method of comparing a subregion of registration image information and a registration vibration region, including the subregion and an expanded region of the subregion, with an authentication vibration region, including a subregion of authentication image information, located at a location corresponding to a location of the subregion of the registration image information, and an expanded region of the latter subregion.
  • the matching step is the step of comparing the subregion of the registration image information with a vibration region of a subregion of the authentication image information that is positioned at a location corresponding to a location of the former subregion.
  • the matching step is the step of comparing a subregion of the authentication image information with a vibration region of a subregion of the registration image information that is positioned at a location corresponding to a location of the former subregion.
  • the matching step is the step of comparing a subregion of the registration image information with a vibration region of a subregion of the authentication image information that is positioned at a location corresponding to a location of the former subregion, and comparing a subregion of the authentication image information with a vibration region of a subregion of the registration image information that is positioned at a location corresponding to a location of the former subregion.
  • a means for measuring a distance between the two compared subregions is obtained using the following function:
  • A is a subregion of the registration image information and B is a subregion of the authentication image information, or where A is a subregion of the authentication image information and B is a subregion of the registration image information.
  • the distance between the two compared subregions is determined to be a smaller value between a value of the following function:
  • D(A,B) = min over B' in v(B) of d(φ(A), φ(B')), where A is a subregion of the registration image information and B is a subregion of the authentication image information, or where A is a subregion of the authentication image information and B is a subregion of the registration image information.
  • variables used in the function are the following two progressions:
  • the matching step is the step of determining similarity using scoring function values that are obtained from distance values between subregions by applying the vibration method to all of the subregions of the registration image information or by applying the vibration method to some subregions of the registration image information.
  • the matching step includes the step of finding a correspondence between the registration image information and the authentication image information, the step of searching all the subregions of the registration image information or some subregions of the registration image information for a subregion of authentication image information corresponding to the registration subregion based on the correspondence, and the step of determining similarity using a scoring function value that is obtained from the distances between subregions by applying the vibration method to the subregion of the registration image information and the subregion of the authentication image information.
  • the matching step includes the step of finding a correspondence between the registration image information and the authentication image information, the step of searching all subregions of the registration image information or some subregions of the registration image information for a subregion of the authentication image information corresponding to the registration subregion based on the correspondence, and the step of determining similarity using a scoring function value that is obtained from distance values between the subregions by comparing the subregions of the registration image information with the subregions of the authentication image information.
  • the step of finding the correspondence comprises the step of selecting some subregions from among the subregions of the registration image information, and the step of applying the vibration method to the respective selected registration subregions and the subregions of the authentication image information corresponding to the selected registration subregions, thereby obtaining the location correspondence between the subregions of authentication image information that represents the smallest distance value for the selected registration subregions.
  • the correspondence is any one of the translation in angular direction, the translation in radial direction, and the translation in both angular and radial directions.
  • the present invention provides a deformation-resilient iris recognition method including the step of acquiring an image of an eye, including an iris, using a camera so as to acquire a registration or authentication image for verification of identity, the step of extracting a region of the iris from the image of the eye, the step of dividing the extracted iris region into a plurality of subregions and searching for iris characteristics of respective subregions, and the matching step of determining similarity between registration iris characteristics and authentication iris characteristics, wherein an iris region division method of the step of finding the iris characteristics and the step of performing matching is performed through basic region division of dividing an iris region into a plurality of subregions and overlap region division of dividing the iris region into a plurality of subregions including the boundary lines of the basic region division.
  • the step of finding the characteristics of the iris and the step of performing matching include the step of using a scoring function value, obtained from distance values obtained through the basic region division and the overlap region division, as a value for determining the similarity.
  • the present invention provides a deformation-resilient iris recognition method including the step of acquiring an image of an eye, including an iris, using a camera so as to acquire a registration or authentication image for verification of identity, the step of extracting a region of the iris from the image of the eye, the step of dividing the extracted iris region into a plurality of subregions and searching for iris characteristics of respective subregions, and the matching step of determining similarity between registration iris characteristics and authentication iris characteristics, wherein the step of finding the iris characteristics and the step of performing matching include the step of dividing an iris region of the registration or authentication image into a plurality of subregions and determining whether occlusion has occurred in the respective subregions; the step of obtaining distances between extracted iris characteristics of the registration subregions and extracted iris characteristics of the authentication subregions for common subregions in which occlusion has not occurred and obtaining a representative value of the distance values; and the step of assigning the representative value to the subregions in which occlusion has occurred.
  • the representative value is any one of an average value, a median value and a mode value of the distance values.
  • the scoring function value is any one of a weighted average, a weighted geometric average and a weighted square average square root.
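The occlusion handling above can be sketched as follows. This is a minimal illustration, not the patent's implementation: it uses the mean as the representative value (the text equally allows the median or mode) and a weighted average as the scoring function; the names are hypothetical.

```python
import numpy as np

def occlusion_aware_score(distances, occluded, weights=None):
    """Score subregions while neutralizing occluded ones.

    `distances[k]` is the distance for subregion k; `occluded[k]` marks
    subregions hidden by an eyelid or eyelash in either image. Occluded
    subregions are assigned the representative value (here the mean) of
    the distances of the non-occluded subregions, and the final score is
    a weighted average over all subregions.
    """
    d = np.asarray(distances, dtype=float)
    occ = np.asarray(occluded, dtype=bool)
    representative = d[~occ].mean()       # representative value of valid cells
    d = np.where(occ, representative, d)  # substitute it into occluded cells
    if weights is None:
        weights = np.ones_like(d)
    return np.average(d, weights=weights)
```

Substituting the representative value keeps an occluded subregion from distorting the score in either direction, which is the stated aim of this step.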
  • the deformation-resilient iris recognition method includes an eye image acquisition step, an iris region extraction step, a region division and iris characteristic finding step, and a matching step. Since the eye image acquisition step, the operation of the iris region extraction step and the region division and iris characteristic finding step are the same as those of the prior art, detailed descriptions thereof are omitted here to avoid redundancy of description, and a detailed description will be given with emphasis on the newly added steps.
  • the matching step of the deformation-resilient iris recognition method according to the present invention uses a vibration method that compares a registration vibration region with an authentication vibration region.
  • the registration vibration region includes the subregion of registration image information and an expanded region adjacent to the subregion
  • the authentication vibration region includes the subregion of authentication image information existing at a location corresponding to that of the subregion of the registration image information and an expanded region adjacent to the subregion.
  • a corresponding location refers to a location at which cell A_ij of image A corresponds to cell B_ij of image B when the images A and B are compared with each other, as illustrated in Fig. 6.
  • the expanded region and the vibration method will be described in detail below.
  • a region transformed into a polar coordinate system is divided into a plurality of unit regions (hereinafter referred to as a 'cell' or a 'subregion').
  • cell A_ij may be partitioned off from image A
  • cell B_ij may be partitioned off from image B at a location corresponding to the location of the cell A_ij.
  • not only the cell A_ij, but also its expanded regions may be defined in the image A
  • not only the cell B_ij, but also its expanded regions may be defined in the image B.
  • for a specific cell, its expanded regions, each including the specific cell, are referred to as the 'vibration region' of the specific cell.
  • a vibration region includes a set of expanded regions, each including the cell B_ij.
  • An expanded region may be composed of regions that respectively correspond to rectangular regions that are centered at the four corners of the cell B_ij. If the boundary of the expanded region is further expanded, the vibration region of B_ij may be configured to include 8 more rectangular regions.
  • the vibration method is a technique in which, for the compared images A and B, A_ij of the image A and B_ij of the image B, which are positioned at corresponding locations, are not compared with each other directly, but the vibration region of a specific cell of one of the images A and B is compared with the vibration region of the other image that is positioned around a location corresponding to that of the specific cell.
  • the matching step of a deformation-resilient iris recognition method is the step of comparing a subregion of the registration image information with the vibration region of the subregion of the authentication image information that is positioned at a location corresponding to that of the former subregion.
  • the matching step of a deformation-resilient iris recognition method is the step of comparing a subregion of the authentication image information with the vibration region of the subregion of the registration image information that is positioned at a location corresponding to that of the former subregion.
  • a technique in which, in the use of the above-described vibration method, when two images are compared with each other, the vibration region of a cell in one of the images is not defined but the vibration region of a cell that is positioned in the other image at a location corresponding to that of the specific cell is defined, and then the cell in the former image is compared with the vibration region of the cell in the latter image, is referred to as an "asymmetric vibration method."
  • the distances between a designated cell A_ij of image A and the vibration regions of cell B_ij, placed at a location corresponding to that of the cell A_ij, are calculated, and the smallest of the distance values is determined to be the distance between the designated cell A_ij and the corresponding cell B_ij.
  • the image A may be a registration image and the image B may be an authentication image, and vice versa.
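The asymmetric vibration method above can be sketched as follows. This is a minimal illustration under assumptions the text does not fix: cells are raw gray-level blocks of the 64×256 polar image, the per-cell distance d is the mean absolute gray-level difference, and the five offsets shown are hypothetical members of a vibration region; the angular (lateral) axis wraps periodically.

```python
import numpy as np

def vibration_distance(polar_reg, polar_auth, cell, cell_h=8, cell_w=16, offsets=None):
    """Asymmetric vibration distance for one cell.

    `cell` is the (row, col) index of the registration cell A_ij. The
    cell is compared against every shifted copy B' of the corresponding
    authentication cell (the vibration region v(B)), and the smallest
    mean absolute gray-level difference is returned as the distance.
    """
    if offsets is None:
        # hypothetical (dx, dy) shifts forming a small vibration region
        offsets = [(0, 0), (-4, -2), (4, -2), (-4, 2), (4, 2)]
    r, c = cell
    top, left = r * cell_h, c * cell_w
    a = polar_reg[top:top + cell_h, left:left + cell_w].astype(float)
    best = np.inf
    n_rows, n_cols = polar_auth.shape
    for dx, dy in offsets:
        rows = np.arange(top + dy, top + dy + cell_h).clip(0, n_rows - 1)
        cols = np.arange(left + dx, left + dx + cell_w) % n_cols  # angular wrap
        b = polar_auth[np.ix_(rows, cols)].astype(float)
        best = min(best, np.abs(a - b).mean())
    return best
```

Taking the minimum over the shifted copies is what makes the comparison resilient to small misalignments between the registration and authentication images.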
  • the matching step of a deformation-resilient iris recognition method is the step of comparing a subregion of the registration image information with the vibration region of the subregion of the authentication image information that is positioned at a location corresponding to that of the former subregion and comparing a subregion of the authentication image information with the vibration region of the subregion of the registration image information that is positioned at a location corresponding to that of the former subregion.
  • a vibration method may be a symmetric vibration method of obtaining the distances between cell A_ij of image A and the vibration regions of corresponding cell B_ij, obtaining the distances between cell B_ij of image B and the vibration regions of corresponding cell A_ij, and determining the shortest of the obtained distances to be the distance between the subregions of the two compared images.
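The symmetric variant can be sketched as a pair of asymmetric comparisons whose smaller result is kept. As before this is an illustrative sketch: cells are represented as gray-level arrays and the per-cell distance is the mean absolute difference, which the text does not prescribe.

```python
import numpy as np

def asym_distance(a, b_variants):
    """Distance from cell a to the nearest member of the vibration region v(B)."""
    return min(np.abs(a - b).mean() for b in b_variants)

def symmetric_distance(a, a_variants, b, b_variants):
    """Symmetric vibration distance: compare A against v(B) and B against
    v(A), and keep the shorter of the two asymmetric distances."""
    return min(asym_distance(a, b_variants), asym_distance(b, a_variants))
```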
  • a means for measuring the distance between the two compared subregions is obtained using the following function:
  • the distance between two compared subregions is determined to be the smaller value between the value of the following function:
  • an equation for obtaining the distance between two subregions for a distance measuring means is obtained from the following function:
  • an image transformed into a polar coordinate system is determined to include 256×64 pixels, as illustrated in Fig. 8, where the lateral direction corresponds to angular coordinates and the vertical direction corresponds to radial coordinates.
  • a total of 128 cells are obtained.
  • each of the cells includes 16×8 pixels, and each of the pixels has a gray value ranging from 0 to 255.
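The cell division just described can be sketched as follows: a 64×256 polar image split into 8-pixel-high, 16-pixel-wide blocks gives 8 × 16 = 128 cells. The function name is hypothetical.

```python
import numpy as np

def split_cells(polar, cell_h=8, cell_w=16):
    """Split a 64x256 polar iris image into 16x8-pixel cells.

    64/8 = 8 rows of cells and 256/16 = 16 columns of cells yield the
    128 cells mentioned in the text, in row-major order.
    """
    n_r, n_c = polar.shape
    return [polar[i:i + cell_h, j:j + cell_w]
            for i in range(0, n_r, cell_h)
            for j in range(0, n_c, cell_w)]
```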
  • a region to which the vibration method is applied is defined for each unit cell.
  • examples of expanded regions are shown for each unit cell.
  • the subregions in the vibration region may be represented by ordered pairs, and the ordered pairs may be indicated as shown in the right view of Fig. 9.
  • the ordered pair (2, 3) implies that a corresponding point moves by 2 pixels in the lateral direction and by 3 pixels in the vertical direction.
  • the expansion shown in the right view of Fig. 9 is achieved by determining a total of 12 ordered pairs ranging from (-16, -8) to (16, 8).
  • a vibration region composed of one corresponding cell (0, 0) and 12 expanded regions is generated.
  • the ordered pairs (-16, 0), (-16, 8) and (-8, 4) in region (1) and the ordered pairs (16, -8), (8, -4), (16, 0), (8, 4) and (16, 8) in region (4) are arranged in opposite directions due to the periodicity.
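One way to enumerate such a vibration region is sketched below. This is an assumption-laden illustration: it supposes the expansion uses half-cell and full-cell shifts of a 16×8 cell in each direction, which reproduces the (0, 0) cell itself plus 12 ordered pairs ranging from (-16, -8) to (16, 8), but the patent does not give this construction explicitly.

```python
def vibration_offsets(cell_w=16, cell_h=8):
    """Enumerate the ordered-pair shifts of a hypothetical vibration region.

    Returns the identity shift (0, 0), the four half-cell shifts toward
    the cell corners, and the eight full-cell shifts: 13 pairs in total.
    """
    half = [(cell_w // 2, cell_h // 2), (-cell_w // 2, cell_h // 2),
            (cell_w // 2, -cell_h // 2), (-cell_w // 2, -cell_h // 2)]
    full = [(dx, dy) for dx in (-cell_w, 0, cell_w)
                     for dy in (-cell_h, 0, cell_h) if (dx, dy) != (0, 0)]
    return [(0, 0)] + half + full
```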
  • v(B) is defined as the set of vibration regions for cell B.
  • A is a subregion of registration image information and B is a subregion of authentication image information, or A is a subregion of authentication image information and B is a subregion of registration image information.
  • the matching step of a deformation-resilient iris recognition method is the step of determining similarity using a scoring function value that is obtained from distance values between subregions by applying the vibration method to all the subregions of the registration image information or by applying the vibration method to some subregions of the registration image information.
  • as for the range to which the vibration method is applied, similarity is determined using a scoring function value that is obtained from the distance values between subregions by applying the vibration method to all the subregions of the registration image information or to some subregions of the registration image information.
  • the matching step of a deformation-resilient iris recognition method includes the step of finding the correspondence between the registration image information and the authentication image information, the step of searching all the subregions of the registration image information or some subregions of the registration image information for the subregion of the authentication image information corresponding to the registration subregion based on the correspondence, and the step of determining similarity using a scoring function value that is obtained from the distances between the subregions by applying the vibration method to the subregion of the registration image information and the subregion of the authentication image information. That is, the matching step includes the step of finding the correspondence, the step of searching for the subregion, and the step of applying the vibration method.
  • the step of finding the correspondence is the step of selecting some subregions from among the subregions of the registration image information and applying the vibration method to the respective selected registration subregions and the subregions of authentication image information corresponding to the selected registration subregions, thereby obtaining the location correspondence between the subregions of authentication image information that indicates the smallest distance value for the selected registration subregions.
  • the step of searching for the subregion is the step of searching all the subregions of the registration image information or some subregions of the registration image information for the subregion of authentication image information to which the correspondence has been applied.
  • the step of applying the vibration method is the step of determining similarity using a scoring function value that is obtained from the distances between subregions by applying the vibration method to the subregion of the registration image information and the subregion of the authentication image information.
  • the correspondence may be a translation that compensates for an error occurring during the setting of a boundary between an iris region and another region (pupil or sclera) or an error occurring because of the deformation of the iris.
  • vibration is performed over a narrower region, thereby reducing the overall time required for authentication.
  • the matching step of a deformation-resilient iris recognition method includes the step of finding the correspondence between the registration image information and the authentication image information, the step of searching all the subregions of the registration image information or some subregions of the registration image information for the subregion of the authentication image information corresponding to the registration subregion based on the correspondence, and the step of determining similarity using a scoring function value that is obtained from the distance values between the subregions by comparing the subregions of the registration image information with the subregions of the authentication image information.
  • the vibration method is not used at the step of performing comparison on the subregions of the authentication image information after the correspondence has been found; therefore, the overall time required for authentication can be further reduced.
  • the step of finding the correspondence of a deformation-resilient iris recognition method includes the step of selecting some subregions from among the subregions of the registration image information and applying the vibration method to the respective selected registration subregions and the subregions of authentication image information corresponding to the selected subregions, thereby obtaining the location correspondence between the subregions of authentication image information that indicate the smallest distance value for the selected registration subregions.
  • four cells are selected from among the respective subregions of the registration image information, and the respective cells are compared with the vibration regions of authentication image information corresponding to the selected subregions, thereby searching for the subregion of authentication image information that indicates the smallest distance.
  • The correspondence may be an average value that is obtained by acquiring, for each of the four selected cells, the displacement between the registration subregion and the subregion of authentication image information that represents the smallest distance value, and averaging these pieces of information.
  • portions that are acquired by applying the correspondence to the subregions of the registration image information are portions indicated by dotted lines, as shown in Fig. 10, that is, the corresponding subregions of authentication image.
  • the correspondence of a deformation-resilient iris recognition method according to still another embodiment of the present invention may be any one of translation in the angular direction, translation in the radial direction, and translation in both the angular and radial directions.
  • the vibration method of the deformation-resilient iris recognition method functions to prevent the recognition rate from being reduced due to an error occurring during the setting of an iris region and another region (pupil or sclera) or an error occurring because of the deformation of an iris.
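A rough sketch of the vibration matching described above follows. The feature distance (plain Euclidean here) and the scoring function (a plain average) are assumptions for illustration; the text leaves both open, naming weighted averages, weighted geometric averages, and weighted square average square roots as options. All function names are illustrative.

```python
# Sketch of vibration-method matching: for each registration subregion,
# take the minimum feature distance over all subregions in the
# corresponding authentication vibration region, then combine the
# per-cell distances with a scoring function (a plain average here).

def euclidean(a, b):
    # Euclidean distance between two feature vectors (an assumed metric).
    return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5

def cell_distance(reg_feat, auth_vibration_feats):
    # Smallest distance between the registration cell's feature vector
    # and any subregion in the authentication vibration region.
    return min(euclidean(reg_feat, f) for f in auth_vibration_feats)

def score(reg_feats, auth_vibrations):
    # reg_feats: one feature vector per registration subregion.
    # auth_vibrations: for each subregion, the feature vectors of every
    # subregion in its authentication vibration region.
    dists = [cell_distance(r, v) for r, v in zip(reg_feats, auth_vibrations)]
    return sum(dists) / len(dists)  # lower score = more similar
```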
  • a deformation-resilient iris recognition method includes an eye image acquisition step, an iris region extraction step, a region division and iris characteristic finding step, and a matching step. Since the eye image acquisition step and the iris region extraction step are the same as those of the prior art, detailed descriptions thereof are omitted here to avoid redundancy of description, and a detailed description is given with emphasis on the newly added steps.
  • the step of finding the characteristics of an iris and the step of performing matching are respectively the steps of finding the characteristics of the iris and measuring similarity through the basic region division of dividing an iris region into a plurality of subregions, and the overlap region division of dividing the iris region into a plurality of subregions including the boundary lines of the basic region division.
  • similarity is measured by applying the step of finding iris characteristics and the step of performing matching to the existing unit subregions (hereinafter referred to as 'basic subregions'), and similarity is also measured by applying the step of finding iris characteristics and the step of performing matching to overlap subregions that are composed of regions including the boundaries of the basic regions (hereinafter referred to as 'overlap subregions').
  • an overlap region division method using the boundaries between the subregions in the basic division is employed, so that the loss of image variation pattern information around the boundaries between regions is prevented and additional information is acquired from the image variation pattern information, thereby improving reliability. Furthermore, when frequency transform is performed on the cells into which an image is divided so as to compare two images, not all the frequency information is used; rather, the frequency band that provides important information varies with the size of a cell. Accordingly, unique frequency information that varies with the size of a cell can be additionally obtained.
  • the image variation pattern information around boundaries that is lost during the frequency transform of the respective cells in the basic division can be recovered using the overlap subregions in the overlap division; therefore, more accurate measurement of similarity can be performed.
  • the additional regions may be regions that include the boundaries of existing subregions.
  • This principle can be used in the angular direction in the same manner. Furthermore, it may be possible to form a scoring function by assigning appropriate weights after calculating distances by comparing cells formed through the basic region division with cells formed through the overlap region division.
  • the step of finding the characteristics of the iris and the step of performing matching include the step of using a scoring function value, obtained from distance values obtained through the basic region division and the overlap region division, as a value for determining similarity.
  • the scoring function value may be any one of a weighted average, a weighted geometric average, and a weighted square average square root.
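The combination of the basic and overlap region divisions into a single similarity value might look as follows. The weighted-average form is one of the options the text names; the weight values and the half-cell radial shift used to build the overlap cells are illustrative assumptions, as are the function names.

```python
# Sketch of combining the distances from the basic region division and
# the overlap region division into one similarity score via a weighted
# average (the text also allows weighted geometric averages and weighted
# square average square roots). Weight values are assumptions.

def weighted_average_score(basic_dists, overlap_dists,
                           w_basic=0.6, w_overlap=0.4):
    basic = sum(basic_dists) / len(basic_dists)
    overlap = sum(overlap_dists) / len(overlap_dists)
    return w_basic * basic + w_overlap * overlap

# Overlap subregions are cells shifted so that they straddle the
# boundaries of the basic cells, e.g. shifted by half a cell radially:
def overlap_cells(cells, half_h):
    return [(x, y + half_h) for x, y in cells]
```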
  • a deformation-resilient iris recognition method includes an eye image acquisition step, an iris region extraction step, a region division and iris characteristic finding step, and a matching step. Since the eye image acquisition step and the iris region extraction step are the same as those of the prior art, detailed descriptions thereof are omitted here to avoid redundancy of description, and a detailed description is given with emphasis on the newly added steps.
  • the iris characteristic finding step and matching step of the deformation-resilient iris recognition method according to the present invention include the step of comparing iris characteristics, extracted from the subregions, with each other and performing determination using an occlusion evaluation method.
  • the occlusion evaluation method includes the step of determining whether occlusion has occurred; the step of obtaining the representative value of common subregions; the step of assigning the representative value to a subregion in which occlusion has occurred; and the step of calculating the scoring function value of all the subregions and determining similarity.
  • the step of determining whether occlusion has occurred according to the occlusion evaluation method is the step of dividing the iris region of a registration or authentication image into a plurality of subregions and determining whether occlusion has occurred in subregions. This step is the step of eliminating subregions in which occlusion, which is one of the factors that most frequently generate an error in iris recognition, has occurred or selectively eliminating a subregion in which partial occlusion has occurred according to the extent that the occlusion has occurred.
  • Fig. 15 shows the situation in which an image of the iris region is converted into the polar coordinate system and the cells (indicated by the mark "X") of regions in which 50 percent or more occlusion has occurred are eliminated.
  • the step of obtaining the representative value of common subregions according to the occlusion evaluation method is the step of obtaining distances between the extracted iris characteristics of the registration subregions and the extracted iris characteristics of the authentication subregions for common subregions in which the occlusion has not occurred and obtaining the representative value of the distance values.
  • image A may be a registration image and image B may be an authentication image.
  • since the region in which occlusion occurs may vary at the time of registration or authentication, distances are calculated, when two images are compared with each other, only for common cells, excluding subregions in which occlusion has occurred.
  • Fig. 17 shows the common cell regions of the two images that are shown in Fig. 16.
  • a representative value is obtained from the distance values that are calculated from the common cell regions.
  • an average value, a median value, or a mode value may be used as the representative value in the deformation-resilient iris recognition method according to still another embodiment of the present invention.
  • the step of assigning a representative value to subregions in which occlusion has occurred according to the occlusion evaluation method is the step of assigning the representative value to the cells, eliminated due to the occlusion, as the distance value.
  • the step of calculating the scoring function value of all the subregions and determining similarity according to the occlusion evaluation method is the step of determining the average value of the distance values, given to common subregions in which occlusion has not occurred and subregions in which occlusion has occurred, to be a value for determining similarity.
  • the scoring function value of the deformation-resilient iris recognition method according to another embodiment of the present invention be any one of a weighted average, a weighted geometric average, and a weighted square average square root.
  • a method of calculating a value using a geometric average as the prior-art scoring function and a method of obtaining a value using a geometric average as a scoring function according to an embodiment of the present invention are described in detail below as examples of obtaining a value for determining similarity. If each of two compared images A and B is a region composed of m×n cells, the number of cells, exclusive of cells in which occlusion has occurred, is N1, and the distances measured for the N1 cells are respectively d1, d2, ..., and dN1, the following scoring function F can be obtained:
  • if each of two selected, compared images A and B is a region composed of m×n cells, the number of cells, exclusive of cells in which occlusion has occurred, is N, and the distances measured for the N cells are respectively d1, d2, ..., and dN, a representative value is obtained from the measured values. Any one of an average value, a median value, or a mode value may be used.
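The occlusion evaluation steps above can be sketched as follows. The median as the representative value and the geometric average as the scoring function are choices the text permits, not the only ones; the function name and data layout are illustrative assumptions.

```python
import math

# Sketch of the occlusion evaluation method: cells with heavy occlusion
# are dropped, distances are computed over the common (non-occluded)
# cells, their representative value (the median here; an average or
# mode is equally allowed) is assigned to the occluded cells, and a
# geometric-average scoring function is taken over all cells.

def occlusion_score(dists, occluded):
    # dists: per-cell distance for the common cells, keyed by cell index.
    # occluded: indices of cells eliminated because of occlusion.
    common = sorted(dists.values())
    median = common[len(common) // 2]          # representative value
    all_d = list(dists.values()) + [median] * len(occluded)
    # Geometric average of the per-cell distances (lower = more similar).
    return math.exp(sum(math.log(d) for d in all_d) / len(all_d))
```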
  • the matching step of determining similarity between iris characteristics includes the step of comparing two compared images by comparing a specific region with other regions in addition to a corresponding region, so that there is the advantage of preventing the recognition rate from being reduced due to an error occurring during the setting of a boundary between an iris region and another region (pupil or sclera) or an error occurring because of the deformation of an iris.
  • Fig. 1 is a sectional view illustrating a human's iris
  • Fig. 2 is a graphic showing an image of the eye and the gray data thereof in 3-D graphic form
  • Fig. 3 is a graph illustrating FAR and FRR, which are evaluation indices for a biometric recognition system
  • Fig. 4 is schematic diagrams illustrating the situation in which a characteristic shape existing within an iris moves as the size of a pupil varies;
  • Fig. 5 is photos showing the situation in which a characteristic shape existing within an iris moves as the size of a pupil varies
  • Fig. 6 is schematic diagrams illustrating an expanded region according to an embodiment of the present invention
  • Fig. 7 is schematic diagrams illustrating an expanded region according to another embodiment of the present invention
  • Fig. 8 is a schematic diagram illustrating a method of dividing an image converted into the polar coordinate system
  • Fig. 9 is a schematic diagram showing examples of the expanded regions of respective unit cells.
  • Fig. 10 is a schematic diagram showing an example of the correspondence between selected subregions
  • Fig. 11 is schematic diagrams illustrating basic subregions and overlap subregions in a deformation-resilient iris recognition method according to an embodiment of the present invention
  • Fig. 12 is schematic diagrams illustrating a method of additionally acquiring image information using overlap subregions
  • Fig. 13 is schematic diagrams illustrating basic subregions and overlap subregions in a deformation-resilient iris recognition method according to another embodiment of the present invention.
  • Fig. 14 is a schematic diagram illustrating the iris region of a registration or authentication image in which occlusion has occurred
  • Fig. 15 is schematic diagrams showing the situation in which the cells of subregions in which 50 percent or more occlusion has occurred are excluded from an image of an iris region;
  • Fig. 16 is schematic diagrams showing the patterns of occlusion that are generated in two images that are compared with each other so as to perform authentication.
  • Fig. 17 is a schematic diagram showing the common cell regions of the two images shown in Fig. 16.

Abstract

Disclosed herein is a deformation-resilient iris recognition method. The method includes the step of acquiring an image of an eye using a camera so as to acquire a registration or authentication image for verification of identity, the step of extracting a region of the iris from the image of the eye, the step of dividing the extracted iris region into a plurality of subregions and searching for iris characteristics of subregions, and the matching step of determining similarity between registration iris characteristics and authentication iris characteristics. The matching step uses a vibration method of comparing a subregion of registration image information and a registration vibration region, including the subregion and an expanded region of the subregion, with an authentication vibration region, including a corresponding subregion of authentication image information and an expanded region of the latter subregion.

Description

DESCRIPTION
DEFORMATION-RESILIENT IRIS RECOGNITION METHODS
Technical Field
The present invention relates, in general, to an iris recognition method, and, more particularly, to a deformation-resilient iris recognition method that is capable of preventing a recognition rate from being reduced due to an error occurring during the setting of a boundary between an iris region and another region (pupil or sclera) or an error occurring because of the deformation of an iris in the iris recognition method.
In the current high-level information society of the 21st century, the importance of accurately verifying the identity of individuals is gradually increasing, so as to prevent the leakage of information and to protect the property rights of individuals, as well as to control access to places requiring security, unlike in the past.
Recently, biometric recognition technology, or biometrics, for verifying identity using the physical or behavioral characteristics of individuals has attracted attention. Biometrics' chief advantage as an ID is that it is immune to loss, theft, forgetfulness and reproduction, and has high reliability.
Background Art
Representative examples of such biometric recognition technology include fingerprint recognition, face recognition, retina identification, and iris recognition. Iris recognition is a technology for verifying identity using an iris pattern that exists between a pupil at the center of the eye and a sclera.
A human's iris is the colored portion of the eye, having a diameter of about 11 mm, and refers to the portion outside the pupil. The iris is composed of muscles that control the size of the pupil. That is, as shown in Fig. 1, the iris functions as a diaphragm that controls the amount of light that is incident into the eyeball from the outside.
In general, such an iris recognition method includes the step of acquiring an image of an eye, including an iris, using a camera, the step of extracting the region of the iris from the image of the eye, the step of searching the extracted iris region for the unique characteristics of an individual, and the matching step of determining the similarity between the characteristics of two compared irises.
The image acquisition step is the step of acquiring an image of the eye, including an iris, using an image acquisition device that includes a Charge-Coupled Device (CCD) camera or video camera for acquiring images in real time, an illumination source for acquiring clear iris patterns, and a frame grabber for converting analog images into digital images. Fig. 2 shows an image of the eye and the gray data thereof in three-dimensional (3-D) graphic form. The iris region extraction step is the step of separating the iris region from the image of the eye acquired via the image acquisition device, and accurate iris region extraction is necessary so as to achieve consistent iris characteristic extraction. In general, an iris region is extracted through the determination of the centers of the pupil and the iris and the distances from the center. For the extraction, a method using a circular boundary detector (the J. G. Daugman method), a method using the Hough transform (the R. P. Wildes method), and a method using a template are currently used. These methods are based on the assumption that the shape of an iris boundary is circular and the fact that a pupil is darker than the surrounding region. A method of converting an iris image into the polar coordinate system, based on the center of the pupil or iris and the distances from the center to the points of the iris image determined using the above methods, is widely used.
The iris characteristic extraction step is the step of dividing an iris region, converted into the polar coordinate system, into unit cells and extracting the characteristics of patterns from the respective cells. The extracted characteristics of each pattern are represented using values that reflect variations in the gray value of the iris pattern. In this case, the effective iris characteristics may be encoded in specific form, which is referred to as iris code generation. A wavelet transform analysis method including Gabor transform is chiefly used as a method for such iris characteristic extraction.
The matching step is the step of comparing previously registered iris characteristics or a previously registered iris code with iris characteristics or an iris code extracted from an input iris image, so as to verify identity. At this step, a person under consideration is determined to be the person himself (accept) or not to be the person himself (reject) depending on the similarity between the compared characteristics or codes. A generally used method is a method of measuring Hamming distances. That is, to compare two characteristic vectors formed of binary vectors, the set of bit values assigned for the respective dimensions is compared with the other set of bit values, a new set of bit values is formed through conversion into 0 in the case of coincidence and into 1 in the case of non-coincidence, and then the sum of the resulting bit values is divided by the number of bit values. Accordingly, in the case where the input data is identical to the registered data, the results of comparison of all bits are 0, so a final resulting value close to 0 indicates that the input data is an authentic person's data. If an appropriate threshold value is set, that threshold value can be a boundary that distinguishes an authentic person's data from non-authentic persons' data. Accordingly, a person under consideration is determined to be an authentic person if the similarity value between two compared irises is greater than the threshold value, and not to be an authentic person if the similarity value is less than the threshold value.
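A minimal sketch of the Hamming-distance matching just described, assuming iris codes represented as lists of bits; the threshold value below is illustrative, not one the patent specifies:

```python
# Sketch of Hamming-distance matching: compare two binary iris codes
# bit by bit (0 on coincidence, 1 on non-coincidence) and divide the
# number of mismatches by the code length. 0.0 means identical codes;
# values near 0 indicate the same iris.

def hamming_distance(code_a, code_b):
    mismatches = sum(a != b for a, b in zip(code_a, code_b))
    return mismatches / len(code_a)

def accept(code_a, code_b, threshold=0.32):
    # Distance-based decision: accept when the normalized Hamming
    # distance falls below the threshold (threshold is an assumption).
    return hamming_distance(code_a, code_b) < threshold
```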
A representative index for evaluating biometric recognition techniques is the recognition rate. In iris recognition, an error occurs in the case where the iris image of a person different from a registered user is determined to be the registered user's iris image although the iris image of the different person is input, and in the case where the iris image of a person identical to a registered user is determined to be another person's image although the iris image of the identical person is input. Accordingly, the recognition rate increases in inverse proportion to the frequency of such former cases and the frequency of such latter cases, that is, a False Acceptance Rate (hereinafter referred to as a 'FAR') and a False Rejection Rate (hereinafter referred to as a 'FRR').
The FAR and the FRR may be illustrated using two distributions, as shown in Fig. 3. The left distribution is obtained when two pieces of iris data of a registered person are compared with each other (the function indicating this distribution is referred to as 'F(x)'), and the right distribution is obtained when the iris data of a registered person is compared with another person's iris data (the function indicating this distribution is referred to as 'G(x)'). When a specific value on the X axis is set as a threshold value, the value obtained by calculating the area between G(x) and the X axis in the range below the threshold value is the FAR, and the value obtained by calculating the area between F(x) and the X axis in the range above the threshold value is the FRR. The FAR and the FRR vary with the threshold value, and may be set differently depending on the application field.
The left distribution of Fig. 3, that is, F(x), is a distribution that has somewhat high variance due to various factors that will be described below. High variance causes the FAR and the FRR to increase, thus decreasing the recognition rate of iris recognition.
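The FAR and FRR described above can be estimated empirically from sampled comparison scores. Following the Fig. 3 convention, scores here are distance-like (genuine comparisons cluster at low values), so acceptance means falling below the threshold; the function name and the sample data are illustrative assumptions.

```python
# Sketch of estimating FAR and FRR at a given threshold from empirical
# comparison scores. genuine: scores from comparing a registered
# person's own iris data (the F(x) distribution); impostor: scores from
# comparing different people (the G(x) distribution).

def far_frr(genuine, impostor, threshold):
    # FAR: area of the impostor distribution G(x) below the threshold
    # (impostors wrongly accepted).
    far = sum(s < threshold for s in impostor) / len(impostor)
    # FRR: area of the genuine distribution F(x) above the threshold
    # (genuine users wrongly rejected).
    frr = sum(s >= threshold for s in genuine) / len(genuine)
    return far, frr
```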
A factor having the potential to increase the FAR and the FRR in the iris recognition may be the deformation of an iris itself, and other various factors exist at the various steps of the iris recognition method.
In particular, as illustrated in Figs. 4 and 5, a human's iris has a characteristic shape that moves differently for each person while contracting or expanding according to the size of a pupil, which is a cause of the above-described error. That is, the deformation of an iris varies with the person, and even the same person's iris does not have uniform deformation.
A representative one of the errors that may occur at various steps of the iris recognition method is an error that occurs during the setting of an iris region at the step of extracting the iris region. That is, an iris region does not exist at the same location in an acquired image due to variation in surrounding illumination, variation in the distance between a camera and a face, and the misalignment between the optical axis of a camera and the eye, and the same characteristics are not extracted due to occlusion caused by an eyebrow or eyelid, thus causing an erroneous recognition rate, that is, the FAR or the FRR, to increase.
Disclosure
Technical Problem
Accordingly, the present invention has been made keeping in mind the above problems occurring in the prior art, and an object of the present invention is to provide a deformation-resilient iris recognition method that is capable of preventing a recognition rate from being reduced due to an error occurring during the setting of a boundary between an iris region and another region (pupil or sclera) or an error occurring because of the deformation of an iris in the iris recognition method.
Another object of the present invention is to provide a deformation-resilient iris recognition method that employs an overlap region division method so as to prevent the loss of image information existing at the boundaries between subregions when an iris region is divided into subregions using a conventional region division method; additional information can therefore be obtained using the overlap region division method, thereby providing improved reliability.
A further object of the present invention is to provide a deformation-resilient iris recognition method that is capable of preventing a recognition rate from being reduced due to the distortion of information that occurs when occlusion exists in an iris region.
Technical Solution
In order to accomplish the above object(s), the present invention provides a deformation-resilient iris recognition method including the step of acquiring an image of an eye, including an iris, using a camera so as to acquire a registration or authentication image for verification of identity, the step of extracting a region of the iris from the image of the eye, the step of dividing the extracted iris region into a plurality of subregions and searching for iris characteristics of respective subregions, and the matching step of determining similarity between registration iris characteristics and authentication iris characteristics, wherein the matching step uses a vibration method of comparing a subregion of registration image information and a registration vibration region, including the subregion and an expanded region of the subregion, with an authentication vibration region, including a subregion of authentication image information, located at a location corresponding to a location of the subregion of the registration image information, and an expanded region of the latter subregion.
The matching step is the step of comparing the subregion of the registration image information with a vibration region of a subregion of the authentication image information that is positioned at a location corresponding to a location of the former subregion.
The matching step is the step of comparing a subregion of the authentication image information with a vibration region of a subregion of the registration image information that is positioned at a location corresponding to a location of the former subregion. The matching step is the step of comparing a subregion of the registration image information with a vibration region of a subregion of the authentication image information that is positioned at a location corresponding to a location of the former subregion, and comparing a subregion of the authentication image information with a vibration region of a subregion of the registration image information that is positioned at a location corresponding to a location of the former subregion. A means for measuring a distance between the two compared subregions is obtained using the following function:
D(A,B) = min_{B'∈v(B)} d(φ(A), φ(B'))
where A is a subregion of the registration image information and B is a subregion of the authentication image information, or where A is a subregion of the authentication image information and B is a subregion of the registration image information.
The distance between the two compared subregions is determined to be a smaller value between a value of the following function:
D(B,A) = min_{A'∈v(A)} d(φ(B), φ(A'))
and a value of the following function: D(A,B) = min_{B'∈v(B)} d(φ(A), φ(B')), where A is a subregion of the registration image information and B is a subregion of the authentication image information, or where A is a subregion of the authentication image information and B is a subregion of the registration image information.
An equation for obtaining the distance between two subregions for a distance measuring means is obtained from the following function:
(The function is reproduced only as an image in the original document.)
where variables used in the function are the following two progressions :
{a_k} = φ(A) and {b_k} = φ(B)
which are obtained by performing frequency transform on the images of the two subregions.

The matching step is the step of determining similarity using scoring function values that are obtained from distance values between subregions by applying the vibration method to all of the subregions of the registration image information or by applying the vibration method to some subregions of the registration image information.
The matching step includes the step of finding a correspondence between the registration image information and the authentication image information, the step of searching all the subregions of the registration image information or some subregions of the registration image information for a subregion of authentication image information corresponding to the registration subregion based on the correspondence, and the step of determining similarity using a scoring function value that is obtained from the distances between subregions by applying the vibration method to the subregion of the registration image information and the subregion of the authentication image information.
The matching step includes the step of finding a correspondence between the registration image information and the authentication image information, the step of searching all subregions of the registration image information or some subregions of the registration image information for a subregion of the authentication image information corresponding to the registration subregion based on the correspondence, and the step of determining similarity using a scoring function value that is obtained from distance values between the subregions by comparing the subregions of the registration image information with the subregions of the authentication image information.
The step of finding the correspondence comprises the step of selecting some subregions from among the subregions of the registration image information, and the step of applying the vibration method to the respective selected registration subregions and the subregions of the authentication image information corresponding to the selected registration subregions, thereby obtaining the location correspondence between the subregions of authentication image information, which represents a smallest distance value for the selected registration subregions .
The correspondence is any one of the translation in angular direction, the translation in radial direction, and the translation in both angular and radial directions.
Furthermore, the present invention provides a deformation-resilient iris recognition method including the step of acquiring an image of an eye, including an iris, using a camera so as to acquire a registration or authentication image for verification of identity, the step of extracting a region of the iris from the image of the eye, the step of dividing the extracted iris region into a plurality of subregions and searching for iris characteristics of respective subregions, and the matching step of determining similarity between registration iris characteristics and authentication iris characteristics, wherein an iris region division method of the step of finding the iris characteristics and the step of performing matching is performed through basic region division of dividing an iris region into a plurality of subregions and overlap region division of dividing the iris region into a plurality of subregions including the boundary lines of the basic region division.
The step of finding the characteristics of the iris and the step of performing matching include the step of using a scoring function value, obtained from distance values obtained through the basic region division and the overlap region division, as a value for determining the similarity.
Furthermore, the present invention provides a deformation-resilient iris recognition method including the step of acquiring an image of an eye, including an iris, using a camera so as to acquire a registration or authentication image for verification of identity, the step of extracting a region of the iris from the image of the eye, the step of dividing the extracted iris region into a plurality of subregions and searching for iris characteristics of respective subregions, and the matching step of determining similarity between registration iris characteristics and authentication iris characteristics, wherein the step of finding the iris characteristics and the step of performing matching include the step of dividing an iris region of the registration or authentication image into a plurality of subregions and determining whether occlusion has occurred in the respective subregions; the step of obtaining distances between extracted iris characteristics of the registration subregions and extracted iris characteristics of the authentication subregions for common subregions in which occlusion has not occurred and obtaining a representative value of the distance values; the step of assigning the representative value to the subregions in which occlusion has occurred; and the step of using a scoring function value, obtained from distance values given to the common subregions in which occlusion has not occurred and the subregions in which occlusion has occurred as a value for determining similarity.
The representative value is any one of an average value, a median value and a mode value of the distance values .
The scoring function value is any one of a weighted average, a weighted geometric average and a weighted square average square root.
Reference should now be made to the drawings, in which the same reference numerals are used throughout the different drawings to designate the same or similar components.
The construction and operation of a preferred embodiment of the present invention will be described in detail with reference to the accompanying drawings below. The deformation-resilient iris recognition method according to an embodiment of the present invention includes an eye image acquisition step, an iris region extraction step, a region division and iris characteristic finding step, and a matching step. Since the eye image acquisition step, the operation of the iris region extraction step and the region division and iris characteristic finding step are the same as those of the prior art, detailed descriptions thereof are omitted here to avoid redundancy of description, and a detailed description will be given with emphasis on the newly added steps.
The matching step of the deformation-resilient iris recognition method according to the present invention uses a vibration method that compares a registration vibration region with an authentication vibration region.
The registration vibration region includes the subregion of the registration image information and an expanded region adjacent to the subregion, and the authentication vibration region includes the subregion of the authentication image information existing at a location corresponding to that of the subregion of the registration image information and an expanded region adjacent to that subregion. The term "corresponding location" refers to a location at which Aij of image A corresponds to Bij of image B when the images A and B are compared with each other, as illustrated in Fig. 6. The expanded region and the vibration method will be described in detail below.
As illustrated in Fig. 6, a region transformed into a polar coordinate system is divided into a plurality of unit regions (hereinafter referred to as a 'cell' or a 'subregion'). For example, in Fig. 5, cell Aij may be partitioned off from image A, and cell Bij may be partitioned off from image B at a location corresponding to the location of the cell Aij. Furthermore, not only the cell Aij but also its expanded regions may be defined in the image A, and not only the cell Bij but also its expanded regions may be defined in the image B. With respect to a specific cell, its expanded regions, each including the specific cell, are referred to as the 'vibration region' of the specific cell.
For example, a vibration region includes a set of expanded regions, each including the cell Bij. An expanded region may be composed of regions that respectively correspond to rectangular regions centered at the four corners of the cell Bij. If the boundary of the expanded region is further expanded, the vibration region of Bij may be configured to include 8 more rectangular regions.
Accordingly, the vibration method is a technique in which, for the compared images A and B, the cell Aij of the image A and the cell Bij of the image B, which are positioned at corresponding locations, are not compared directly with each other; instead, a specific cell of one of the images A and B is compared with the vibration region of the other image that is positioned around a location corresponding to that of the specific cell.

The matching step of a deformation-resilient iris recognition method according to another embodiment of the present invention is the step of comparing a subregion of the registration image information with the vibration region of the subregion of the authentication image information that is positioned at a location corresponding to that of the former subregion.
The matching step of a deformation-resilient iris recognition method according to still another embodiment of the present invention is the step of comparing a subregion of the authentication image information with the vibration region of the subregion of the registration image information that is positioned at a location corresponding to that of the former subregion.
A technique in which, in the use of the above-described vibration method, when two images are compared with each other, the vibration region of a cell in one of the images is not defined, but the vibration region of the cell that is positioned in the other image at the corresponding location is defined, and the cell in the former image is then compared with the vibration region of the cell in the latter image, is referred to as an "asymmetric vibration method."
For example, as illustrated in Fig. 6, the distances between a designated cell Aij of image A and the vibration regions of cell Bij, placed at a location corresponding to that of the cell Aij, are calculated, and the smallest of the distance values is determined to be the distance between the designated cell Aij and the corresponding cell Bij.
In this process, the image A may be a registration image and the image B may be an authentication image, and vice versa.
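As an illustrative sketch only (not the claimed implementation), the asymmetric vibration method above can be expressed in code. The helper names (`cell_distance`, `extract_cell`, `vibration_distance`) and the use of a mean absolute gray-level difference as the cell distance are assumptions for illustration; the document's actual distance measure is the frequency-domain function d described later.

```python
def cell_distance(cell_a, cell_b):
    """Mean absolute gray-level difference between two equally sized cells
    (a stand-in for the frequency-domain distance d described later)."""
    n = len(cell_a) * len(cell_a[0])
    return sum(abs(a - b) for ra, rb in zip(cell_a, cell_b)
               for a, b in zip(ra, rb)) / n

def extract_cell(image, top, left, h, w):
    """Cut an h-by-w cell out of a 2-D gray image (list of rows),
    wrapping in the lateral (angular) direction only."""
    cols = len(image[0])
    return [[image[top + r][(left + c) % cols] for c in range(w)]
            for r in range(h)]

def vibration_distance(cell_a, image_b, top, left, offsets):
    """D(A, B): the minimum, over the vibration region of the corresponding
    cell of image B, of the distance between cell A and the shifted cell."""
    h, w = len(cell_a), len(cell_a[0])
    best = float("inf")
    for dx, dy in offsets:          # (0, 0) is the corresponding cell itself
        if 0 <= top + dy and top + dy + h <= len(image_b):
            shifted = extract_cell(image_b, top + dy, left + dx, h, w)
            best = min(best, cell_distance(cell_a, shifted))
    return best
```

Here the offsets are (dx, dy) ordered pairs in the (angular, radial) convention of Fig. 9; the angular index wraps around because the polar image is periodic in angle.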
The matching step of a deformation-resilient iris recognition method according to still another embodiment of the present invention is the step of comparing a subregion of the registration image information with the vibration region of the subregion of the authentication image information that is positioned at a location corresponding to that of the former subregion and comparing a subregion of the authentication image information with the vibration region of the subregion of the registration image information that is positioned at a location corresponding to that of the former subregion.
For example, as illustrated in Fig. 7, a vibration method may be a symmetric vibration method of obtaining the distances between cell Aij of image A and the vibration regions of the corresponding cell Bij, obtaining the distances between cell Bij of image B and the vibration regions of the corresponding cell Aij, and determining the shortest of the obtained distances to be the distance between the subregions of the two compared images.

In the matching step of a deformation-resilient iris recognition method according to another embodiment of the present invention, a means for measuring the distance between the two compared subregions is obtained using the following function:
D(A,B) = min_{B'∈v(B)} d(φ(A), φ(B'))
where A is a subregion of registration image information and B is a subregion of authentication image information, or where A is a subregion of the authentication image information and B is a subregion of the registration image information.
In the matching step of a deformation-resilient iris recognition method according to another embodiment of the present invention, the distance between two compared subregions is determined to be the smaller value between the value of the following function:
D(A,B) = min_{B'∈v(B)} d(φ(A), φ(B'))
and the value of the following function:
D(B,A) = min_{A'∈v(A)} d(φ(B), φ(A'))
where A is a subregion of the registration image information and B is a subregion of the authentication image information, or where A is a subregion of the authentication image information and B is a subregion of the registration image information. In a deformation-resilient iris recognition method according to still another embodiment of the present invention, an equation for obtaining the distance between two subregions for a distance measuring means is obtained from the following function:
(The function is reproduced only as an image in the original document.)
where the variables used in the function are the following two progressions:
{a_k} = φ(A) and {b_k} = φ(B)
which are obtained by performing frequency transform on the images of the two subregions.
A method of obtaining the distance using the vibration method of a deformation-resilient iris recognition method according to an embodiment of the present invention will be described in detail below. First, an image transformed into a polar coordinate system is determined to include 256×64 pixels, as illustrated in Fig. 8, where the lateral direction corresponds to angular coordinates and the vertical direction corresponds to radial coordinates. When the transformed image is divided by 16 in the lateral direction and by 8 in the vertical direction, a total of 128 cells are obtained. In this case, each of the cells includes 16×8 pixels, and each of the pixels has a gray value ranging from 0 to 255. For example, Fig. 2 is a graphic showing an image of an eye, and the gray data thereof in 3D form.
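The 256×64 partition just described can be sketched as follows; the function name and the dictionary representation of the cells are illustrative assumptions.

```python
# Partition of the 256-by-64 polar-coordinate iris image described above
# into 16 cells in the angular direction and 8 in the radial direction,
# giving the 128 cells of 16-by-8 pixels each mentioned in the text.

CELL_W, CELL_H = 16, 8          # pixels per cell (angular x radial)
IMG_W, IMG_H = 256, 64          # polar image size (angular x radial)

def split_into_cells(image):
    """image is a list of IMG_H rows of IMG_W gray values (0..255).
    Returns a dict mapping (i, j) cell indices to 8-by-16 pixel blocks."""
    cells = {}
    for i in range(IMG_H // CELL_H):        # radial cell index
        for j in range(IMG_W // CELL_W):    # angular cell index
            cells[(i, j)] = [row[j * CELL_W:(j + 1) * CELL_W]
                             for row in image[i * CELL_H:(i + 1) * CELL_H]]
    return cells
```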
After the region of the image has been divided as described above, a region to which the vibration method is applied is defined for each unit cell. As illustrated in Fig. 9, examples of expanded regions (cell regions indicated by dotted lines) are shown for each unit cell.
Furthermore, the subregions in the vibration region may be represented by ordered pairs, and the ordered pairs may be indicated as shown in the right view of Fig. 9. For example, the ordered pair (2, 3) implies that a corresponding point moves by 2 pixels in the lateral direction and by 3 pixels in the vertical direction.
As a result, the expansion shown in the right view of Fig. 9 is achieved by determining a total of 12 ordered pairs ranging from (-16, -8) to (16, 8). In this case, a vibration region composed of one corresponding cell (0, 0) and 12 expanded regions is generated.
A fact to which attention must be paid in defining such a vibration region is that an iris image in the polar coordinate system is periodic in the lateral direction (angular direction), but there is no periodicity in the vertical direction (radial direction). Accordingly, unlike region (3) of Fig. 9, in which the full vibration region exists, expansion based on the ordered pairs (-16, -8), (0, -8), (16, -8), (-8, -4) and (8, -4) does not occur in region (1) and region (2).
Meanwhile, the ordered pairs (-16, 0), (-16, 8) and (-8, 4) in region (1) and the ordered pairs (16, -8), (8, -4), (16, 0), (8, 4) and (16, 8) in region (4) are arranged in opposite directions due to the periodicity.
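A minimal sketch of generating the 13 ordered pairs and discarding the radially invalid ones, as described for regions (1) to (4) of Fig. 9; the exact offset list and the helper name are assumptions based on the example values in the text.

```python
# Angular (lateral) shifts are always usable because the polar image is
# periodic in angle; radial (vertical) shifts that would leave the image
# are discarded. The 13 ordered pairs, including (0, 0), follow the
# example in the text.

OFFSETS = [(0, 0),
           (-8, -4), (8, -4), (-8, 4), (8, 4),          # corner-centered cells
           (-16, -8), (0, -8), (16, -8),                # outer ring
           (-16, 0), (16, 0),
           (-16, 8), (0, 8), (16, 8)]

def valid_offsets(top, cell_h, img_h):
    """Keep only offsets whose radial shift stays inside the image;
    angular shifts are always valid because of periodicity."""
    kept = []
    for dx, dy in OFFSETS:      # dx: angular shift, dy: radial shift
        if 0 <= top + dy and top + dy + cell_h <= img_h:
            kept.append((dx, dy))
    return kept
```

For a cell in the top row (top = 0), the five upward offsets named in the text drop out; for a middle cell all 13 survive.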
A method of obtaining the distance between the cell regions of the two images, which are defined as described above, will be described below.
If two cells for which a distance is desired to be obtained are respectively A and B, and φ is a function that performs frequency transform on cells, the following progressions are formed for the cells A and B:
{a_k} = φ(A) and {b_k} = φ(B)
Meanwhile, a function d having two progressions as variables is defined as follows:
(The definition of the function d is reproduced only as an image in the original document.)
v(B) is defined as the set of vibration regions for cell B.
In this case, the distance D(A,B) between two cells can be represented using the following equation: D(A,B) = min_{B'∈v(B)} d(φ(A), φ(B'))
In the two compared cells, A is a subregion of registration image information and B is a subregion of authentication image information, or A is a subregion of authentication image information and B is a subregion of registration image information.
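Since the functions φ and d appear only as images in the original document, the following sketch substitutes common choices: φ returns the magnitudes of a few low-frequency 2-D DFT coefficients of a cell, and d is the sum of absolute differences of the two progressions. These concrete forms are assumptions, not the document's formulas.

```python
import cmath

def phi(cell, n_freq=4):
    """Frequency transform of a cell: magnitudes of the first
    n_freq x n_freq 2-D DFT coefficients, playing the role of the
    progression {a_k} in the text."""
    rows, cols = len(cell), len(cell[0])
    coeffs = []
    for u in range(n_freq):
        for v in range(n_freq):
            s = sum(cell[y][x] * cmath.exp(-2j * cmath.pi *
                                           (u * y / rows + v * x / cols))
                    for y in range(rows) for x in range(cols))
            coeffs.append(abs(s))
    return coeffs

def d(prog_a, prog_b):
    """Distance between two progressions {a_k} and {b_k}:
    sum of absolute coefficient differences (an assumed form)."""
    return sum(abs(a - b) for a, b in zip(prog_a, prog_b))
```

A cell compared with itself gives distance 0, and different textures give a positive distance, which is all the vibration method needs from d.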
The matching step of a deformation-resilient iris recognition method according to still another embodiment of the present invention is the step of determining similarity using a scoring function value that is obtained from distance values between subregions by applying the vibration method to all the subregions of the registration image information or by applying the vibration method to some subregions of the registration image information.
Accordingly, the range to which the vibration method is applied may be all of the subregions of the registration image information or only some of them, and similarity is determined using a scoring function value obtained from the resulting distance values between the subregions.
It is preferable that the scoring function value be any one of a weighted average, a weighted geometric average, and a weighted square average square root. The matching step of a deformation-resilient iris recognition method according to still another embodiment of the present invention includes the step of finding the correspondence between the registration image information and the authentication image information, the step of searching all the subregions of the registration image information or some subregions of the registration image information for the subregion of the authentication image information corresponding to the registration subregion based on the correspondence, and the step of determining similarity using a scoring function value that is obtained from the distances between the subregions by applying the vibration method to the subregion of the registration image information and the subregion of the authentication image information. That is, the matching step includes the step of finding the correspondence, the step of searching for the subregion, and the step of applying the vibration method.
The step of finding the correspondence is the step of selecting some subregions from among the subregions of the registration image information and applying the vibration method to the respective selected registration subregions and the subregions of authentication image information corresponding to the selected registration subregions, thereby obtaining the location correspondence between the subregions of authentication image information that indicates the smallest distance value for the selected registration subregions.
The step of searching for the subregion is the step of searching all the subregions of the registration image information or some subregions of the registration image information for the subregion of authentication image information to which the correspondence has been applied.
The step of applying the vibration method is the step of determining similarity using a scoring function value that is obtained from the distances between subregions by applying the vibration method to the subregion of the registration image information and the subregion of the authentication image information.
For example, as shown in Fig. 10, the correspondence may be translation that indicates an error occurring during the setting of a boundary between an iris region and another region (pupil or sclera) or an error occurring because of the deformation of the iris .
Accordingly, when the vibration method is applied to the vibration region of the subregion of authentication image information corresponding to the registration subregion based on the correspondence, vibration is performed over a narrower region, thereby reducing the overall time required for authentication.
The matching step of a deformation-resilient iris recognition method according to still another embodiment of the present invention includes the step of finding the correspondence between the registration image information and the authentication image information, the step of searching all the subregions of the registration image information or some subregions of the registration image information for the subregion of the authentication image information corresponding to the registration subregion based on the correspondence, and the step of determining similarity using a scoring function value that is obtained from the distance values between the subregions by comparing the subregions of the registration image information with the subregions of the authentication image information.
Accordingly, the vibration method is not used at the step of performing comparison on the subregions of the authentication image information after the correspondence has been found, therefore the overall time required for authentication can be further reduced.
The step of finding the correspondence of a deformation-resilient iris recognition method according to still another embodiment of the present invention includes the step of selecting some subregions from among the subregions of the registration image information and applying the vibration method to the respective selected registration subregions and the subregions of authentication image information corresponding to the selected subregions, thereby obtaining the location correspondence between the subregions of authentication image information that indicate the smallest distance value for the selected registration subregions.
Referring to Fig. 10, an embodiment of the step of finding the correspondence will be described below.
As shown in Fig. 10, four cells are selected from among the respective subregions of the registration image information, and the respective cells are compared with the vibration regions of authentication image information corresponding to the selected subregions, thereby searching for the subregion of authentication image information that indicates the smallest distance.
An average value, obtained by acquiring information about the shift between the subregion of the authentication image information and the subregion of the registration image information that represents the smallest distance value for each of the four selected cells and then averaging the pieces of information, may be used as the correspondence. Accordingly, the portions acquired by applying the correspondence to the subregions of the registration image information are the portions indicated by dotted lines in Fig. 10, that is, the corresponding subregions of the authentication image.

It is preferred that the correspondence of a deformation-resilient iris recognition method according to still another embodiment of the present invention be any one of the translation in the angular direction, the translation in the radial direction, and the translation in both the angular and radial directions.

As described above, the vibration method of the deformation-resilient iris recognition method according to the embodiments of the present invention functions to prevent the recognition rate from being reduced due to an error occurring during the setting of a boundary between an iris region and another region (pupil or sclera) or an error occurring because of the deformation of an iris.
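The correspondence-finding step described above (a few selected cells, the best offset per cell, averaged into one translation) might be sketched as follows; `find_best_offset` is a hypothetical helper standing in for the per-cell vibration search.

```python
def find_correspondence(selected_cells, find_best_offset):
    """selected_cells: cell indices chosen from the registration image
    (four cells in the example of Fig. 10).
    find_best_offset: callable returning, for a cell index, the (dx, dy)
    shift whose vibration-region distance is smallest.
    Returns the average (dx, dy) translation over the selected cells."""
    shifts = [find_best_offset(ij) for ij in selected_cells]
    n = len(shifts)
    avg_dx = sum(dx for dx, _ in shifts) / n
    avg_dy = sum(dy for _, dy in shifts) / n
    return (avg_dx, avg_dy)
```

The averaged translation is then applied to every registration subregion before matching, so that any subsequent vibration search can cover a narrower region.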
A deformation-resilient iris recognition method according to an embodiment of the present invention includes an eye image acquisition step, an iris region extraction step, a region division and iris characteristic finding step, and a matching step. Since the eye image acquisition step and the iris region extraction step are the same as those of the prior art, detailed descriptions thereof are omitted here to avoid redundancy of description, and a detailed description is given with emphasis on the newly added steps.
The step of finding the characteristics of an iris and the step of performing matching are respectively the steps of finding the characteristics of the iris and measuring similarity through the basic region division of dividing an iris region into a plurality of subregions, and the overlap region division of dividing the iris region into a plurality of subregions including the boundary lines of the basic region division.
That is, as illustrated in Fig. 11, similarity is measured by applying the step of finding iris characteristics and the step of performing matching to the existing unit subregions (hereinafter referred to as 'basic subregions'), and similarity is also measured by applying the step of finding iris characteristics and the step of performing matching to overlap subregions that are composed of regions including the boundaries of the basic regions (hereinafter referred to as 'overlap subregions').
Accordingly, an amount of information greater than the amount of information that can be obtained through conventional basic region division can be obtained, thereby increasing the reliability of obtained results.
That is, an overlap region division method using the boundaries between the subregions of the basic division is employed, so that the loss of image variation pattern information around the boundaries between regions is prevented and additional information is acquired from that image variation pattern information, thereby improving reliability. Furthermore, when frequency transform is performed on the cells into which an image is divided so as to compare two images, not all of the frequency information is used; rather, the frequency band that provides important information varies with the size of a cell. Accordingly, unique frequency information that varies with the size of a cell can be additionally obtained.
A detailed description is now given, taking as examples a region including basic subregions (the left view) and a region including overlap subregions (the right view), as illustrated in Fig. 12.
The basic subregions shown in the left view of Fig. 12 allow the repetition of the first pattern (reproduced only as an image in the original document) to be recognized, and information about the second pattern (also reproduced only as an image in the original document) can be additionally acquired using the overlap subregions shown in the right view of Fig. 12.
That is, the image information of image variation pattern information around boundaries that is lost during the frequency transform of respective cells in the basic division can be recovered using the overlap subregions in the overlap division, therefore the more accurate measurement of similarity can be performed.
Meanwhile, it is not necessary to stereotype the additional regions as shown in Fig. 12; the additional regions may be any regions that include the boundaries of existing subregions. For example, it is possible to combine adjacent subregions of the basic division into an overlap region using one of various methods, such as a method of combining four cells used in the basic region division into a single overlap subregion, that is, a method of combining A11, A12, A21 and A22 into B11, as shown in the upper embodiment of Fig. 13, and it is possible to form overlap subregions as shown in the lower embodiment of Fig. 13. Accordingly, when the size of a cell is doubled in the radial direction, frequency information corresponding to 1/2 of the previous smallest frequency can be additionally acquired. This principle can be used in the angular direction in the same manner. Furthermore, it may be possible to form a scoring function by assigning appropriate weights after calculating distances by comparing cells formed through the basic region division with cells formed through the overlap region division.

The step of finding the characteristics of the iris and the step of performing matching include the step of using a scoring function value, obtained from distance values obtained through the basic region division and the overlap region division, as a value for determining the similarity.
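The combination of four adjacent basic cells into one overlap cell (A11, A12, A21 and A22 into B11 in Fig. 13) can be sketched as follows; the representation of a cell as a list of pixel rows is an illustrative assumption.

```python
def combine_cells(cells, i, j):
    """Merge the basic cells (i, j), (i, j+1), (i+1, j), (i+1, j+1) into
    one overlap cell, doubling the cell size in both directions so that
    lower-frequency information becomes available; each cell is a list of
    pixel rows."""
    top = [ra + rb for ra, rb in zip(cells[(i, j)], cells[(i, j + 1)])]
    bottom = [ra + rb for ra, rb in zip(cells[(i + 1, j)],
                                        cells[(i + 1, j + 1)])]
    return top + bottom
```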
It is preferred that the scoring function value be any one of a weighted average, a weighted geometric average and a weighted square average square root.
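The three named scoring functions can be written, under the assumption of per-cell weights applied to the per-cell distance values, as:

```python
import math

def weighted_average(dists, weights):
    """Weighted arithmetic average of the per-cell distances."""
    return sum(w * x for w, x in zip(weights, dists)) / sum(weights)

def weighted_geometric_average(dists, weights):
    """Weighted geometric average (distances assumed positive)."""
    total = sum(weights)
    return math.exp(sum(w * math.log(x)
                        for w, x in zip(weights, dists)) / total)

def weighted_rms(dists, weights):
    """Weighted square average square root (weighted root mean square)."""
    return math.sqrt(sum(w * x * x for w, x in zip(weights, dists))
                     / sum(weights))
```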
A deformation-resilient iris recognition method according to an embodiment of the present invention includes an eye image acquisition step, an iris region extraction step, a region division and iris characteristic finding step, and a matching step. Since the eye image acquisition step and the iris region extraction step are the same as those of the prior art, detailed descriptions thereof are omitted here to avoid redundancy of description, and a detailed description is given with emphasis on the newly added steps.
The iris characteristic finding step and matching step of the deformation-resilient iris recognition method according to the present invention include the step of comparing iris characteristics, extracted from the subregions, with each other and performing determination using an occlusion evaluation method.
The occlusion evaluation method includes the step of determining whether occlusion has occurred; the step of obtaining the representative value of common subregions; the step of assigning the representative value to a subregion in which occlusion has occurred; and the step of calculating the scoring function value of all the subregions and determining similarity.
The step of determining whether occlusion has occurred according to the occlusion evaluation method is the step of dividing the iris region of a registration or authentication image into a plurality of subregions and determining whether occlusion has occurred in the respective subregions. This step eliminates subregions in which occlusion, one of the factors that most frequently generates errors in iris recognition, has occurred, or selectively eliminates subregions in which partial occlusion has occurred according to the extent of the occlusion.
Images are compared with each other after cells in which a certain percentage or more of occlusion has occurred are eliminated from the respective cells of the iris region of the registration or authentication image in which occlusion has occurred, as shown in Fig. 14. Fig. 15 shows the situation in which an image of the iris region is converted into the polar coordinate system and the cells (indicated by the mark "X") of regions in which 50 percent or more occlusion has occurred are eliminated. The step of obtaining the representative value of common subregions according to the occlusion evaluation method is the step of obtaining the distances between the extracted iris characteristics of the registration subregions and those of the authentication subregions for the common subregions in which occlusion has not occurred, and obtaining the representative value of those distance values.
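The per-cell elimination can be sketched in Python (a minimal illustration; the function name and the 0.5 default threshold are taken from the 50-percent example above, while the grid sizes are assumptions). The input is a binary mask over the polar-coordinate iris image marking occluded pixels:

```python
import numpy as np

def occluded_cells(occlusion_mask, n_rad, n_ang, threshold=0.5):
    """Return a boolean n_rad x n_ang grid: True where the fraction of
    occluded pixels in a cell meets the threshold, i.e. the cell is
    eliminated from comparison (the cells marked "X" in Fig. 15)."""
    excluded = np.zeros((n_rad, n_ang), dtype=bool)
    rows = np.array_split(occlusion_mask, n_rad, axis=0)
    for i, r in enumerate(rows):
        cols = np.array_split(r, n_ang, axis=1)
        for j, cell in enumerate(cols):
            excluded[i, j] = cell.mean() >= threshold
    return excluded
```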
For example, as shown in Fig. 16, image A may be a registration image and image B may be an authentication image. As shown in Fig. 16, since the region in which occlusion occurs may vary at the time of registration or authentication, distances are calculated for common cells, other than subregions in which occlusion has occurred, when two images are compared with each other.
Fig. 17 shows the common cell regions of the two images that are shown in Fig. 16.
A representative value is obtained from the distance values that are calculated from the common cell regions.
It is preferred that an average value, a median value, or a mode value be used as the representative value of the deformation-resilient iris recognition method according to still another embodiment of the present invention.
The step of assigning a representative value to subregions in which occlusion has occurred according to the occlusion evaluation method is the step of assigning the representative value to the cells, eliminated due to the occlusion, as the distance value.
The step of calculating the scoring function value of all the subregions and determining similarity according to the occlusion evaluation method is the step of determining the average value of the distance values, given to common subregions in which occlusion has not occurred and subregions in which occlusion has occurred, to be a value for determining similarity. It is preferred that the scoring function value of the deformation-resilient iris recognition method according to another embodiment of the present invention be any one of a weighted average, a weighted geometric average, and a weighted square average square root.
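The occlusion-evaluation steps above can be sketched in Python (a minimal illustration; the function name `occlusion_score` and the default use of the median are assumptions, since the text permits an average, median, or mode as the representative value, and a plain average is used here as the scoring function):

```python
import statistics

def occlusion_score(distances, occluded, representative=statistics.median):
    """distances: per-cell distance for every cell of the m x n grid;
    occluded: parallel booleans, True where occlusion occurred in either image.

    1) keep the common (non-occluded) cells,
    2) take their representative value,
    3) assign it to the occluded cells,
    4) score over all m x n cells."""
    common = [d for d, occ in zip(distances, occluded) if not occ]
    rep = representative(common)
    filled = [rep if occ else d for d, occ in zip(distances, occluded)]
    return sum(filled) / len(filled)
```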
A method of calculating a value using a geometric average as the prior-art scoring function and a method of obtaining a value using a geometric average as a scoring function according to an embodiment of the present invention are described in detail below as examples of obtaining a value for determining similarity. If each of the two compared images A and B is a region composed of m×n cells, the number of cells, exclusive of cells in which occlusion has occurred, is N1, and the distances measured for the N1 cells are respectively d1, d2, …, and dN1, the following scoring function F can be obtained:

F = (d1 · d2 · … · dN1)^(1/N1)
In the above method, if each of the two compared images C and D is a region composed of m×n cells, the number of cells, exclusive of cells in which occlusion has occurred, is N2, and the distances measured for the N2 cells are respectively d1, d2, …, and dN2, the following scoring function F can be obtained:

F = (d1 · d2 · … · dN2)^(1/N2)
However, there is a problem in that, in the above-described scoring function, the number of variables varies with the compared images (in the case where N1 differs from N2), in which case it is impossible to obtain stable scoring values.
A method of calculating a scoring function according to an embodiment of the present invention will be described in detail below.
If each of two selected, compared images A and B is a region composed of m×n cells, the number of cells, exclusive of cells in which occlusion has occurred, is N, and the distances measured for the N cells are respectively d1, d2, …, and dN, a representative value is obtained from the measured values. Any one of an average value, a median value, or a mode value may be used.
If the representative value R is uniformly assigned to the (m×n − N) cell regions in which occlusion has occurred, the following scoring function F can be obtained:

F = (d1 · d2 · … · dN · R^(m×n − N))^(1/(m×n))
Accordingly, since a uniform number of values can be assigned to all of the cells regardless of the selected images using the method of calculating a scoring function according to the above-described embodiment of the present invention, more stable data can be obtained.
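The contrast between the two scoring functions can be made concrete in Python (a minimal sketch; the function names and the choice of the median as the representative value are assumptions, since the text allows an average, median, or mode):

```python
import math
import statistics

def prior_geometric_score(distances):
    # prior art: geometric mean over only the N non-occluded cells,
    # so the number of factors varies from comparison to comparison
    n = len(distances)
    return math.prod(distances) ** (1.0 / n)

def stable_geometric_score(distances, total_cells):
    # embodiment: fill the (m*n - N) occluded cells with the representative
    # value, then take the geometric mean over all m*n cells, so the
    # number of factors is fixed regardless of how much occlusion occurred
    rep = statistics.median(distances)
    filled = distances + [rep] * (total_cells - len(distances))
    return math.prod(filled) ** (1.0 / total_cells)
```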
It is apparent to those skilled in the art to which the present invention pertains that the present invention is not limited to the embodiments, but various modifications/variations can be made without departing from the technical gist of the present invention.
Advantageous Effects
As described above, in accordance with the deformation-resilient iris recognition method according to the present invention, the matching step of determining similarity between iris characteristics includes the step of comparing two images by comparing a specific region not only with the corresponding region but also with other regions, so that there is the advantage of preventing the recognition rate from being reduced by an error occurring during the setting of the boundary between the iris region and another region (the pupil or sclera) or by an error caused by the deformation of the iris.
Description of Drawings
Fig. 1 is a sectional view illustrating a human iris;
Fig. 2 shows an image of the eye and its gray-level data in 3-D graphic form;
Fig. 3 is a graph illustrating FAR and FRR, which are evaluation indices for a biometric recognition system;
Fig. 4 is a set of schematic diagrams illustrating the situation in which a characteristic shape existing within an iris moves as the size of the pupil varies;
Fig. 5 is a set of photos showing the situation in which a characteristic shape existing within an iris moves as the size of the pupil varies;
Fig. 6 is a set of schematic diagrams illustrating an expanded region according to an embodiment of the present invention;
Fig. 7 is a set of schematic diagrams illustrating an expanded region according to another embodiment of the present invention;
Fig. 8 is a schematic diagram illustrating a method of dividing an image converted into the polar coordinate system;
Fig. 9 is a schematic diagram showing examples of the expanded regions of respective unit cells;
Fig. 10 is a schematic diagram showing an example of the correspondence between selected subregions;
Fig. 11 is a set of schematic diagrams illustrating basic subregions and overlap subregions in a deformation-resilient iris recognition method according to an embodiment of the present invention;
Fig. 12 is a set of schematic diagrams illustrating a method of additionally acquiring image information using overlap subregions;
Fig. 13 is a set of schematic diagrams illustrating basic subregions and overlap subregions in a deformation-resilient iris recognition method according to another embodiment of the present invention;
Fig. 14 is a schematic diagram illustrating the iris region of a registration or authentication image in which occlusion has occurred;
Fig. 15 is a set of schematic diagrams showing the situation in which the cells of subregions in which 50 percent or more occlusion has occurred are excluded from an image of an iris region;
Fig. 16 is a set of schematic diagrams showing the patterns of occlusion that are generated in two images that are compared with each other so as to perform authentication; and
Fig. 17 is a schematic diagram showing the common cell regions of the two images shown in Fig. 16.

Claims

1. A deformation-resilient iris recognition method including the step of acquiring an image of an eye, including an iris, using a camera so as to acquire a registration or authentication image for verification of identity, the step of extracting a region of the iris from the image of the eye, the step of dividing the extracted iris region into a plurality of subregions and searching for iris characteristics of respective subregions, and the matching step of determining similarity between registration iris characteristics and authentication iris characteristics, wherein the matching step uses a vibration method of comparing a subregion of registration image information and a registration vibration region, including the subregion and an expanded region of the subregion, with an authentication vibration region, including a subregion of authentication image information, located at a location corresponding to a location of the subregion of the registration image information, and an expanded region of the latter subregion.
2. The deformation-resilient iris recognition method as set forth in claim 1, wherein the matching step is the step of comparing the subregion of the registration image information with a vibration region of a subregion of the authentication image information that is positioned at a location corresponding to a location of the former subregion.
3. The deformation-resilient iris recognition method as set forth in claim 1, wherein the matching step is the step of comparing a subregion of the authentication image information with a vibration region of a subregion of the registration image information that is positioned at a location corresponding to a location of the former subregion.
4. The deformation-resilient iris recognition method as set forth in claim 1, wherein the matching step is the step of comparing a subregion of the registration image information with a vibration region of a subregion of the authentication image information that is positioned at a location corresponding to a location of the former subregion, and comparing a subregion of the authentication image information with a vibration region of a subregion of the registration image information that is positioned at a location corresponding to a location of the former subregion.
5. The deformation-resilient iris recognition method as set forth in claim 2 or 3, wherein a means for measuring a distance between the two compared subregions is obtained using the following function:
D(A, B) = min_{B′ ∈ V(B)} d(φ(A), φ(B′))

where A is a subregion of the registration image information and B is a subregion of the authentication image information, or where A is a subregion of the authentication image information and B is a subregion of the registration image information.
6. The deformation-resilient iris recognition method as set forth in claim 4, wherein a distance between the two compared subregions is determined to be a smaller value between a value of the following function:
D(B, A) = min_{A′ ∈ V(A)} d(φ(B), φ(A′))

and a value of the following function:

D(A, B) = min_{B′ ∈ V(B)} d(φ(A), φ(B′))

where A is a subregion of the registration image information and B is a subregion of the authentication image information, or where A is a subregion of the authentication image information and B is a subregion of the registration image information.
7. The deformation-resilient iris recognition method as set forth in claim 5 or 6, wherein an equation for obtaining the distance between two subregions for a distance measuring means is obtained from the following function:
d((a_k), (b_k)) = (Σ_{k=1}^{t} |a_k − b_k|^γ)^{1/γ}

where the variables used in the function are the following two progressions:

(a_1, a_2, …, a_t) and (b_1, b_2, …, b_t)

which are obtained by performing frequency transform on the images of the two subregions.
8. The deformation-resilient iris recognition method as set forth in claim 1, wherein the matching step is the step of determining similarity using scoring function values that are obtained from distance values between subregions by applying the vibration method to all of the subregions of the registration image information or by applying the vibration method to some subregions of the registration image information.
9. The deformation-resilient iris recognition method as set forth in claim 1, wherein the matching step comprises the step of finding correspondence between the registration image information and the authentication image information, the step of searching all the subregions of the registration image information or some subregions of the registration image information for a subregion of authentication image information corresponding to the registration subregion based on the correspondence, and the step of determining similarity using a scoring function value that is obtained from the distances between subregions by applying the vibration method to the subregion of the registration image information and the subregion of the authentication image information.
10. The deformation-resilient iris recognition method as set forth in claim 1, wherein the matching step comprises the step of finding correspondence between the registration image information and the authentication image information, the step of searching all subregions of the registration image information or some subregions of the registration image information for a subregion of the authentication image information corresponding to the registration subregion based on the correspondence, and the step of determining similarity using a scoring function value that is obtained from distance values between the subregions by comparing the subregions of the registration image information with the subregions of the authentication image information.
11. The deformation-resilient iris recognition method as set forth in claim 9 or 10, wherein the step of finding the correspondence comprises the step of selecting some subregions from among the subregions of the registration image information, and the step of applying the vibration method to the respective selected registration subregions and the subregions of the authentication image information corresponding to the selected registration subregions, thereby obtaining the location correspondence between the subregions of authentication image information, which represents a smallest distance value for the selected registration subregions.
12. The deformation-resilient iris recognition method as set forth in claim 11, wherein the correspondence is any one of translation in an angular direction, translation in a radial direction, and translation in both angular and radial directions .
13. A deformation-resilient iris recognition method comprising the step of acquiring an image of an eye, including an iris, using a camera so as to acquire a registration or authentication image for verification of identity, the step of extracting a region of the iris from the image of the eye, the step of dividing the extracted iris region into a plurality of subregions and searching for iris characteristics of respective subregions, and the matching step of determining similarity between registration iris characteristics and authentication iris characteristics, wherein an iris region division method of the step of finding the iris characteristics and the step of performing matching is performed through basic region division of dividing an iris region into a plurality of subregions and overlap region division of dividing the iris region into a plurality of subregions including the boundary lines of the basic region division.
14. The deformation-resilient iris recognition method as set forth in claim 13, wherein the step of finding the characteristics of the iris and the step of performing matching comprise the step of using a scoring function value, obtained from distance values obtained through the basic region division and the overlap region division, as a value for determining the similarity.
15. A deformation-resilient iris recognition method comprising the step of acquiring an image of an eye, including an iris, using a camera so as to acquire a registration or authentication image for verification of identity, the step of extracting a region of the iris from the image of the eye, the step of dividing the extracted iris region into a plurality of subregions and searching for iris characteristics of respective subregions, and the matching step of determining similarity between registration iris characteristics and authentication iris characteristics, wherein the step of finding the iris characteristics and the step of performing matching comprise the step of dividing an iris region of the registration or authentication image into a plurality of subregions and determining whether occlusion has occurred in the respective subregions; the step of obtaining distances between extracted iris characteristics of the registration subregions and extracted iris characteristics of the authentication subregions for common subregions in which occlusion has not occurred and obtaining a representative value of the distance values; the step of assigning the representative value to the subregions in which occlusion has occurred; and the step of using a scoring function value, obtained from distance values given to the common subregions in which occlusion has not occurred and the subregions in which occlusion has occurred, as a value for determining similarity.
16. The deformation-resilient iris recognition method as set forth in claim 15, wherein the representative value is any one of an average value, a median value and a mode value of the distance values.
17. The deformation-resilient iris recognition method as set forth in any one of claims 8, 9, 10, 14 and 15, wherein the scoring function value is any one of a weighted average, a weighted geometric average and a weighted square average square root.
PCT/KR2006/004630 2006-02-27 2006-11-07 Deformation-resilient iris recognition methods WO2007097510A1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
KR10-2006-0018930 2006-02-27
KR1020060018930A KR100786204B1 (en) 2006-02-27 2006-02-27 Deformation-resilient iris recognition methods

Publications (1)

Publication Number Publication Date
WO2007097510A1 (en) 2007-08-30

Family

ID=38437535

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/KR2006/004630 WO2007097510A1 (en) 2006-02-27 2006-11-07 Deformation-resilient iris recognition methods

Country Status (2)

Country Link
KR (1) KR100786204B1 (en)
WO (1) WO2007097510A1 (en)


Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR102483446B1 (en) 2022-06-14 2023-01-06 전병준 Universal milling machine device

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5291560A (en) * 1991-07-15 1994-03-01 Iri Scan Incorporated Biometric personal identification system based on iris analysis
US6526160B1 (en) * 1998-07-17 2003-02-25 Media Technology Corporation Iris information acquisition apparatus and iris identification apparatus
US6546121B1 (en) * 1998-03-05 2003-04-08 Oki Electric Industry Co., Ltd. Method and apparatus for identifying an iris


Cited By (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2009067738A1 (en) * 2007-11-27 2009-06-04 Wavefront Biometric Technologies Pty Limited Biometric authentication using the eye
US8718335B2 (en) 2007-11-29 2014-05-06 Wavefront Biometric Technologies Pty Limited Biometric authentication using the eye
US9704039B2 (en) 2007-11-29 2017-07-11 Wavefront Biometric Technologies Pty Limited Biometric authentication using the eye
US20130194407A1 (en) * 2010-05-13 2013-08-01 Dae-hoon Kim Apparatus and method for iris recognition using multiple iris templates
US9036872B2 (en) 2010-08-26 2015-05-19 Wavefront Biometric Technologies Pty Limited Biometric authentication using the eye
CN108304085A (en) * 2017-01-11 2018-07-20 神盾股份有限公司 Judge the method and electronic device of finger direction of displacement
CN108304085B (en) * 2017-01-11 2021-01-01 神盾股份有限公司 Method for judging finger displacement direction and electronic device
CN111553384A (en) * 2020-04-03 2020-08-18 上海聚虹光电科技有限公司 Matching method of multispectral image and single-spectral image

Also Published As

Publication number Publication date
KR100786204B1 (en) 2007-12-17
KR20070088982A (en) 2007-08-30


Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application
NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 06812466

Country of ref document: EP

Kind code of ref document: A1