WO2015056210A2 - Method of authenticating a person - Google Patents

Method of authenticating a person

Info

Publication number
WO2015056210A2
Authority
WO
WIPO (PCT)
Prior art keywords
image
eye
person
iris
distance
Application number
PCT/IB2014/065366
Other languages
French (fr)
Other versions
WO2015056210A3 (en)
Inventor
Yuko ROODT
Jonathan Anton CLAASSENS
Charl VAN DEVENTER
Original Assignee
University Of Johannesburg
Application filed by University Of Johannesburg
Publication of WO2015056210A2
Publication of WO2015056210A3

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10 Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/18 Eye characteristics, e.g. of the iris
    • G06V40/193 Preprocessing; Feature extraction
    • G06V40/14 Vascular patterns

Definitions

  • the invention relates to a method of authenticating a person and more specifically, but not exclusively, to eye biometric authentication using the conjunctiva of the eye.
  • United States patent number 6,665,426 in the name of West Virginia Research Corporation entitled “Method of biometric identification of an individual and associated apparatus” discloses a method of identifying an individual including the steps of impinging radiation on perilimbal structures of the eye and employing image information acquired from palisades and interpalisades for comparison with stored image information to determine if an identity match has been obtained.
  • the palisades and interpalisades are normally not visible unless the eye and/or eyelids and forebrows are placed in a certain position thereby minimizing the risk of inadvertent delivery of this information.
  • a plurality of palisades and interpalisades are monitored in terms of at least one physical characteristic such as width, length, angular orientation, shape, branching pattern, curvature and spacing, with the parameters being converted to binary data sequence form.
  • United States patent number 7,327,860 in the name of West Virginia University and The Curators of the University of Missouri entitled “Conjunctival Scans for Personal Identification” discloses obtaining conjunctival vascular images from an individual's eye for purposes of creating a multi-dimension, non-iris based biometric.
  • the biometric can be used for identification or authentication purposes.
  • Multi-dimensional correlation processing can be used to evaluate pairs of biometrics.
  • United States patent number 8,369,595 in the name of EyeVerify LLC entitled “Texture features for biometric authentication” discloses technologies relating to biometric authentication based on images of the eye. It discloses a first aspect embodied in methods that include obtaining one or more image regions from a first image of an eye. Each of the image regions may include a view of a respective portion of the white of the eye. The method may further include applying several distinct filters to each of the image regions to generate a plurality of respective descriptors for the region. The several distinct filters may include convolutional filters that are each configured to describe one or more aspects of an eye vasculature and in combination describe a visible eye vasculature in a feature space. A match score may be determined based on the generated descriptors and based on one or more descriptors associated with a second image of eye vasculature.
  • the method may include the step of detecting edges or gradients in the image portion.
  • the edges may be evaluated to locate partial circles in the image portion.
  • the partial circles may be located using the generalized Hough transform.
  • a modified form of the generalized Hough transform, wherein points along an edge vote in terms of direction and distance, to determine the radius or diameter of the partial circle, may also be used.
  • the method may include the step of extracting circular image portions about the features.
  • the method may include the step of generating descriptors of the features.
  • the descriptors may be stored as intensity values including a relative location of the centre and relative size of the feature.
  • the method may include the step of discarding colour information of the image portion.
  • the method may include the step of normalizing the contrast of the image portion.
  • the method may include the step of compiling a descriptor list relating to the person.
  • Matching may be done by comparing descriptors in a first list to descriptors stored in a second list.
  • the method may include the step of authenticating the person if the number of descriptors in the first list matching descriptors in the second list is above a predetermined threshold.
  • the method may include the step of prompting the person to direct his gaze in a requested direction.
  • a system for eye biometric authentication comprising:
  • an electronic device including a processor and memories
  • the device including a capturing device for capturing a series of images
  • a method for detecting an eye in an image comprising the steps of:
  • each image including the relative location of an iris of the eye
  • the lines include information relating to the distance from the points;
  • determining intersections of portions of lines at proximate distances;
  • calculating a centre and size of a partial circle formed by the points;
  • the method may include the preceding step of compiling the number of images of facial features.
  • the image including a view of an eye of the person with a known direction of gaze
  • the extracted portion is part of the sclera or white of the eye and is extracted on a side of the iris opposing the direction of gaze of the person.
  • the step of detecting the position of the iris includes detecting the size, as a radius or diameter in pixels, of the iris.
  • the step of detecting the iris may include detecting both irises of the person including the size and position of the irises.
  • the method may include the step of estimating pose, tilt, position, and angles of a head of the person from the size and position of the irises.
  • the step of detecting the iris may be preceded by the step of detecting an eye of the person in the image. Detecting the eye may be in the form of detecting both eyes in the image.
  • the step of detecting the eyes may be performed making use of a Haar-cascade classifier which is pre-trained to locate eyes.
  • the step of detecting the iris may include
  • the evaluation may be performed by evaluating the derivative of the average intensities against the distance from the evaluated pixel and finding a local maximum therein;
  • the arcs may be portions of the perimeter of an ellipse with a height equal to the distance from the source pixel wherein the portions are between an upper and lower arc angle.
  • the ellipse may be a circle wherein the width of the ellipse is equal to the height thereof.
  • the arc angles may be measured from the horizontal, wherein the upper arc angle is the angle between the horizontal and a line between the source pixel and the upper extremity of the arc, and the lower arc angle is the angle between the horizontal and the lower extremity of the arc.
  • the arc angle may be between 10 degrees and 60 degrees.
  • the source pixel with the highest local maximum indicates the position of the iris and the distance from the source pixel at which the maximum occurs indicates the radius of the iris in pixels.
  • the ellipticality or eccentricity of the arcs is changed by changing the ratio of the width to the height of the ellipse of which the arc forms part.
  • the ratio is changed between a value of 1, wherein the width and height of the ellipse are equal, and 0.5, wherein the width is half of the height of the ellipse.
  • the extracted portion may be extracted by extracting a rectangular portion on a side opposing the gaze of the person.
  • the rectangle may be a rectangle with an inner edge at a distance equal to half the height of the iris, the outer edge at a distance of twice the height from the inner edge, and upper and lower edges being separated by a distance equal to the height.
  • the inner edge is the edge closest to the iris on a side opposing the direction of gaze of the person.
  • Figure 1 is a schematic representation of a person using a mobile computing device for eye biometric authentication and an image obtained from the mobile computing device;
  • Figure 2 is a schematic representation depicting aspects of an eye biometric authentication system and method
  • Figure 3 is a schematic representation showing various directions of gaze of an eye
  • Figure 4 is a schematic representation showing rotation of an eye in the image plane
  • Figure 5 is a schematic representation depicting aspects of detecting an eye in an image
  • Figure 6 is a schematic representation depicting aspects of detecting an eye in an image
  • Figure 7 is a schematic representation of an image portion containing eyes and processing steps performed on the portion;
  • Figure 8 is a schematic representation of an extracted image portion containing an eye and a schematic representation of a source pixel, upper and lower bounds, and an arc of pixels;
  • Figure 9 is a schematic representation of arcs with changing ellipticality
  • Figure 10 is a schematic representation of a source pixel, upper and lower bounds, and an arc of pixels, a graph showing intensity against distance, and a representation of a portion of an image containing an eye
  • Figure 11 is a schematic representation depicting aspects of compiling a database of descriptors
  • Figure 12 is a schematic representation showing image portions containing eyes after various stages of image processing
  • Figure 13 is a schematic representation showing features detected in an image including an enlarged view of one of the features
  • Figure 14 is a schematic representation of a feature descriptor
  • Figure 15 is a schematic representation depicting aspects of sampling a feature
  • Figure 16 is a schematic representation of a matching procedure.
  • the method includes the step of obtaining an image 8 of, at least part of, a person 5.
  • the image may be obtained by capturing a series of images 2, for example the frames of a video stream from a webcam 3, connected to a computer 4, or a mobile phone's 4 built-in camera 3.
  • the image 8 is obtained from a mobile computing device, such as a tablet computer 4, which includes a front-facing camera 3 which captures images of the person 5.
  • At least some of the images 2 should include a portion containing an image of an eye 6.
  • the camera 3 need not be a general purpose camera such as would typically be used in a webcam 3 or phone 4 and may be a specialized capturing device capable of capturing a series of images 2 which represent radiation emitted from a person's 5 face 312 and eye(s) 6 regardless of whether the captured radiation is in the visible spectrum of light or not.
  • a display 306 of the tablet 4 displays the image(s) 8 being captured and may also include instructions 308 to the person 5 to direct his gaze in a specific direction (indicated by arrow numbered 307 in figure 1 and being to the right of the person 5 and left relative to the drawing view).
  • the image 8 includes a view of at least one eye 6 of the person 5 with a known direction of gaze 307.
  • the system 1 typically includes a processor along with associated memories for executing instructions including the method of eye biometric authentication.
  • the processor and memories are typically located in the device 4 and an algorithm embodying the method is executed using the processor and memories.
  • Each image 8 is individually evaluated to determine whether or not an eye 6 is present in the image 8. Embodiments of detecting the eye 6, and more specifically the iris 321, in the image are described in more detail below. If the eye 6 is not detected or if it is detected that the eye 6 is too far away or slightly out of the frame 8, the device 4 should prompt a user 5 to move to the correct position within the frame 8.
  • the method 200 of detecting an eye 6 in an image 8 includes the step of detecting edges in the image 8a.
  • Edge detection is known in the art and typically identifies points in the image 8a where brightness changes sharply; the points are arranged into sets forming curved lines known as edges.
  • points in the image 8 wherein the first order derivative is greater than a predetermined threshold may be used.
  • edges may be located about the person's 5 eyebrow 201 , outline of the lips 202, at the periphery of the nose 203, and at the transition from the iris to the sclera of the eye 204.
  • a feature extraction technique such as the generalized Hough transform, may be employed to detect partial circles among the edges.
  • the detected partial circles will typically include the curvature on the side of the nose 205, the curvature of the eyebrow 206, and the exposed portions of the edge of the iris (shown as dashed line 207).
  • a modified form of the generalized Hough transform wherein votes cast 208 include both direction and distance to determine approximate centres (209, 210, and 211), and radii (213, 214, and 215) associated with the respective partial circles (205, 206, and 207) may be used.
  • an image 8a may include further partial circles, for example partial circle 212, which are not used in the current example but may be utilized in a similar fashion.
  • a square image portion 216 surrounding the centre 209 is extracted with the size of the image portion side roughly four times the radius 213.
  • the extracted image portion 216 is re-sampled to a predetermined number of pixels and stored along with the radius 213.
  • the image portion 216 is compared to a database 217 of templates 218.
  • the templates 218 are classified as eye-features, being features which contain an eye, (for example 218a, 218b, and 218c) and eyeless features, being features not containing eyes (for example 218d).
  • Eye-feature templates 218 include the relative position 220 of an iris relative to the centre 219 thereof.
  • the image portion 216 may either be discarded as an eyeless feature, or the possible position of the iris in the image portion 216 can be determined.
  • each partial circle detected and matched to an eye-feature template 218 in the database 217 gives an indication of the position of the iris in the image 8 and if the indications are close enough an eye 6 is detected.
  • the image 8 is first processed to extract a portion of the image containing an eye pair 310.
  • This may be done using, for example, a Haar-cascade-classifier which has been pre-trained to detect an eye pair 310.
  • the classifier may also be used to detect individual eyes (6r and 6I), or a face 312 with some changes to the extraction process described below.
  • it is also possible to detect the irises 321, in accordance with the procedure described further below, to estimate the pose, tilt, and rotation of the eye pair and adjust the image (8 or 310) such that the eyes are positioned on the horizontal 311.
  • the eye pair portion 310 is further processed, in the current example using fixed dimensions, to extract a portion of the image containing the right eye 6r or the left eye 6I. This is done by removing a percentage 313 of 5% (five percent) of the image portion from each side of the image portion 310. The remaining image portion is then divided into thirds 314, wherein the first third will contain the right eye 6r and the last third will contain the left eye 6I. This provides an image portion 320 which contains an eye 6 for further processing.
  • the term arc refers to portions of the perimeter of an ellipse 329a or circle 329b, between upper and lower arc angles, or a combined arc angle 337 where the upper and lower arc angles are equal.
  • the arc is a portion of the ellipse 329 which falls within the angle 337 on a side opposing the direction of gaze 307 of the person 5.
  • the portion 320 is processed to determine the position of an iris 321 in the image portion 320.
  • the iris 321 is detected by searching for rapid increases in intensity of the image 320 in the circular or elliptical domain. This is done by evaluating, for every source pixel 322 in the portion 320, the average intensity of pixels 323 on arcs 324 of increasing radial distance 325 from the source pixel 322.
  • the radial distance 325 is increased between a minimum distance 326 and a maximum distance 327, which is typically expressed as the radius or height 325 measured in pixels.
  • the terms height and radius as used herein refer to the distance from the source pixel to an extremity of the ellipse.
  • the average intensity of all pixels 323 falling on the arc at the radial distance 325 is recorded. This data may be represented by the graph shown in figure 10 and is typically stored electronically.
  • the ellipticality, or eccentricity, of the arc is then changed (as illustrated in figure 9) by decreasing the width 328 relative to the height 325 between ratios of 1, wherein the width 328 is equal to the height 325 resulting in a circular arc, and 0.5, wherein the width 328 is half the height 325.
  • the ellipticality is changed to account for changes in the ellipticality, in the image plane, of the person's 5 iris 321 as a result of directing his gaze to a side. For each ratio of ellipticality the pixels on the arcs between the minimum distance 326 and maximum distance 327 are evaluated and the average intensities of the pixels 323 on the arcs 324 are recorded.
  • the recorded average intensities against distance 325 from the source pixel 322, for each ratio of ellipticality may be represented as a series of graphs 330.
  • various forms of electronic data structures may be used in computer code to record and evaluate the data, such as arrays or lists or complex data structures or objects.
  • the recorded data 330 is differentiated to determine the rate of change of the intensities against distance 325. This will provide, in appropriate circumstances and where there is a rapid increase in intensity, data structures which may be represented as graph 331.
  • the data is evaluated to find a sudden increase in intensity 332 by finding local maxima 333 in the derivative data 331 .
  • the maximum rate of change 334 per dataset 331 is compared to the maximum rate of change 334 of all other datasets for every pixel in the image portion 320 and for every ratio of ellipticality.
  • the pixel having the highest peak value in its dataset 331 is selected as the centre 335 of the iris 321 and the distance 336 at which the highest value in dataset 331 is found is recorded as the radius (for circular arcs) or height 336 (of elliptical arcs) of the iris 321.
  • the rectangular portion 340 will contain part of the vascular structure 17 of the sclera or white of the eye and biometric authentication may be performed thereon.
  • a database 217 is compiled from pre-recorded footage 222 by detecting partial circles and extracting image portions 224 from frames 223 in the footage 222.
  • the image portions 224 are compared to each other and similar portions are grouped together and the values thereof combined to create templates 218. Additional information is manually added to each template 218, which may include:
  • the position of the eye may be indicated outside of the template 218, in the extended image space 221.
  • a user 5 to be authenticated will use the device 4 and camera 3 to capture images 8 of himself 5. If an eye 6 is detected within the frame 8, it is first determined whether the eye 6 is the left or right eye of the user 5. This is done during the eye detection processes described above. Once it is known whether the image contains the left or right eye 6, the direction of the gaze of the eye may be determined. The gaze may be, in the case of a left eye 6I being detected, to the right 10, left 11, straight ahead 12, up 13, or down 14 (as shown in Figure 3). Again, the device 4 will prompt 308 the user 5 to direct his/her gaze in the required direction 307.
  • the prompt may be audible, as the user 5 will be directing his/her gaze in a direction other than the direction of the device 4, and may be in the form of a voice prompt, wherein the device gives voice directions to the person to appropriately direct his/her gaze.
  • the prompt may also be beeps or another audible noise wherein the user 5 may be trained to recognize the noise and react appropriately.
  • An important step in authentication making use of eye biometrics is to determine whether the sequence of images represents a living person 5, in contrast to, for example, a picture of a user's eye. This is referred to in the art as "liveness testing". Inherent in the process described above is the requirement that the user direct his/her gaze in different directions during the capture of the series of images 2. Within each image wherein an eye is detected, the direction of the gaze is also determined. Liveness testing is correspondingly done by ensuring that there is a difference in the direction of gaze at various parts of the series of images 2. It may also be necessary to randomize the sequence of requested gaze directions in order to prevent a series of images, such as a video clip played on a mobile device, from being given as input to bypass the liveness testing.
  • once an appropriate image portion (15 or 340) is obtained, it may be processed to obtain the unique features of the eye 6, and more specifically the eye vasculature 17.
  • the vein structure of a person's 5 eye is sufficiently unique to, within an acceptable margin of error, determine whether two images 8 of an eye, taken at different time intervals, belong to the same person 5.
  • the steps to obtain and record the information present in a person's 5 eye in electronic form are described below. Initially, a number of pre-processing steps need to be performed. Specifically, the colour information in the image 15a is discarded, effectively making the image a greyscale image 15b. Thereafter, the image may be normalized 15c to compensate for varying light sources and shadows which are present at the time of capturing the image.
  • the normalized greyscale image 15c is processed by converting the image to Gaussian scale space using known image processing methods, such as the difference of Gaussians (DoG) method.
  • DoG (difference of Gaussians)
  • the feature enhancement and detection algorithm need not be limited to the DoG method as various other edge or corner detection algorithms may be used and optimized to identify features in appropriately extracted image portions (15 or 340).
  • the identified features 16 are extracted by extracting circular regions 16 around the centre point 19 of the feature.
  • the circular region 16 has a diameter of 63 pixels.
  • Each circular region 16 is then processed to obtain a descriptor 18, representing an electronic description of the feature.
  • the descriptor 18 is generated by sampling the circular region at various distances (shown as I in Figure 7) and angles (shown as a in Figure 7) from the centre thereof. In the current example, the region is sampled at incrementing distances of one (1) pixel. These samples are recorded in a data structure, which may be represented as shown in figure 14.
  • the down-sample is performed to give greater weight to detail closer to the centre 19 of the circular image region 16. As the distance 20 from the centre 19 increases, the amount of data present also increases.
  • the data is down sampled by averaging samples between distances, for example:
  • pixels between the centre and a radius of 3 pixels are grouped together and the average values of the image at various angles are recorded, values between 3 pixels and 7 pixels are averaged and recorded, and so on.
  • the circular region may be sampled according to a predetermined pattern (as shown in figure 15).
  • the direction 24 in which the image portion 16 is sampled is also recorded.
  • the samples 25 may be obtained as averages of the underlying higher resolution pixel values of the image portion. This allows the image portion 16 to be rotated and matched to a template 18.
  • the descriptor 18 values are normalized between 0 and 1 in floating point format and thereafter converted to an 8-bit unsigned integer to reduce storage space.
  • This descriptor 18, along with information relating to the size 21 (relative to the size of the image portion 16) and relative position (in the image portion 16) of each detected feature, is recorded.
  • the list of descriptors 23 may be stored for comparison later or compared to a previously stored descriptor list.
  • a user may be authenticated when the descriptor list 23a, obtained during an authentication process ("the new list”), is obtained as described above and compared to a descriptor list 23b obtained earlier using the same method and stored ("the old list”). This is referred to in the art as matching. Matching requires that, within a certain threshold and margin for error, the descriptors in the new list match the descriptors in the old list. The process is described below.
  • a database 22 with previously stored descriptors 18 is provided and may be compiled by generating descriptor lists 23 for users as described above.
  • the new list 23a is obtained from a user 5 being authenticated as described above.
  • the old list 23b associated with the user 5 is retrieved from the database 22.
  • the new list 23a is compared to the old list by comparing a first descriptor 18a(i) and finding the closest descriptor 18b(i) in the old list 23b.
  • a second descriptor 18a(ii) is matched to descriptor 18b(ii). This process is repeated for each descriptor 18 in the new list 23a up to and including matching 18a(iii) with 18b(iii).
  • the lists 23 need not be exactly the same size and only the closest matches will be used in the subsequent step.
  • a smaller group (in the current example using three) of matched descriptors (18a and 18b) is chosen randomly and the point set fitting of the centres 19 of the new descriptors 18b is calculated to estimate the rotation, translation, and scaling between the centres in the old list 23b and the new list 23a. Further, a value is calculated representing the error in fitting the points as calculated using the sum of the positional errors whilst compensating with the best estimation for rotation, translation and scale. This process is repeated with random groups of three descriptors from the old and new lists and recorded and analyzed as a probability distribution function using the kernel density estimation method to obtain the closest rotation, translation, and scale estimate.
  • the new descriptors 18a may then be re- matched, taking into account the closest rotation, translation, and scale estimates, to the positionally closest old descriptors 18b. If the percentage of matching descriptors is greater than a predetermined threshold, the user will be authenticated.
  • the invention will provide a method and system for eye biometric authentication which is robust and can be performed on consumer devices such as smart phones, tablets or computers with webcams. It is further envisaged that the method and system will provide an alternative to existing biometric methods and systems. It is further envisaged that the method will provide means of extracting portions of the image containing the sclera and/or parts of the vascular structures of the eye with efficiency and accuracy.
  • the invention is not limited to the precise details as described herein.
  • instead of prompting the user to direct his or her gaze in the appropriate direction using audible prompts, the user may be directed to follow a moving dot on a tablet screen or be given written instructions.
  • a number of images of the eye(s) may be used, instead of a single image, to compile a single descriptor list.

Abstract

The invention relates to a method of authenticating a person (5). The method includes the steps of obtaining images (8) which include an eye (6) of the person (5) therein, determining the side and direction of gaze (307) of the eye (6), extracting a portion of the image (15), detecting features in the extracted portion, comparing the detected features to stored features relating to the person (5), and authenticating the person (5) if the detected features match the stored features. Extracting a portion of the image is preceded by determining the size and position of an iris (321), and the image portion (310) is extracted on a side of the iris (321) opposing the direction of gaze (307). The size and position of the extracted portion (340) of the image is based on the size and position of the iris (321).

Description

METHOD OF AUTHENTICATING A PERSON
FIELD OF THE INVENTION
The invention relates to a method of authenticating a person and more specifically, but not exclusively, to eye biometric authentication using the conjunctiva of the eye.
BACKGROUND TO THE INVENTION
Identification and authentication using biometrics, and specifically eye and conjunctival biometrics, are known in the art.
United States patent number 6,665,426 in the name of West Virginia Research Corporation entitled "Method of biometric identification of an individual and associated apparatus" discloses a method of identifying an individual including the steps of impinging radiation on perilimbal structures of the eye and employing image information acquired from palisades and interpalisades for comparison with stored image information to determine if an identity match has been obtained. The palisades and interpalisades are normally not visible unless the eye and/or eyelids and forebrows are placed in a certain position, thereby minimizing the risk of inadvertent delivery of this information. In a preferred approach a plurality of palisades and interpalisades are monitored in terms of at least one physical characteristic such as width, length, angular orientation, shape, branching pattern, curvature and spacing, with the parameters being converted to binary data sequence form.
United States patent number 7,327,860 in the name of West Virginia University and The Curators of the University of Missouri entitled "Conjunctival Scans for Personal Identification" discloses obtaining conjunctival vascular images from an individual's eye for purposes of creating a multi-dimension, non-iris based biometric. The biometric can be used for identification or authentication purposes. Multi-dimensional correlation processing can be used to evaluate pairs of biometrics.
United States patent number 8,369,595 in the name of EyeVerify LLC entitled "Texture features for biometric authentication" discloses technologies relating to biometric authentication based on images of the eye. It discloses a first aspect embodied in methods that include obtaining one or more image regions from a first image of an eye. Each of the image regions may include a view of a respective portion of the white of the eye. The method may further include applying several distinct filters to each of the image regions to generate a plurality of respective descriptors for the region. The several distinct filters may include convolutional filters that are each configured to describe one or more aspects of an eye vasculature and in combination describe a visible eye vasculature in a feature space. A match score may be determined based on the generated descriptors and based on one or more descriptors associated with a second image of eye vasculature.
One of the problems with the prior art is that it is difficult to identify and extract relevant portions of an image for eye biometric authentication.
OBJECT OF THE INVENTION
It is accordingly an object of the invention to provide a method of authenticating a person which, at least partially, alleviates some of the problems associated with the prior art.
SUMMARY OF THE INVENTION
In accordance with the invention there is provided a method of authenticating a person comprising the steps of:
- obtaining an image of part of a person;
- detecting an eye of the person in the image;
- determining the side, direction of gaze, and orientation of the eye;
- extracting a portion of the image containing the eye;
- detecting features in the extracted portion; and
- matching the detected features to stored features relating to the person.
The method may include the step of detecting edges or gradients in the image portion.
The edges may be evaluated to locate partial circles in the image portion. The partial circles may be located using the generalized Hough transform. A modified form of the generalized Hough transform, wherein points along an edge vote in terms of direction and distance, to determine the radius or diameter of the partial circle, may also be used.
The method may include the step of extracting circular image portions about the features.
The method may include the step of generating descriptors of the features.
The descriptors may be stored as intensity values including a relative location of the centre and relative size of the feature.
The method may include the step of discarding colour information of the image portion.
The method may include the step of normalizing the contrast of the image portion.
The method may include the step of compiling a descriptor list relating to the person.
Matching may be done by comparing descriptors in a first list to descriptors stored in a second list.
The method may include the step of authenticating the person if the number of descriptors in the first list matching descriptors in the second list is above a predetermined threshold. The method may include the step of prompting the person to direct his gaze in a requested direction.
According to a second aspect of the invention there is provided a system for eye biometric authentication comprising:
- an electronic device including a processor and memories;
- the device including a capturing device for capturing a series of images; and
- stored instructions for executing a method of authenticating a person as described above.
According to a third aspect of the invention there is provided a method for detecting an eye in an image comprising the steps of:
- providing a number of images of facial features;
- each image including the relative location of an iris of the eye;
- detecting partial circles in the image by:
o locating points having brightness gradients greater than a predetermined threshold in the image;
o projecting lines from a number of proximate points,
o the lines include information relating to the distance from the points;
o determining intersections of portions of lines at proximate distances;
o calculating a centre and size of a partial circle formed by the points;
- comparing the image portions in circles constructed from the partial circles with the images of facial features; and
- calculating the position of the eye from the relative locations of the iris relating to the images of the facial features. The method may include the preceding step of compiling the number of images of facial features.
In accordance with another aspect of the invention there is provided a method of eye biometric authentication of a person comprising the steps of:
- obtaining an image of, at least part of, the person;
- the image including a view of an eye of the person with a known direction of gaze;
- determining the position of an iris of the person in the image; and
- using the determined position of the iris to extract a portion of the image; and
- performing eye biometric authentication using the extracted portion.
The extracted portion is part of the sclera or white of the eye and is extracted on a side of the iris opposing the direction of gaze of the person.
The step of detecting the position of the iris includes detecting the size, as a radius or diameter in pixels, of the iris.
The step of detecting the iris may include detecting both irises of the person including the size and position of the irises. The method may include the step of estimating pose, tilt, position, and angles of a head of the person from the size and position of the irises. The step of detecting the iris may be preceded by the step of detecting an eye of the person in the image. Detecting the eye may be in the form of detecting both eyes in the image.
The step of detecting the eyes may be performed making use of a Haar-cascade classifier which is pre-trained to locate eyes.
The step of detecting the iris may include:
- evaluating the average intensity of pixels on arcs of increasing arc distance from a source pixel being evaluated;
- increasing the arc distance from an initial distance to a final distance from the source pixel, wherein the arc distance may be expressed as a radius or height measured in pixels;
- recording the average intensities for each arc between the initial distance and the final distance;
- changing the ellipticality of the arcs and repeating the previous steps;
- evaluating the recorded average intensities on arcs to find an increase in intensity; the evaluation may be performed by evaluating the derivative of the average intensities against the distance from the evaluated pixel and finding a local maximum therein;
- comparing the found maxima of source pixels to each other and recording the position and distance of the highest maxima among all source pixels.
The arcs may be portions of the perimeter of an ellipse with a height equal to the distance from the source pixel wherein the portions are between an upper and lower arc angle. The ellipse may be a circle wherein the width of the ellipse is equal to the height thereof.
The arc angles may be measured from the horizontal, wherein the upper arc angle is the angle between the horizontal and a line between the source pixel and the upper extremity of the arc, and the lower arc angle is the angle between the horizontal and the lower extremity of the arc.
The arc angle may be between 10 degrees and 60 degrees.
The source pixel with the highest local maximum indicates the position of the iris and the distance from the source pixel at which the maximum occurs indicates the radius of the iris in pixels.
The ellipticality or eccentricity of the arcs is changed by changing the ratio of the width to the height of the ellipse of which the arc forms part.
The ratio is changed between a value of 1, wherein the width and height of the ellipse are equal, and 0.5, wherein the width is half of the height of the ellipse.
The extracted portion may be extracted by extracting a rectangular portion on a side opposing the gaze of the person. Specifically, the rectangle may be a rectangle with an inner edge at a distance equal to half the height of the iris, the outer edge at a distance of twice the height from the inner edge, and upper and lower edges being separated by a distance equal to the height. The inner edge is the edge closest to the iris on a side opposing the direction of gaze of the person.
BRIEF DESCRIPTION OF THE DRAWINGS
The invention will now be further described, by way of example only, with reference to the accompanying drawings wherein:
Figure 1 is a schematic representation of a person using a mobile computing device for eye biometric authentication and an image obtained from the mobile computing device;
Figure 2 is a schematic representation depicting aspects of an eye biometric authentication system and method;
Figure 3 is a schematic representation showing various directions of gaze of an eye;
Figure 4 is a schematic representation showing rotation of an eye in the image plane;
Figure 5 is a schematic representation depicting aspects of detecting an eye in an image;
Figure 6 is a schematic representation depicting aspects of detecting an eye in an image;
Figure 7 is a schematic representation of an image portion containing eyes and processing steps performed on the portion;
Figure 8 is a schematic representation of an extracted image portion containing an eye and a schematic representation of a source pixel, upper and lower bounds, and an arc of pixels;
Figure 9 is a schematic representation of arcs with changing ellipticality;
Figure 10 is a schematic representation of a source pixel, upper and lower bounds, and an arc of pixels, a graph showing intensity against distance, and a representation of a portion of an image containing an eye;
Figure 11 is a schematic representation depicting aspects of compiling a database of descriptors;
Figure 12 is a schematic representation showing image portions containing eyes after various stages of image processing;
Figure 13 is a schematic representation showing features detected in an image including an enlarged view of one of the features;
Figure 14 is a schematic representation of a feature descriptor;
Figure 15 is a schematic representation depicting aspects of sampling a feature;
and
Figure 16 is a schematic representation of a matching procedure.
DETAILED DESCRIPTION OF THE DRAWINGS
With reference to the drawings, in which like features are indicated by like numerals, a method of authenticating a person is generally designated by reference numeral 1.
IMAGE CAPTURE
The method includes the step of obtaining an image 8 of, at least part of, a person 5. The image may be obtained by capturing a series of images 2, for example the frames of a video stream from a webcam 3, connected to a computer 4, or a mobile phone's 4 built-in camera 3. In this example the image 8 is obtained from a mobile computing device, such as a tablet computer 4, which includes a front-facing camera 3 which captures images of the person 5. At least some of the images 2 should include a portion containing an image of an eye 6. The camera 3 need not be a general purpose camera such as would typically be used in a webcam 3 or phone 4 and may be a specialized capturing device capable of capturing a series of images 2 which represent radiation emitted from a person's 5 face 312 and eye(s) 6 regardless of whether the captured radiation is in the visible spectrum of light or not.
A display 306 of the tablet 4 displays the image(s) 8 being captured and may also include instructions 308 to the person 5 to direct his gaze in a specific direction (indicated by arrow numbered 307 in figure 1 and being to the right of the person 5 and left relative to the drawing view). The image 8 includes a view of at least one eye 6 of the person 5 with a known direction of gaze 307.
The system 1 typically includes a processor along with associated memories for executing instructions including the method of eye biometric authentication. The processor and memories are typically located in the device 4 and an algorithm embodying the method is executed using the processor and memories.
EYE DETECTION
Each image 8 is individually evaluated to determine whether or not an eye 6 is present in the image 8. Embodiments of detecting the eye 6, and more specifically the iris 321, in the image are described in more detail below. If the eye 6 is not detected or if it is detected that the eye 6 is too far away or slightly out of the frame 8, the device 4 should prompt a user 5 to move to the correct position within the frame 8.
First Embodiment
The method 200 of detecting an eye 6 in an image 8 includes the step of detecting edges in the image 8a. Edge detection is known in the art and typically identifies points in the image 8a where brightness changes sharply; the points are arranged into sets forming curved lines known as edges. Alternatively, points in the image 8 wherein the first order derivative is greater than a predetermined threshold may be used. In the current example, and for images of a person's 5 face in general, edges may be located about the person's 5 eyebrow 201, outline of the lips 202, at the periphery of the nose 203, and at the transition from the iris to the sclera of the eye 204.
A feature extraction technique, such as the generalized Hough transform, may be employed to detect partial circles among the edges. The detected partial circles will typically include the curvature on the side of the nose 205, the curvature of the eyebrow 206, and the exposed portions of the edge of the iris (shown as dashed line 207). A modified form of the generalized Hough transform, wherein votes cast 208 include both direction and distance to determine approximate centres (209, 210, and 211), and radii (213, 214, and 215) associated with the respective partial circles (205, 206, and 207), may be used. It is important to note that an image 8a may include further partial circles, for example partial circle 212, which are not used in the current example but may be utilized in a similar fashion. For each partial circle identified, the procedure as described hereafter with reference to partial circle 205 is followed. A square image portion 216 surrounding the centre 209 is extracted with the size of the image portion side roughly four times the radius 213. The extracted image portion 216 is re-sampled to a predetermined number of pixels and stored along with the radius 213. At this stage the image portion 216 is compared to a database 217 of templates 218. The templates 218 are classified as eye-features, being features which contain an eye (for example 218a, 218b, and 218c), and eyeless features, being features not containing eyes (for example 218d). The compilation of this database is described in more detail further below. Eye-feature templates 218 include the relative position 220 of an iris relative to the centre 219 thereof. Once the image portion 216 is matched to an appropriate template 218, using the computed relative scale and rotational information, the image portion may either be discarded as an eyeless feature, or the possible position of the iris in the image portion 216 can be determined. This way, each partial circle detected and matched to an eye-feature template 218 in the database 217 gives an indication of the position of the iris in the image 8 and if the indications are close enough, an eye 6 is detected.
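The voting scheme described above can be sketched roughly as follows. This is a minimal illustration only: the gradient threshold, radius range, and the function name find_partial_circles are chosen for the example and are not taken from the patent.
```python
import numpy as np

def find_partial_circles(gray, r_min=8, r_max=60, grad_thresh=40.0):
    """Sketch of the modified Hough voting: each strong edge point casts votes
    along its gradient direction at a range of distances, so an accumulator
    peak yields both an approximate centre and the associated radius."""
    gy, gx = np.gradient(gray.astype(float))
    mag = np.hypot(gx, gy)
    h, w = gray.shape
    acc = np.zeros((h, w, r_max - r_min), dtype=np.uint32)
    ys, xs = np.nonzero(mag > grad_thresh)
    for y, x in zip(ys, xs):
        dy, dx = gy[y, x] / mag[y, x], gx[y, x] / mag[y, x]
        for r in range(r_min, r_max):
            # vote towards the darker side of the edge (the iris is darker
            # than the surrounding sclera), i.e. against the gradient
            cy, cx = int(round(y - dy * r)), int(round(x - dx * r))
            if 0 <= cy < h and 0 <= cx < w:
                acc[cy, cx, r - r_min] += 1
    cy, cx, ri = np.unravel_index(np.argmax(acc), acc.shape)
    return (cy, cx), ri + r_min
```
In practice several of the strongest accumulator peaks would be kept, since the nose, eyebrow, and iris all produce partial circles as noted above.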
Second embodiment
In this embodiment, and with reference specifically to figure 7, the image 8 is first processed to extract a portion of the image containing an eye pair 310. This may be done using, for example, a Haar-cascade classifier which has been pre-trained to detect an eye pair 310. It will be appreciated that the classifier may also be used to detect individual eyes (6r and 6I), or a face 312, with some changes to the extraction process described below. At this stage it is also possible to detect the irises 321, in accordance with the procedure described further below, to estimate the pose, tilt, and rotation of the eye pair and adjust the image (8 or 310) such that the eyes are positioned on the horizontal 311. The eye pair portion 310 is further processed, in the current example using fixed dimensions, to extract a portion of the image containing the right eye 6r or the left eye 6I. This is done by removing a percentage 313 of 5% (five percent) of the image portion from each side of the image portion 310. The remaining image portion is then divided into thirds 314, wherein the first third will contain the right eye 6r and the last third will contain the left eye 6I. This provides an image portion 320 which contains an eye 6 for further processing.
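A minimal sketch of this fixed-dimension extraction is given below. It assumes the eye-pair bounding box has already been located, for example by a pre-trained Haar cascade, and the function and parameter names are illustrative only.
```python
import numpy as np

def split_eye_pair(eye_pair: np.ndarray, side_margin: float = 0.05):
    """Trim 5% from each side of the eye-pair portion 310 and take the first
    and last thirds as the right-eye and left-eye portions respectively."""
    h, w = eye_pair.shape[:2]
    trim = int(round(w * side_margin))
    trimmed = eye_pair[:, trim:w - trim]
    third = trimmed.shape[1] // 3
    right_eye = trimmed[:, :third]       # first third contains the right eye (6r)
    left_eye = trimmed[:, 2 * third:]    # last third contains the left eye (6I)
    return right_eye, left_eye
```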
For clarity, the term arc as used below refers to portions of the perimeter of an ellipse 329a or circle 329b, between upper and lower arc angles, or a combined arc angle 337 where the upper and lower arc angles are equal. The arc is a portion of the ellipse 329 which falls within the angle 337 on a side opposing the direction of gaze 307 of the person 5.
The portion 320 is processed to determine the position of an iris 321 in the image portion 320. The iris 321 is detected by searching for rapid increases in intensity of the image 320 in the circular or elliptical domain. This is done by evaluating, for every source pixel 322 in the portion 320, the average intensity of pixels 323 on arcs 324 of increasing radial distance 325 from the source pixel 322. The radial distance 325 is increased between a minimum distance 326 and a maximum distance 327, which is typically expressed as the radius or height 325 measured in pixels. For clarity, the terms height and radius as used herein refer to the distance from the source pixel to an extremity of the ellipse. For every arc 324 between the minimum and maximum distance, the average intensity of all pixels 323 falling on the arc at the radial distance 325 is recorded. This data may be represented by the graph shown in figure 10 and is typically stored electronically. The ellipticality, or eccentricity, of the arc is then changed (as illustrated in figure 9) by decreasing the width 328 relative to the height 325 between ratios of 1, wherein the width 328 is equal to the height 325 resulting in a circular arc, and 0.5, wherein the width 328 is half the height 325. The ellipticality is changed to account for changes in the ellipticality, in the image plane, of the person's 5 iris 321 as a result of directing his gaze to a side. For each ratio of ellipticality the pixels on the arcs between the minimum distance 326 and maximum distance 327 are evaluated and the average intensities of the pixels 323 on the arcs 324 are recorded.
For demonstration purposes, the recorded average intensities against distance 325 from the source pixel 322, for each ratio of ellipticality, may be represented as a series of graphs 330. Those skilled in the art will appreciate that various forms of electronic data structures may be used in computer code to record and evaluate the data, such as arrays or lists or complex data structures or objects. The recorded data 330 is differentiated to determine the rate of change of the intensities against distance 325. This will provide, in appropriate circumstances and where there is a rapid increase in intensity, data structures which may be represented as graph 331. The data is evaluated to find a sudden increase in intensity 332 by finding local maxima 333 in the derivative data 331. The maximum rate of change 334 per dataset 331 is compared to the maximum rate of change 334 of all other datasets for every pixel in the image portion 320 and for every ratio of ellipticality. The pixel having the highest peak value in its dataset 331 is selected as the centre 335 of the iris 321 and the distance 336 at which the highest value in dataset 331 is found is recorded as the radius (for circular arcs) or height 336 (of elliptical arcs) of the iris 321.
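The search can be sketched roughly as follows. The angular window, distance range, ellipticality ratios, and the brute-force iteration over every source pixel are simplifying assumptions for illustration, not the patent's implementation.
```python
import numpy as np

def locate_iris(gray, r_min=10, r_max=40, arc_angle_deg=40,
                gaze_left=True, ratios=(1.0, 0.75, 0.5)):
    """For each candidate source pixel, average the intensity on elliptical
    arcs of growing radial distance, differentiate the profile, and keep the
    pixel whose profile shows the sharpest dark-to-light jump (the boundary
    between the dark iris and the bright sclera)."""
    h, w = gray.shape
    gray = gray.astype(float)
    # arcs lie on the side opposing the direction of gaze (assumed orientation)
    base = 0.0 if gaze_left else np.pi
    angles = base + np.deg2rad(np.linspace(-arc_angle_deg, arc_angle_deg, 15))
    best_peak, best_centre, best_radius = -np.inf, None, None
    for ratio in ratios:                              # ellipticality of the arcs
        for y in range(r_max, h - r_max):
            for x in range(r_max, w - r_max):
                profile = []
                for r in range(r_min, r_max):
                    px = (x + ratio * r * np.cos(angles)).astype(int)
                    py = (y + r * np.sin(angles)).astype(int)
                    profile.append(gray[py, px].mean())
                d = np.diff(profile)                  # rate of change vs distance
                if d.max() > best_peak:
                    best_peak = d.max()
                    best_centre = (y, x)
                    best_radius = r_min + int(d.argmax())
    return best_centre, best_radius
```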
With the position 335 and height 336 of the iris known, it is possible to extract the portion of image 320 containing the sclera or white of the person's 5 eye 6. This is done by extracting a rectangular portion 340, with a width 341 equal to four times the height 336 of the iris 321 and a height 342 equal to twice the height 336 of the iris, with its inner edge 343 (the edge closest to the iris 321) offset 344 by the height 336 of the iris 321 in a direction opposing the direction of gaze 307.
The rectangular portion 340 will contain part of the vascular structure 17 of the sclera or white of the eye and biometric authentication may be performed thereon.
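A short sketch of this geometric extraction follows. It interprets the inner-edge offset as being measured from the iris centre, which is an assumption on the text above.
```python
import numpy as np

def extract_sclera_patch(gray, centre, iris_h, gaze_left=True):
    """Extract the rectangular portion 340: width four times and height twice
    the iris height 336, with the inner edge offset by the iris height on the
    side opposing the direction of gaze."""
    cy, cx = centre
    width, height = 4 * iris_h, 2 * iris_h
    x0 = cx + iris_h if gaze_left else cx - iris_h - width
    y0 = cy - height // 2
    y0, x0 = max(y0, 0), max(x0, 0)
    return gray[y0:y0 + height, x0:x0 + width]
```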
TEMPLATE DATABASE
A database 217 is compiled from pre-recorded footage 222 by detecting partial circles and extracting image portions 224 from frames 223 in the footage 222. The image portions 224 are compared to each other and similar portions are grouped together and the values thereof combined to create templates 218. Additional information is manually added to each template 218, which may include:
- whether the template contains an eye, and if so,
- whether it is a right eye or a left eye,
- the direction of gaze of the eye,
- the relative location of the eye; and
- the rotation of the eye in the image plane.
In another embodiment, wherein an eye is not visible in the image portion 224, the position of the eye may be indicated outside of the template 218, in the extended image space 221.
USE
In use, a user 5 to be authenticated will use the device 4 and camera 3 to capture images 8 of himself 5. If an eye 6 is detected within the frame 8, it is first determined whether the eye 6 is the left or right eye of the user 5. This is done during the eye detection processes described above. Once it is known whether the image contains the left or right eye 6, the direction of the gaze of the eye may be determined. The gaze may be, in the case of a left eye 6I being detected, to the right 10, left 11, straight ahead 12, up 13, or down 14 (as shown in Figure 3). Again, the device 4 will prompt 308 the user 5 to direct his/her gaze in the required direction 307. The prompt may be audible, as the user 5 will be directing his/her gaze in a direction other than the direction of the device 4, and may be in the form of a voice prompt, wherein the device gives voice directions to the person to appropriately direct his/her gaze. The prompt may also be beeps or another audible noise wherein the user 5 may be trained to recognize the noise and react appropriately.
Once appropriate frames 8 with eye(s) having the appropriate direction of gaze are captured, the rotation (z as shown in figure 4) of the eye in the image 8 plane is determined. Frames 8 with resolution below a predetermined threshold (wherein the eye is too far away from the camera) are eliminated. A sharpness metric is calculated to select a frame 8 with the greatest degree of sharpness from the frames with a high enough resolution. Thereafter, making use of the rotation to rotate the image, a rectangular portion 15 of the image is extracted for subsequent processes described below.
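The patent does not name a particular sharpness measure; the sketch below uses the variance of the Laplacian, a common choice, purely as an assumed stand-in, and the resolution threshold value is likewise illustrative.
```python
import numpy as np
from scipy import ndimage

def pick_sharpest(frames, min_height=120):
    """Discard frames whose eye portion is below a resolution threshold and
    return the frame with the highest Laplacian-variance sharpness score."""
    candidates = [f for f in frames if f.shape[0] >= min_height]
    if not candidates:
        return None
    scores = [ndimage.laplace(f.astype(float)).var() for f in candidates]
    return candidates[int(np.argmax(scores))]
```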
An important step in authentication making use of eye biometrics is to determine whether the sequence of images represents a living person 5, in contrast to, for example, a picture of a user's eye. This is referred to in the art as "liveness testing". Inherent in the process described above is the requirement that the user direct his/her gaze in different directions during the capture of the series of images 2. Within each image wherein an eye is detected, the direction of the gaze is also determined. Liveness testing is correspondingly done by ensuring that there is a difference in the direction of gaze at various parts of the series of images 2. It may also be necessary to randomize the sequence of requested gaze directions in order to prevent a series of images, such as a video clip played on a mobile device, from being given as input to bypass the liveness testing.
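A minimal sketch of such a randomized-gaze liveness check might look as follows, assuming the gaze detection from the earlier steps is available as a callable; the prompt count and helper names are illustrative assumptions.
```python
import random

GAZE_DIRECTIONS = ["right", "left", "straight", "up", "down"]

def liveness_check(capture_frame, detect_gaze, prompts=4):
    """Prompt a random sequence of gaze directions and verify that the gaze
    detected in each captured frame follows the sequence, so that a replayed
    clip with a fixed gaze pattern fails the test."""
    sequence = random.sample(GAZE_DIRECTIONS, prompts)
    for requested in sequence:
        frame = capture_frame(requested)   # device prompts the user, then captures
        if detect_gaze(frame) != requested:
            return False
    return True
```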
Once an appropriate image portion (15 or 340) is obtained, it may be processed to obtain the unique features of the eye 6, and more specifically the eye vasculature 17. The vein structure of a person's 5 eye is sufficiently unique to, within an acceptable margin of error, determine whether two images 8 of an eye, taken at different time intervals, belong to the same person 5. The steps to obtain and record the information present in a person's 5 eye in electronic form are described below. Initially, a number of pre-processing steps need to be performed. Specifically, the colour information in the image 15a is discarded, effectively making the image a greyscale image 15b. Thereafter, the image may be normalized 15c to compensate for varying light sources and shadows which are present at the time of capturing the image.
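The pre-processing can be sketched as follows. The patent only requires discarding colour and compensating for lighting, so the particular normalisation used here (dividing by a heavily blurred copy of the image) is an assumption.
```python
import numpy as np
from scipy import ndimage

def preprocess(image, sigma=15.0):
    """Discard colour (15a -> 15b) and normalise away slow illumination
    changes (15b -> 15c) by dividing out a blurred estimate of the lighting."""
    gray = image.mean(axis=2) if image.ndim == 3 else image.astype(float)
    illumination = ndimage.gaussian_filter(gray, sigma) + 1e-6
    norm = gray / illumination
    return (norm - norm.min()) / (norm.max() - norm.min() + 1e-6)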
The normalized greyscale image 15c is processed by converting the image to Gaussian scale space using known image processing methods, such as the difference of Gaussians (DoG) method. This allows features 16 present in the image, not necessarily limited to vein features, to be detected. It is important to note that the feature enhancement and detection algorithm need not be limited to the DoG method as various other edge or corner detection algorithms may be used and optimized to identify features in appropriately extracted image portions (15 or 340).
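A difference-of-Gaussians stack over the normalised greyscale image might look as follows; the scale values are illustrative and, as noted above, other feature enhancement or detection methods could be substituted.

```python
import cv2
import numpy as np

def difference_of_gaussians(grey, sigma, k=1.6):
    # Band-pass response: the difference of two Gaussian blurs emphasises
    # edge- and vein-like structure at roughly the scale of sigma.
    g1 = cv2.GaussianBlur(grey.astype(np.float32), (0, 0), sigmaX=sigma)
    g2 = cv2.GaussianBlur(grey.astype(np.float32), (0, 0), sigmaX=sigma * k)
    return g1 - g2

def dog_scale_space(grey, sigmas=(1.0, 1.6, 2.56, 4.1)):
    # One DoG response per scale; local extrema across the stack can serve as
    # candidate feature points 16.
    return np.stack([difference_of_gaussians(grey, s) for s in sigmas])
```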
The identified features 16 are extracted by extracting circular regions 16 around the centre point 19 of the feature. In the example herein described, the circular region 16 has a diameter of 63 pixels. Each circular region 16 is then processed to obtain a descriptor 18, representing an electronic description of the feature. The descriptor 18 is generated by sampling the circular region at various distances (shown as I in Figure 7) and angles (shown as a in Figure 7) from the centre thereof. In the current example, the region is sampled at distances incrementing by one (1) pixel. These samples are recorded in a data structure, which may be represented as shown in Figure 14.
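A sketch of this polar sampling is given below; the number of sampling angles is an assumption (the patent fixes only the 63-pixel diameter and the one-pixel radial increment), and the radius is taken to 32 pixels so that it covers the down-sampling bands described next.

```python
import numpy as np

def sample_circular_region(grey, centre, max_radius=32, n_angles=32):
    """Sample intensities at one-pixel radial increments and n_angles angles
    around `centre` (row, column). Rows of the result correspond to distance
    from the centre and columns to angle, as in the data structure of Figure 14."""
    cy, cx = centre
    angles = np.linspace(0.0, 2.0 * np.pi, n_angles, endpoint=False)
    samples = np.zeros((max_radius, n_angles), dtype=np.float32)
    for r in range(1, max_radius + 1):
        ys = np.clip(np.round(cy + r * np.sin(angles)).astype(int), 0, grey.shape[0] - 1)
        xs = np.clip(np.round(cx + r * np.cos(angles)).astype(int), 0, grey.shape[1] - 1)
        samples[r - 1] = grey[ys, xs]
    return samples
```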
Thereafter, a nonlinear down-sample is performed on the sample values. The down-sample is performed to give greater weight to detail closer to the centre 19 of the circular image region 16. As the distance 20 from the centre 19 increases, the amount of data present also increases. The data is down-sampled by averaging samples between distances, for example:
3 pixels from the centre (distances 1 - 3);
4 pixels (distances 4 - 7);
4 pixels (distances 8 - 11);
5 pixels (distances 12 - 16);
7 pixels (distances 17 - 23); and
9 pixels (distances 24 - 32).
Therefore, pixels between the centre and a radius of 3 pixels are grouped together and the average values of the image at various angles are recorded; values between radii of 3 pixels and 7 pixels are averaged and recorded; and so on.
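A sketch of this nonlinear down-sample, using the band boundaries listed above and the sampled array from the previous sketch, might look as follows.

```python
import numpy as np

# Inclusive radial bands from the example above: 1-3, 4-7, 8-11, 12-16, 17-23, 24-32.
BANDS = [(1, 3), (4, 7), (8, 11), (12, 16), (17, 23), (24, 32)]

def downsample_radially(samples):
    """samples: array of shape (max_radius, n_angles), row r-1 holding the values
    sampled at distance r from the centre. Returns one averaged row per band, so
    that detail close to the centre 19 is kept at a finer granularity."""
    rows = []
    for first, last in BANDS:
        rows.append(samples[first - 1:last, :].mean(axis=0))
    return np.vstack(rows)
```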
Alternatively, the circular region may be sampled according to a predetermined pattern (as shown in Figure 15). In this embodiment the direction 24 in which the image portion 16 is sampled is also recorded. The samples 25 may be obtained as averages of the underlying higher resolution pixel values of the image portion. This allows the image portion 16 to be rotated and matched to a template 18.
The descriptor 18 values are normalized between 0 and 1 in floating point format and thereafter converted to an 8 bit unsigned integer to reduce storage space. This descriptor 18, along with information relating to the size 21 (relative to the size of the image portion 16) and relative position (in the image portion 16) of each detected feature, is recorded. The list of descriptors 23 may be stored for comparison later or compared to a previously stored descriptor list.
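The quantisation and record-keeping may be sketched as follows; the FeatureDescriptor container is illustrative only.

```python
import numpy as np
from dataclasses import dataclass

def quantise(descriptor):
    # Normalise to [0, 1] in floating point, then store as 8-bit unsigned integers.
    d = descriptor.astype(np.float32)
    span = d.max() - d.min()
    d = (d - d.min()) / span if span > 0 else np.zeros_like(d)
    return np.round(d * 255).astype(np.uint8)

@dataclass
class FeatureDescriptor:
    values: np.ndarray   # quantised samples 18
    size: float          # feature size 21 relative to the image portion 16
    position: tuple      # feature position relative to the image portion 16
```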
MATCHING

A user may be authenticated when a descriptor list 23a obtained during an authentication process ("the new list") is compared to a descriptor list 23b obtained earlier using the same method and stored ("the old list"). This is referred to in the art as matching. Matching requires that, within a certain threshold and margin for error, the descriptors in the new list match the descriptors in the old list. The process is described below.
A database 22 with previously stored descriptors 18 is provided and may be compiled by generating descriptor lists 23 for users as described above. The new list 23a is obtained from a user 5 being authenticated as described above. The old list 23b associated with the user 5 is retrieved from the database 22. The new list 23a is compared to the old list by taking a first descriptor 18a(i) and finding the closest descriptor 18b(i) in the old list 23b. Similarly, a second descriptor 18a(ii) is matched to descriptor 18b(ii). This process is repeated for each descriptor 18 in the new list 23a, up to and including matching 18a(iii) with 18b(iii). The lists 23 need not be exactly the same size and only the closest matches will be used in the subsequent step.
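A sketch of this nearest-descriptor matching is given below, assuming Euclidean distance between the quantised descriptor values; the patent does not fix a particular distance measure.

```python
import numpy as np

def match_descriptors(new_list, old_list):
    """For each descriptor in the new list 23a, find the closest descriptor in the
    old list 23b. Returns (new_index, old_index, distance) triples sorted so that
    only the closest matches need be used in the subsequent fitting step."""
    matches = []
    for i, new in enumerate(new_list):
        dists = [np.linalg.norm(new.values.astype(np.float32) -
                                old.values.astype(np.float32))
                 for old in old_list]
        j = int(np.argmin(dists))
        matches.append((i, j, dists[j]))
    matches.sort(key=lambda m: m[2])
    return matches
```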
A smaller group (three in the current example) of matched descriptors (18a and 18b) is chosen randomly and a point-set fit of the centres 19 of the matched descriptors is calculated to estimate the rotation, translation and scaling between the centres in the old list 23b and the new list 23a. Further, a value is calculated representing the error in fitting the points, calculated as the sum of the positional errors whilst compensating with the best estimate for rotation, translation and scale. This process is repeated with random groups of three descriptors from the old and new lists, and the results are recorded and analyzed as a probability distribution function using the kernel density estimation method to obtain the closest rotation, translation, and scale estimate. The new descriptors 18a may then be re-matched, taking into account the closest rotation, translation, and scale estimates, to the positionally closest old descriptors 18b. If the percentage of matching descriptors is greater than a predetermined threshold, the user will be authenticated.
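A simplified sketch of the fitting step follows: it fits a similarity transform (rotation, uniform scale, translation) to random triplets of matched centres and keeps the lowest-error estimate, whereas the patent analyses the trials as a probability distribution using kernel density estimation; that refinement is omitted here and the trial count is an assumed value.

```python
import random
import numpy as np

def fit_similarity(src, dst):
    """Least-squares rotation, uniform scale and translation mapping the (n, 2)
    points src onto dst; also returns the sum of the positional errors."""
    mu_s, mu_d = src.mean(axis=0), dst.mean(axis=0)
    src_c, dst_c = src - mu_s, dst - mu_d
    U, S, Vt = np.linalg.svd(dst_c.T @ src_c / len(src))
    d = np.ones_like(S)
    if np.linalg.det(U @ Vt) < 0:
        d[-1] = -1.0                      # guard against reflections
    R = U @ np.diag(d) @ Vt
    scale = (S * d).sum() * len(src) / (src_c ** 2).sum()
    t = mu_d - scale * R @ mu_s
    error = np.linalg.norm(dst - (scale * (src @ R.T) + t), axis=1).sum()
    return R, scale, t, error

def estimate_alignment(matched_centres, n_trials=200):
    """matched_centres: list of (new_centre_xy, old_centre_xy) pairs from the
    matching step. Random triplets are fitted repeatedly; the lowest-error
    estimate stands in for the kernel-density mode used in the patent."""
    best = None
    for _ in range(n_trials):
        group = random.sample(matched_centres, 3)
        dst = np.array([g[0] for g in group], dtype=np.float64)  # new centres
        src = np.array([g[1] for g in group], dtype=np.float64)  # old centres
        estimate = fit_similarity(src, dst)
        if best is None or estimate[3] < best[3]:
            best = estimate
    return best  # (rotation matrix, scale, translation, fitting error)
```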
It is envisaged that the invention will provide a method and system for eye biometric authentication which is robust and can be performed on consumer devices such as smart phones, tablets or computers with webcams. It is further envisaged that the method and system will provide an alternative to existing biometric methods and systems. It is further envisaged that the method will provide means of extracting portions of the image containing the sclera and/or parts of the vascular structures of the eye with efficiency and accuracy.
The invention is not limited to the precise details as described herein. For example, instead of prompting the user to direct his or her gaze in the appropriate direction using audible prompts, the user may be directed to follow a moving dot on a tablet screen or be given written instructions. Further, instead of using a single image to compile a single descriptor list, a number of images of eye(s) may be used.

Claims

1. A method of authenticating a person comprising the steps of:
obtaining an image which includes an eye of the person in the image;
determining the side and direction of gaze of the eye;
extracting a portion of the image;
detecting features in the extracted portion;
comparing the detected features to stored features relating to the person; and
authenticating the person if the detected features match the stored features;
the method being characterized in that
* the step of extracting a portion of the image is preceded by the step of determining the size and position of an iris of the eye; and
* the portion of the image is extracted on a side of the iris opposing the direction of gaze with the size and position of the extracted portion of the image being based on the size and position of the iris.
2. The method according to claim 1 wherein the extracted portion includes part of the white of the eye.
3. The method according to claims 1 or 2 wherein the size and position of both irises are determined.
4. The method according to claim 3 wherein the size and position of the irises are used to estimate one or more of the pose, tilt, position, and angles of a head of the person.
5. The method according to claims 1 to 4 wherein an eye is detected prior to detecting the iris.
6. The method according to claims 1 to 4 wherein an eye pair is detected.
7. The method according to claims 1 to 6 wherein the method includes the step of detecting edges in the image portion.
8. The method according to claims 1 to 6 wherein the method includes the step of detecting corners in the image portion.
9. The method according to claim 7 wherein the edges are evaluated to locate partial circles in the image portion.
10. The method according to any of the preceding claims wherein the iris is detected by
evaluating the average intensity of pixels on arcs of increasing radial distance from a source pixel being evaluated;
increasing the radial distance from an initial distance to a final distance from the source pixel;
recording the average intensities of pixels on each arc between the initial distance and the final distance;
changing the ellipticality of the arcs and repeating the previous steps;
evaluating the recorded average intensities to find an increase in intensity by evaluating the derivative of the average intensities against the distance from the evaluated pixel and finding a maximum therein;
comparing the found maxima of the source pixels to each other and recording the position and distance of the highest maximum among the source pixels; and
using the source pixel with the highest local maximum as the position of the iris and using the distance from the source pixel to the radial distance at which the maximum occurs as the size of the iris.
11. The method of claim 10 wherein the radial distance is measured in pixels.
12. The method of claims 10 or 11 wherein the arcs are portions of a perimeter of an ellipse, with a height equal to the radial distance, between an upper angle and a lower angle.
13. The method according to claim 12 wherein the ellipse is a circle.
14. The method according to claim 12 wherein the angles are measured from a horizontal, the upper angle is between the horizontal and an upper extremity of the arc, and the lower angle is between the horizontal and the lower extremity of the arc.
15. The method according to claims 12 to 14 wherein the angles are between 10 degrees and 60 degrees.
16. The method according to claims 10 to 15 wherein the ellipticality of the arcs is changed by changing the ratio of the width to the height of an ellipse of which the arc forms part.
17. The method according to claim 16 wherein the ratio is changed between a value of 1, wherein the width and height of the ellipse are equal, and 0.5, wherein the width is half of the height of the ellipse.
18. The method according to any of the preceding claims wherein a rectangular portion on a side opposing the gaze of the person is extracted, an inner edge of the rectangle being at a distance equal to half the height of the iris, an outer edge being at a distance of twice the height from the inner edge, and the upper and lower edges being separated by a distance equal to the height.
19. The method according to any of the preceding claims wherein the method includes the step of extracting circular image portions around the detected features.
20. The method according to any of the preceding claims wherein descriptors are generated to describe the features.
21. The method according to claim 20 wherein the descriptors are stored as intensity values including a location and size of the feature relative to the image portion.
22. The method according to claim 20 or 21 wherein the method includes the step of compiling a descriptor list relating to the person.
23. The method according to any of the preceding claims wherein the person is authenticated if the number of descriptors in a first list matching descriptors in a second list is above a predetermined threshold.
24. A system for eye biometric authentication comprising:
an electronic device including a processor and memories;
the device including a capturing device for capturing a series of images; and
stored instructions for executing the method of authenticating a person of any one of the preceding claims.
25. A method of detecting an eye in an image comprising the steps of:
providing a number of images of facial features;
each image including the relative location of an iris of the eye;
detecting partial circles in the image by:
* locating points having brightness gradients greater than a predetermined threshold in the image;
* projecting lines from a number of proximate points,
* the lines including information relating to the distance from the points;
* determining intersections of portions of lines at proximate distances;
* calculating a centre and size of a partial circle formed by the points;
comparing the image portions in circles constructed from the partial circles with the images of facial features; and
calculating the position of the eye from the relative locations of the iris relating to the images of the facial features.
26. The method of claim 25 wherein the method includes the step of compiling a number of images of facial features.
27. A method of authenticating a person substantially as described herein and as illustrated in the accompanying drawings.
PCT/IB2014/065366 2013-10-16 2014-10-16 Method of authenticating a person WO2015056210A2 (en)

Applications Claiming Priority (4)

Application Number Priority Date Filing Date Title
ZA2013/07699 2013-10-16
ZA201307699 2013-10-16
ZA201404229 2014-06-09
ZA2014/04229 2014-06-09

Publications (2)

Publication Number Publication Date
WO2015056210A2 (en) 2015-04-23
WO2015056210A3 (en) 2015-11-26

Family

ID=51866287

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/IB2014/065366 WO2015056210A2 (en) 2013-10-16 2014-10-16 Method of authenticating a person

Country Status (1)

Country Link
WO (1) WO2015056210A2 (en)


Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6144754A (en) * 1997-03-28 2000-11-07 Oki Electric Industry Co., Ltd. Method and apparatus for identifying individuals

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6665426B1 (en) 2002-01-29 2003-12-16 West Virginia University Research Corporation Method of biometric identification of an individual and associated apparatus
US7327860B2 (en) 2005-05-04 2008-02-05 West Virginia University Conjunctival scans for personal identification
US8369595B1 (en) 2012-08-10 2013-02-05 EyeVerify LLC Texture features for biometric authentication

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US10460446B2 (en) 2017-10-16 2019-10-29 Nant Holdings Ip, Llc Image-based circular plot recognition and interpretation
US11688060B2 (en) 2017-10-16 2023-06-27 Nant Holdings Ip, Llc Image-based circular plot recognition and interpretation

Also Published As

Publication number Publication date
WO2015056210A3 (en) 2015-11-26

Similar Documents

Publication Publication Date Title
US9836643B2 (en) Image and feature quality for ocular-vascular and facial recognition
US9785823B2 (en) Systems and methods for performing fingerprint based user authentication using imagery captured using mobile devices
US9922238B2 (en) Apparatuses, systems, and methods for confirming identity
JP7242528B2 (en) Systems and methods for performing fingerprint user authentication using images captured using mobile devices
US9864756B2 (en) Method, apparatus for providing a notification on a face recognition environment, and computer-readable recording medium for executing the method
KR100374708B1 (en) Non-contact type human iris recognition method by correction of rotated iris image
US9710691B1 (en) Touchless fingerprint matching systems and methods
Kawulok et al. Precise multi-level face detector for advanced analysis of facial images
US10922399B2 (en) Authentication verification using soft biometric traits
JP5377580B2 (en) Authentication device for back of hand and authentication method for back of hand
Ng et al. An effective segmentation method for iris recognition system
WO2015056210A2 (en) Method of authenticating a person
Amjed et al. Noncircular iris segmentation based on weighted adaptive hough transform using smartphone database
Jonsson et al. Learning salient features for real-time face verification
US11544961B2 (en) Passive three-dimensional face imaging based on macro-structure and micro-structure image sizing
KR101887756B1 (en) System for detecting human using the projected figure for eye
Dixit et al. SIFRS: Spoof Invariant Facial Recognition System (A Helping Hand for Visual Impaired People)
Devi et al. REAL TIME FACE LIVENESS DETECTION WITH IMAGE QUALITY AND TEXTURE PARAMETER.
Chiara Design and Development of multi-biometric systems
Galdi Design and development of multi-biometric systems
TW201909032A (en) Finger vein identification method

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 14793897

Country of ref document: EP

Kind code of ref document: A2

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 14793897

Country of ref document: EP

Kind code of ref document: A2