US20180322343A1 - Collecting and targeting marketing data and information based upon iris identification - Google Patents


Info

Publication number
US20180322343A1
US20180322343A1
Authority
US
United States
Prior art keywords
iris
person
image
electronic
illuminator
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Abandoned
Application number
US16/036,023
Inventor
Steven N. Perna
Mark A. Clifton
Jongjin KIM
Bobby S. Varma
Stephen J. Piro
Barry E. Mapen
Kevin P. Richards
David Alan Ackerman
Ann-Marie Lanzillotto
David J. Wade
Timothy J. Davis
Michael P. Fleisch
Jitendra J. Bhangley
Glen J. Van Sant
John Timothy Green
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Princeton Identity Inc
Original Assignee
Princeton Identity Inc
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Priority claimed from US14/509,356 (now U.S. Pat. No. 9,836,647)
Application filed by Princeton Identity Inc filed Critical Princeton Identity Inc
Priority to US16/036,023
Assigned to SRI INTERNATIONAL reassignment SRI INTERNATIONAL ASSIGNMENT OF ASSIGNORS INTEREST (SEE DOCUMENT FOR DETAILS). Assignors: FLEISCH, MICHAEL P., PIRO, STEPHEN J., KIM, Jongjin, PERNA, STEVEN N., ACKERMAN, DAVID ALAN, BHANGLEY, JITENDRA J., CLIFTON, MARK A., DAVIS, TIMOTHY J., LANZILLOTTO, ANN-MARIE, MAPEN, BARRY E., RICHARDS, KEVIN P., VAN SANT, GLEN J., VARMA, BOBBY S., WADE, DAVID J.
Assigned to PRINCETON IDENTITY, INC. reassignment PRINCETON IDENTITY, INC. ASSIGNMENT OF ASSIGNORS INTEREST (SEE DOCUMENT FOR DETAILS). Assignors: SRI INTERNATIONAL
Assigned to PRINCETON IDENTITY, INC. reassignment PRINCETON IDENTITY, INC. ASSIGNMENT OF ASSIGNORS INTEREST (SEE DOCUMENT FOR DETAILS). Assignors: GREEN, JOHN TIMOTHY
Publication of US20180322343A1
Legal status: Abandoned

Classifications

    • G06K9/00604
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06Q INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q30/00 Commerce
    • G06Q30/02 Marketing; Price estimation or determination; Fundraising
    • G06Q30/0241 Advertisements
    • G06Q30/0251 Targeted advertisements
    • G06Q30/0269 Targeted advertisements based on user profile or attribute
    • G06Q30/0271 Personalized advertisement
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00 Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/50 Information retrieval; Database structures therefor; File system structures therefor of still image data
    • G06F16/58 Retrieval characterised by using metadata, e.g. metadata not derived from the content or metadata generated manually
    • G06F16/583 Retrieval characterised by using metadata, e.g. metadata not derived from the content or metadata generated manually using metadata automatically derived from the content
    • G06F16/5838 Retrieval characterised by using metadata, e.g. metadata not derived from the content or metadata generated manually using metadata automatically derived from the content using colour
    • G06F17/30256
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00 Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01 Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/011 Arrangements for interaction with the human body, e.g. for user immersion in virtual reality
    • G06F3/013 Eye tracking input arrangements
    • G06K9/0061
    • G06K9/00617
    • G06K9/2027
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10 Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/18 Eye characteristics, e.g. of the iris
    • G06V40/19 Sensors therefor
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10 Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/18 Eye characteristics, e.g. of the iris
    • G06V40/193 Preprocessing; Feature extraction
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10 Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/18 Eye characteristics, e.g. of the iris
    • G06V40/197 Matching; Classification
    • G07C9/00158
    • G PHYSICS
    • G07 CHECKING-DEVICES
    • G07C TIME OR ATTENDANCE REGISTERS; REGISTERING OR INDICATING THE WORKING OF MACHINES; GENERATING RANDOM NUMBERS; VOTING OR LOTTERY APPARATUS; ARRANGEMENTS, SYSTEMS OR APPARATUS FOR CHECKING NOT PROVIDED FOR ELSEWHERE
    • G07C9/00 Individual registration on entry or exit
    • G07C9/30 Individual registration on entry or exit not involving the use of a pass
    • G07C9/32 Individual registration on entry or exit not involving the use of a pass in combination with an identity check
    • G07C9/37 Individual registration on entry or exit not involving the use of a pass in combination with an identity check using biometric data, e.g. fingerprints, iris scans or voice recognition
    • Y GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y10 TECHNICAL SUBJECTS COVERED BY FORMER USPC
    • Y10T TECHNICAL SUBJECTS COVERED BY FORMER US CLASSIFICATION
    • Y10T70/00 Locks
    • Y10T70/60 Systems
    • Y10T70/625 Operation and control

Definitions

  • iris recognition-based biometric devices impose strict requirements on the iris image capture process in order to meet the needs of iris biometric analysis. For example, many existing devices can only utilize images that have a clear, straight-on view of the iris. In order to obtain such images, existing devices typically require the human subject to be stationary and located very near to the iris image capture device.
  • FIG. 1 depicts a simplified block diagram of at least one embodiment of an iris processor for biometric iris matching, including a pre-processor as disclosed herein;
  • FIG. 2 depicts a simplified block diagram of at least one embodiment of the pre-processor of the iris processor of FIG. 1;
  • FIG. 3A depicts a simplified graphical plot illustrating an effect of camera illumination on pupil and iris intensity as disclosed herein;
  • FIG. 3B depicts an illustration of a result of the operation of the pre-processor of FIG. 2;
  • FIG. 3C depicts an illustration of another result of the operation of the pre-processor of FIG. 2, with an alternate image;
  • FIG. 3D depicts a simplified illustration of yet another result of the operation of the pre-processor of FIG. 2, with yet another alternate image;
  • FIG. 4A depicts a simplified flow diagram for at least one embodiment of a method for edge detection, which may be performed by the iris processor of FIG. 1;
  • FIG. 4B shows simplified examples of candidate pupil contour curves as disclosed herein;
  • FIG. 4C depicts a simplified flow diagram for at least one embodiment of a method for corneal distortion correction, which may be performed by the iris processor of FIG. 1;
  • FIG. 4D illustrates a simplified result of correction for foreshortening as disclosed herein;
  • FIG. 5 depicts a simplified block diagram of at least one embodiment of a coding processor as disclosed herein;
  • FIG. 6 depicts a simplified example of at least one embodiment of a multiresolution iris code as disclosed herein;
  • FIG. 7 depicts a simplified block diagram of at least one embodiment of a matching processor as disclosed herein;
  • FIG. 8 depicts a simplified example of at least one embodiment of a process for matching iris codes, which may be performed by the matching processor of FIG. 7;
  • FIG. 9 is a simplified schematic depiction of a coarse-fine algorithm to estimate the flow field of an iris code, as disclosed herein;
  • FIG. 10 is a simplified flow diagram depicting at least one embodiment of a method for estimating the flow field between two iris codes, as disclosed herein;
  • FIG. 11 is a simplified flow diagram depicting at least one embodiment of another method for estimating the flow field between two iris codes, as disclosed herein;
  • FIG. 12 depicts a simplified schematic diagram of at least one embodiment of a computer system for implementing the iris processor of FIG. 1, as disclosed herein;
  • FIG. 13 illustrates at least one embodiment of the iris processor of FIG. 1 in an exemplary operating scenario, as disclosed herein;
  • FIG. 14 is a simplified assembled perspective view of at least one embodiment of an iris biometric recognition module;
  • FIG. 15 is an exploded perspective view of the iris biometric recognition module of FIG. 14;
  • FIG. 16 is a simplified schematic diagram showing components of an iris biometric recognition module and an access control module in an environment of the access control assembly of FIG. 14;
  • FIG. 17 is a simplified flow diagram of at least one embodiment of a method for performing iris biometric recognition-enabled access control as disclosed herein, which may be performed by one or more components of the iris biometric recognition module of FIG. 14;
  • FIG. 18 is a simplified block diagram of at least one embodiment of a system including an iris biometric recognition module as disclosed herein;
  • FIG. 19 is a simplified view of at least one embodiment of an iris biometric recognition-enabled access control assembly in an exemplary operating environment (i.e., a mobile device);
  • FIG. 20 is an exemplary flowchart depicting a method of collecting and targeting marketing data and information based upon iris identification.
  • FIGS. 1-13 relate to subject matter that is shown and described in U.S. Utility patent application Ser. No. 14/100,615, filed Dec. 9, 2013, and U.S. Utility application Ser. Nos. 14/509,356 and 14/509,366, both filed Oct. 8, 2014.
  • FIG. 1 depicts a block diagram of an iris processor 100 for biometric iris matching in accordance with exemplary embodiments of the present invention.
  • the iris processor 100 comprises a pre-processor 102 , a coding processor 104 and a matching processor 106 .
  • the iris processor 100 receives images as input, for example, input image 101 and outputs a matched iris 108 from a remote or local database.
  • the database may be accessed as a “cloud” service, directly through an internet connection, or the like.
  • the pre-processor 102 , the coding processor 104 and the matching processor 106 may execute on a single device (e.g., within a software application running on, for example, a mobile device, having captured the images via a camera and/or means of illumination integrated into the mobile device), or on different devices, servers, cloud services or the like, as indicated by the dashed outline of the iris processor 100 .
  • the iris processor 100 may be modular and each processor may be implemented, e.g., on a single device, multiple devices, in the cloud as a service. Any of the components, e.g., the pre-processor 102 , the coding processor 104 , and the matching processor 106 , may be implemented or used independently of one another.
  • the input image 101 is an infrared image, and is captured by an infrared capture device (not shown in FIG. 1 ), coupled to the iris processor 100 .
  • the infrared capture device may be any type of infrared capture device known to those of ordinary skill in the art.
  • the input image 101 is a red, green, blue (RGB) image, or the like.
  • the input image 101 contains an eye with an at least partially visible iris and pupil and the iris processor 100 attempts to match that eye with an iris of an eye image in a local or remote database of eye images.
  • irises are matched based on Hamming distances between two coded iris images.
  • the input image 101 is processed by the pre-processor 102 .
  • the pre-processor 102 segments and normalizes the iris in the input image 101 , where input image 101 may have variable iris/pupil and iris/sclera contrast, small eyelid openings, and non-frontal iris presentations.
  • the result of the pre-processor 102 is a modified iris image with clearly delineated iris boundaries and synthesized quasi-frontal presentation. For example, if the iris in the input image 101 is rotated towards the left, right, up or down, the pre-processor 102 will synthesize an iris on the input image 101 as if it was positioned directly frontally. Similarly, a frontally positioned pupil will be synthesized on the skewed or rotated pupil of the input image 101 .
  • the coding processor 104 analyzes and encodes iris information from the iris image generated by the pre-processor 102 at a range of spatial scales so that structural iris information contained in the input image 101 of varying resolution, quality, and state of focus can be robustly represented.
  • the information content of the resulting code will vary depending on the characteristics of input image 101 .
  • the code generated by the coding processor 104 representing the input image 101 allows spatial interpolation to facilitate iris code alignment by the matching processor 106 .
  • the output code from the coding processor 104 is coupled to the matching processor 106 .
  • the matching processor 106 incorporates constrained active alignment of iris structure information between stored iris images and captured iris codes generated from the input image 101 to compensate for limitations in iris image normalization by the pre-processor 102 .
  • the matching processor 106 performs alignment by performing local shifting or warping of the code to match the generated code with a stored iris code template based on estimated residual distortion of the code generated by the coding processor 104 . According to some embodiments, a “barrel shift” algorithm is employed to perform the alignment. Accordingly, structural correspondences are registered and the matching processor 106 compares the aligned codes to determine whether a match exists. If a match is found, the matching processor returns matched iris data 108 .
  • the matched iris data 108 may be used in many instances, for example: to authenticate a user so that the user may gain access to a secure item (e.g., a safe, safety deposit box, computer, etc.); to authenticate a user to access applications or wireless communication within a computing device, such as a mobile device (e.g., authenticating a user in order to transmit instructions from the user's mobile device to an automated teller machine (ATM) in order to complete a financial transaction); to authorize financial transactions; and/or to collect, analyze and display an identity of a user in order to deliver targeted marketing to the user, as described in detail below.
  • the pre-processor 102 may be an application executing on any device, for example, a mobile device, such as a mobile phone, camera, tablet, forward or rear facing camera integrated into a mobile phone or tablet, a display or marquee, or the like.
  • the pre-processor 102 on the device may capture an image of a user's eye using the camera of the device, perform the pre-processing steps on the device, and then transmit a bundled and encrypted request to the coding processor 104 , which may be accessed via a cloud service on a remote server.
  • the application may be part of or associated with a display or marquee, which may comprise the coding processor 104 and the iris coding is performed at the display or marquee.
  • the iris processor 100 may be used, for example, for collecting and targeting of marketing data based upon iris identification. For example, a customer in a grocery store can be detected and their iris can be stored in a local or remote database. If the customer enters the grocery store again, or an associated store with which the iris information is shared, the store can build a profile of the customer, the items they most often purchase, peruse, or the like by using iris detection and gaze tracking. These marketing profiles can be used by the store itself for product placement, or may be used by third party marketing services as marketing data. In other embodiments, the customer profile can be matched with identifying information, and when the customer uses a website affiliated with the store, or a website, which has access to the iris data, the website identifies the customer and offers targeted marketing to the customer.
  • the iris processor 100 may be used to collect iris biometric authentication data from, as well as display targeted marketing data to, specific users.
  • the iris biometric recognition module 1514 may be positioned in a public area (e.g., as an electronic billboard or marquee at a bus stop, subway stop, or any other public area), possibly incorporated into a camera or video advertisement display, as non-limiting examples, and may collect biometric iris data, possibly to be used as templates, from each subject (e.g., a passerby or a subject in the vicinity of the advertisement) from which the iris biometric recognition module 1514 is able to collect such data.
  • biometric identification templates may be stored in a local, remote and/or cloud database, in association with the subject's identity, if the identity of the person is known.
  • the iris biometric recognition module 1514 and/or hardware (e.g., camera, video advertisement display, illuminator, etc.) and/or software coupled to the iris biometric recognition module 1514 may identify the location of the subject and the time that the subject was recognized, and store this data in association with the biometric identification of the subject.
  • the iris biometric recognition module 1514 and/or the coupled hardware and/or software may identify specific applications accessed by a user, a time when the mobile device was unlocked and/or when the application was accessed, or any other information that would track habits of a user.
  • Additional data may be associated with the subject or user as well, possibly including, as non-limiting examples, personal habits (e.g., person rides the bus or subway, person takes the subway every morning or evening at a particular time, person takes the bus every morning or evening at a particular time), or hobbies (e.g., subject enjoys running, jogging or walking).
  • This additional data may be stored in the same database as the biometric iris identification templates or in a separate database.
  • the hardware coupled to the iris biometric recognition module 1514 may include one or more electronic video advertisement displays, possibly set up in public places such as airports, malls or shopping centers, stadiums, subways, bus terminals, casinos, amusement parks, childcare facilities, cruise ships, detoxification centers, drug testing collection centers, entertainment facilities, health clubs, gyms, spas, hospitals, hotels, motels, medical labs or facilities, on-site or off-site testing facilities, pharmacies, ski lifts, sporting events or centers, tradeshows, conferences, conventions, transit centers, or any other public place(s).
  • the camera and illuminator, coupled to the iris biometric recognition module 1514 and described below, may be integrated into the electronic video advertisement display. In other embodiments, the camera and/or illuminator may be mounted on the electronic video advertisement display.
  • upon capturing the iris of a subject, the system attempts to match the iris of the subject with iris identification templates in the database. Data associated with the subject may be accessed upon matching of the iris with an iris identification template, and advertisements appropriate for the subject may be displayed on the electronic video display. For example, if it is determined that a subject or group of subjects, each of whom takes the subway each morning at 7 AM, is interested in running or jogging, the electronic video display may include advertisements for running shoes at that particular time. Additional embodiments may exist where an electronic advertisement specific to the subject is displayed on any screen, including airplane screens, gas station screens, and/or any other public or private screen.
  • the system may additionally include logic to determine different categories of subjects that may be in viewing distance from an electronic video display during different periods of time throughout an hour, day, month, year, or any other suitable period of time.
  • the logic may select different advertisements from a selection or database of advertisements based on the category of subjects in viewing distance from the electronic display at a particular time.
  • in some embodiments, different versions of the same advertisement may be selected (e.g., an apparel store advertisement showing different apparel on the different advertisements, based on the different categories of subjects).
  • iris biometric recognition module 1514 may customize the user experience to each recognized user. So, for example, if a particular user always unlocks the mobile device (possibly at a certain time of day), and that user is authenticated via the biometric recognition module 1514 , the user experience may be customized to that user. Likewise, specific applications accessed by that user may be provided and/or may provide customized advertising (e.g., in-app advertising specific to the user, such as running shoe advertisements if the user enjoys running or jogging).
  • the user experience may be customized in a first fashion for a first user, for example, a parent may be allowed access to particular applications, and the user experience may be customized in a second fashion for a second user, for example, a child may have access to different applications or a subset of applications.
  • the iris biometric recognition module 1514 may be used to determine the presence of a subject at a specific time, verify the presence of the person in close proximity to the video display device, determine which advertisements receive the most attention from the user, and the like. Analysis of this collected data may be used to guide marketing decisions.
  • an iris scanning system may be installed in a public location, for example, co-located with an advertisement (Block 2200 ) or on, for example, a mobile device.
  • iris images may be collected (Block 2204 ). These images may be compared to a historical database (e.g., templates in a historical database) (Block 2206 ). If no match for the iris is found, the hardware/software may collect and store data and/or other information, and there would be no change in the advertisement (Block 2208 ).
  • the advertisement may be tailored to the individual (Block 2210 ), if possible, and the tailored advertisement may be displayed and additional subject data and/or information may be collected (Block 2212 ). If it is not possible to tailor the advertisement to the individual, the advertisement will not be changed and additional subject data and/or information for the individual may be collected (Block 2208 ).
  • the iris processor 100 may be used to determine whether a person accessing particular medical resources, such as medicine, devices, or the like, is permitted to access these resources.
  • the iris processor 100 can be coupled with a recording device that captures video of those accessing a medicine cabinet, for example, and records whether they are authorized to take medical resources from the cabinet.
  • the iris processor 100 may be used as a security system and authentication device by a small company with limited resources. By simply coupling a camera or other image capturing device to an electro/mechanical locking system, the company can limit access to doors, offices, vaults, or the like, to only authorized persons.
  • the iris codes produced by the coding processor 104 can be used to authorize, for example, airline boarding passes. On purchase of a travel (airline, train, bus, etc.) ticket, the coding processor 104 generates an iris code of the purchaser and saves the iris code for imprinting on the boarding pass.
  • the carrier may invoke the matching processor 106 to match the iris code on the boarding pass with the iris code produced by the traveler presenting the boarding pass. If there is a match, the traveler is allowed to board the bus, train or airplane.
  • the iris processor may be used in any context in which the user needs to be authenticated, including any situation in which the user wants physical or electronic access to a device or data accessible via the device.
  • FIG. 2 depicts a block diagram of the pre-processor of the iris processor 100 in accordance with exemplary embodiments of the present invention.
  • the pre-processor receives the input image 101 and outputs a rectified iris image 220 .
  • the rectified iris image 220 corrects for uncontrolled capture scenarios such as ambient illumination conditions, varied illumination geometries, reduced eyelid opening area, presentation angle (obliquity), or the like.
  • the rectified iris image 220 corrects for various nonconformities.
  • the pre-processor 200 comprises a segmentation module 202 and a correction module 204 .
  • the segmentation module 202 further comprises a pupil segmentation module 206 , an iris segmentation module 208 and an edge detection module 209 .
  • the segmentation module 202 corrects an input image for low-contrast pupil and iris boundaries.
  • the image produced by the segmentation module 202 is then coupled to the correction module 204 for further correction.
  • the correction module 204 comprises a tilt correction module 210 and a corneal correction module 212 . The details of the segmentation module 202 are described below.
  • FIG. 3A illustrates that varying illumination geometry produces varying pupil appearance.
  • FIG. 3A illustrates measurement of the pupil-iris intensity difference as a function of distance, e.g., 1 and 2 meters, pupil size, e.g., 2.4 mm and 4.0 mm, and camera/illuminator distance, e.g., 6 to 16 cm. As the camera/illuminator distance increases, the pupil-iris intensity difference decreases. The contrast of the pupil varies greatly as a function of the distance between camera and subject, as well as of illuminator geometry and pupil diameter. The variation with distance is due to the fact that the angular distance between the illuminator and camera axes is greater at short range (e.g., 1 m) than at longer distances.
  • the segmentation module 202 and the correction module 204 may be used, for example, in the medical field, in targeted marketing, customer tracking in a store, or the like.
  • pupil and iris insertion may be performed by the pre-processor 102 , as described further with respect to FIGS. 2 and 3A-3D , in the medical field as a diagnostic tool for diagnosing diseases that a person might have based on their iris profiles.
  • FIG. 3B illustrates an example of iris and pupil boundary matching in accordance with exemplary embodiments of the present invention.
  • iris diameters are normalized by the iris segmentation module 208 . Size normalization is performed using a range estimate derived from an autofocus setting of the camera taking the image.
  • the image 300 shows the pupil boundary 304 calculated by the pupil segmentation module 206 .
  • the pupil segmentation module 206 then inserts an artificial dark pupil in the pupil boundary 304 in image 300 .
  • Image 300 is then coupled to the iris segmentation module 208 , which calculates the iris boundary.
  • FIGS. 3C and 3D illustrate examples of inserted artificial pupils and iris boundaries.
  • input image 320 is coupled to the pre-processor 200 .
  • the input image 320 is then segmented by pupil segmentation module 206 to calculate a pupil boundary region 326 .
  • the pupil segmentation module then inserts an artificial black colored pupil in the pupil boundary region 326 .
  • oblique irises and pupils are warped to be circular.
  • the insertion of an artificial pupil in the pupil boundary region 326 may be used, for example, to remove red-eye effects in an image captured by a camera.
  • the segmentation module 202 can be used to segment the pupil and iris areas, and the pupils may be red-eye corrected by insertion of the artificial pupil. This process of segmentation and warping is described in more detail below.
  • FIG. 3D shows a similar process but on a downward facing iris in image 350 .
  • the pupil boundary 356 is still detected despite being occluded by the eyelid in image 352 .
  • the pupil and iris are both warped to form circular regions to aid in segmentation.
  • the pupil segmentation module 206 inserts a black disk/artificial pupil in the image 352 and couples the image 352 to the iris segmentation module 208 .
  • the iris segmentation module 208 determines an iris boundary 358 .
  • the iris and pupil boundaries are corrected for various lighting conditions and presented in image 354 , where region 360 can be seen with the artificial pupil.
  • the artificial pupil need not necessarily be black and may be another suitable color, based on compatibility with third-party iris recognition software.
  • the pupil boundaries, for example, 304, 326 and 356, and the iris boundaries (iris/sclera boundary areas), for example, 306, 328 and 358, are calculated using a Hough transform, according to one embodiment.
  • the pupil segmentation module 206 and the iris segmentation module 208 employ edge detection, using the edge detection module 209, to generate edge maps that work for varying scales of grayscale pupils, even in instances with low edge contrast.
  • once the pupil segmentation module 206 determines the segmented pupil area (and therefore the pupil contour) and the pupil and iris have been warped to form circular regions, the segmented pupil area is replaced with a black or dark disk to simulate the appearance of a dark pupil.
  • FIG. 4A depicts a flow diagram for a method 400 for edge detection in accordance with one embodiment of the present invention.
  • the method 400 is an exemplary illustration of the operation of the edge detection module 209 used to detect pupil and iris boundaries.
  • an edge map is generated from an image of an eye, for example, input image 101 .
  • An exemplary edge map for an iris image which was brightly illuminated is shown in FIG. 4B, image 420.
  • Image 422 is an edge map for an iris image which was not as brightly illuminated, resulting in an indistinct pupil whose edges are not as clearly visible as those in image 420.
  • at step 406, candidate pupil contours are constructed for the given edge map.
  • Step 406 consists of sub-steps 406A and 406B.
  • a first candidate pupil contour is created from a best fitting circle, as shown in FIG. 4B, image 420.
  • a Hough transform or RANSAC (random sample consensus) method can be used to find the circle that has the greatest level of support in the edge map in the sense that the largest fraction of circle points for that circle coincide with edge points.
  • a second candidate pupil contour is constructed from a best inscribed circle, as shown in FIG. 4B, image 422.
  • an inscribed circle is a circle that can be drawn in an area/region of the edge map so that no edge points (or no more than a specified small number of edge points) lie within the circle.
  • the best inscribed circle is the largest such inscribed circle that can be found in the area/region of the pupil. The method then proceeds to step 408, where the method 400 determines the best matching candidate pupil contour from the first and second candidate pupil contours for the edge map.
  • the best match is determined by assessing a level of support for the best fitting circle and selecting the best fitting circle as the best match if this level of support is above a threshold value. The best inscribed circle is selected as the best match if the level of support for the best fitting circle is below a threshold value.
  • an automatic process based on how well the best fit contour (circle) matches the edge contour in the edge contour map is used to decide which candidate contour to choose. For example, for the best supported circle described above, a subset of edge points can be selected that is limited to those edge points whose angular orientation is consistent with that edge point being a part of the candidate circle. In other words, only edge points whose direction is approximately perpendicular to the direction from the estimated center of the candidate circle are included. This process eliminates from consideration those edge points that may accidentally fall at the correct position to be part of the circle but that do not correspond to the actual circle contour. If the proportion of such selected edge points is greater than some specified fraction, the level of support for that circle is deemed to be sufficient and the best fitting circle is selected. If the level of support by the selected edge points is less than this threshold, then the best fitting circle is deemed to have insufficient support and the best inscribed circle is selected instead.
  • the best fit candidate contour will provide accurate pupil segmentation in the bright pupil image, as shown in FIG. 4B, image 420, where the bright colored eye edge map is overlaid with the best inscribed circle 430 and the best fitting circle 432. The method then terminates at step 412 when a best matching candidate pupil contour is found.
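The two-candidate selection logic above lends itself to a compact illustration. The following Python sketch is not the patent's implementation: the RANSAC-style circle fit, the simplified inscribed-circle estimate (which assumes a roughly known pupil-region center), and all function names and thresholds (`support_thresh`, `tol`) are illustrative assumptions.

```python
import numpy as np

def ransac_circle(edge_pts, iters=500, tol=1.5, rng=None):
    """Best fitting circle: the circle with the greatest edge-point support."""
    rng = rng if rng is not None else np.random.default_rng(0)
    best, best_support = None, 0
    for _ in range(iters):
        p1, p2, p3 = edge_pts[rng.choice(len(edge_pts), 3, replace=False)]
        # Solve the linear system for the circle through three edge points.
        a = np.array([[2 * (p2[0] - p1[0]), 2 * (p2[1] - p1[1])],
                      [2 * (p3[0] - p1[0]), 2 * (p3[1] - p1[1])]])
        b = np.array([p2 @ p2 - p1 @ p1, p3 @ p3 - p1 @ p1])
        if abs(np.linalg.det(a)) < 1e-9:
            continue  # degenerate (near-collinear) sample
        center = np.linalg.solve(a, b)
        r = np.linalg.norm(p1 - center)
        support = np.sum(np.abs(np.linalg.norm(edge_pts - center, axis=1) - r) < tol)
        if support > best_support:
            best, best_support = (center, r), support
    return best, best_support / len(edge_pts)

def best_inscribed_circle(edge_pts, region_center, max_r):
    """Largest circle around the pupil region containing no edge points."""
    d = np.linalg.norm(edge_pts - np.asarray(region_center, float), axis=1)
    return np.asarray(region_center, float), min(d.min(), max_r)

def pupil_contour(edge_pts, region_center, max_r, support_thresh=0.2):
    """Choose the best fitting circle if well supported, else fall back."""
    circle, support = ransac_circle(np.asarray(edge_pts, float))
    if circle is not None and support >= support_thresh:
        return circle  # distinct pupil boundary (bright-pupil case)
    return best_inscribed_circle(edge_pts, region_center, max_r)
```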
  • iris images may be captured over a range of oblique viewing conditions, for example, where gaze deviation with nasal gaze angles ranges from 0 to 40 degrees, as shown in FIG. 3D .
  • the tilt correction module 210 rectifies the images for this tilt and generates a tilt corrected image.
  • a tilt-corrected image may be generated by estimating or determining the magnitude and direction/angle of tilt, and then applying a geometric transformation to the iris image to compensate for the oblique viewing angle.
  • the simplest form of this transformation is a stretching of the image in the direction of the tilt to compensate for the foreshortening caused by the angle between the iris and the image plane.
  • Such a non-isotropic stretching is mathematically represented as an affine transformation.
  • a more accurate version of this geometric de-tilting replaces the affine transformation with a projective transformation, which better represents the imaging of a pattern on a flat, tilted surface.
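As a worked example of the de-tilting step, the sketch below applies the non-isotropic stretch (the affine approximation described above) using OpenCV; the 1/cos(tilt) stretch factor and the centering choice are assumptions of this sketch, not parameters given in the text.

```python
import cv2
import numpy as np

def detilt_affine(image, tilt_deg, tilt_dir_deg):
    """Approximate de-tilt: stretch by 1/cos(tilt) along the tilt direction.

    Sketch of the affine (non-isotropic stretch) correction; a projective
    warp would model the foreshortening of a flat, tilted surface better.
    """
    h, w = image.shape[:2]
    c, s = np.cos(np.radians(tilt_dir_deg)), np.sin(np.radians(tilt_dir_deg))
    rot = np.array([[c, s], [-s, c]])        # rotate tilt axis onto x-axis
    scale = np.diag([1.0 / np.cos(np.radians(tilt_deg)), 1.0])
    m = rot.T @ scale @ rot                  # stretch along the tilt axis
    center = np.array([w / 2.0, h / 2.0])
    t = center - m @ center                  # keep the image centered
    affine = np.hstack([m, t[:, None]]).astype(np.float32)
    return cv2.warpAffine(image, affine, (w, h))
```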
  • the correction module 204 has several uses independent of the other components of the iris processor 100 .
  • the correction module 204 may be used to detect a person's gaze, or to track a person's gaze continuously by capturing one or more frames of a person's eyes.
  • the tilt correction module 210 may, for example, be used to continuously track a user's gaze on a mobile device and scroll a document, perform a swipe or the like. This tilt detection can be used, for example, independently of the matching processor 106 described in FIG. 1 to enable or disable the display of a mobile device.
  • the correction module 204 corrects the input image 101 prior to the segmentation module establishing artificial pupil discs on the input image 101 .
  • tilt correction may still show distortions such as the apparent eccentric pupil compression of the nasal portion of the iris, causing difficulty in biometrically matching the iris with a stored iris image.
  • the distortion is caused by the optical effect of the cornea and anterior chamber of the human eye through which the iris is imaged. These two structures have similar refractive indexes (1.336 for the aqueous humor that fills the anterior chamber and 1.376 for the cornea) so that together their optical effect is approximately that of a single water-filled plano-convex lens in contact with the iris.
  • the tilt corrected image generated by the tilt correction module 210 is coupled to the corneal correction module 212 , which corrects for the above described corneal distortion.
  • FIG. 4C depicts a flow diagram for a method 440 for corneal distortion correction in accordance with exemplary embodiments of the present invention.
  • the method 440 is an exemplary illustration of the operation of the correction module 204.
  • the method 440 begins at step 442 and proceeds to step 444.
  • the tilt correction module 210 estimates the angle of tilt of the iris with respect to the camera orientation.
  • the tilt can be estimated roughly by finding the pupil center and measuring the distance between that center and the bright reflection in the cornea caused by the near infra-red illuminator used in iris imaging.
  • Other methods of tilt estimation known to those of ordinary skill in the art may also be used. Indeed, any method of tilt estimation may be substituted herein.
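A minimal sketch of the glint-offset tilt estimate just described, assuming the specular reflection marks the on-axis gaze direction and using a crude spherical-eye model; the `eye_radius_px` scale parameter is a stand-in this sketch introduces, not a quantity defined in the text.

```python
import numpy as np

def estimate_tilt(pupil_center, glint_center, eye_radius_px):
    """Rough tilt estimate from the pupil-center-to-glint offset."""
    offset = np.asarray(pupil_center, float) - np.asarray(glint_center, float)
    # Small offset -> near-frontal gaze; larger offset -> larger tilt.
    ratio = np.clip(np.linalg.norm(offset) / eye_radius_px, -1.0, 1.0)
    tilt_deg = np.degrees(np.arcsin(ratio))
    direction_deg = np.degrees(np.arctan2(offset[1], offset[0]))
    return tilt_deg, direction_deg
```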
  • at step 446, the image is corrected for the perspective distortion, i.e., the foreshortening of the iris that occurs when the iris is viewed obliquely.
  • the effect of foreshortening can be approximated as a simple compression of the captured image in the direction of tilt. This effect can therefore be compensated for by simply stretching the image in the direction derived from the tilt estimation step.
  • a more accurate correction can also be performed by using a projective transformation to more precisely capture the foreshortening effect.
  • at step 448, the method 440 corrects for effects of optical distortion due to viewing through the tilted cornea.
  • approximate correction for the optical distortion discussed above can be achieved by measuring and correcting the effects of pupil eccentricity and pupil elongation.
  • the method terminates at step 450 .
  • the pupil still appears shifted to the left with respect to the center of the iris and the pupil appears elongated in the horizontal direction.
  • the corneal correction module 212 corrects for these distortions without modeling the optical elements that produced them by non-linearly warping the iris area/region to force the iris contour 466 and pupil contour 468 to become concentric circles.
  • the corneal correction module 212 creates this nonlinear warping function by defining a set of spokes 470 that connect points on the non-circular pupil contour 468 to corresponding points on the non-circular iris/sclera contour 466 and mapping each spoke of the spokes 470 to a position connecting a synthetic circular pupil contour 472 to a concentric circular iris/sclera contour 474 .
  • the described transformation is then applied to the underlying image 460 .
  • the result of this mapping (with appropriate interpolation) is shown in image 476 . After the pupil and iris areas/regions have been shifted to be in concentric circles, the coding process can be more accurately performed with better matching results.
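The spoke-based warp can be sketched as an inverse mapping: for every pixel of the corrected output, walk back along the spoke at that angle to the corresponding position between the measured pupil and iris contours. In this sketch, `r_pupil` and `r_iris` are assumed to be vectorized callables returning the measured contour radius at each angle, and the output radii `rp_out`/`ri_out` are arbitrary choices; none of these names come from the patent.

```python
import numpy as np

def concentric_warp(img, center, r_pupil, r_iris,
                    out_size=512, rp_out=100, ri_out=250):
    """Warp a 2-D grayscale image so the measured pupil and iris contours
    become concentric circles of radii rp_out and ri_out."""
    cx, cy = center
    oc = out_size / 2.0
    ys, xs = np.mgrid[0:out_size, 0:out_size]
    dx, dy = xs - oc, ys - oc
    theta = np.arctan2(dy, dx)
    # Fractional position along each output spoke (0 = pupil, 1 = iris).
    t = (np.hypot(dx, dy) - rp_out) / float(ri_out - rp_out)
    valid = (t >= 0) & (t <= 1)
    # Radius along the same spoke in the source image (linear in t).
    r_src = r_pupil(theta) + t * (r_iris(theta) - r_pupil(theta))
    sx = np.clip(cx + r_src * np.cos(theta), 0, img.shape[1] - 1).astype(int)
    sy = np.clip(cy + r_src * np.sin(theta), 0, img.shape[0] - 1).astype(int)
    out = np.zeros((out_size, out_size), dtype=img.dtype)
    out[valid] = img[sy[valid], sx[valid]]  # nearest-neighbor sampling
    return out
```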
  • iris coding and matching can be performed using any desired iris biometric algorithm designed to be applied to iris images captured under standard controlled conditions.
  • Daugman, J., "High confidence visual recognition of persons by a test of statistical independence", IEEE Transactions on Pattern Analysis and Machine Intelligence, 15(11), pp. 1148-1161 (1993)
  • methods developed by others can also be used, including but not limited to those of Monro (D. M. Monro and D. Zhang, "An Effective Human Iris Code with Low Complexity", Proc. IEEE International Conference on Image Processing, vol. 3, pp. 277-280, September 2005) and Tan (Tan et al., "Efficient Iris Recognition by Characterizing Key Local Variations", IEEE Transactions on Image Processing, vol. 13, no. 6, June 2004).
  • FIG. 5 depicts a block diagram of a coding processor 500 in accordance with exemplary embodiments of the present invention.
  • the coding processor 500 comprises a coordinate module 502 and an extraction module 506 .
  • the coordinate module 502 constructs an invariant coordinate system for an invariant coordinate system image representation that allows iris information extracted from varying iris images to be brought into register, so that corresponding spatial information can be compared.
  • the extraction module 506 extracts information from the iris image for supporting a strong rejection of the hypothesis that two eye images presented represent statistically independent patterns.
  • the coding processor 500 prepares the segmented and corrected iris image 220 for accurate matching with other iris images and allows unconstrained iris capture applications.
  • image size and focus may vary with distance, in addition to individual iris structure variations and variation with illumination wavelength of spatial information content of an iris structure.
  • iris coding is based on angular frequencies between about 15 and 40 cycles/2pi, or 2.5 and 6 pixels per cycle; according to one embodiment, the present application achieves robust matching, based on the codes generated by the coding processor 500, down to approximately 40 pixels per iris diameter.
  • the coding processor 500 uses a variant of Daugman's local phase representation, which encompasses a multi-resolution coding approach rather than choosing a single scale of analysis. Lower frequency components remain available in lower resolution images and are less prone to loss in defocused or otherwise degraded images.
  • the variant of Daugman's local phase representation allows for dense coding that is useful when dealing with iris images in which significant occlusion may occur.
  • a Daugman-type phase coding approach generates a code that represents all available parts of the iris images. This is in contrast to an approach that uses sparse local features that might be occluded or otherwise unavailable in a particular image to be matched. Further, the use of a multiresolution phase approach preserves the possibility of achieving code-level compatibility with existing phase-based representations. In addition to containing multi-scale information, the code that is created can incorporate additional information to facilitate estimation of iris code alignment and spatial interpolation of local structure information prior to comparison.
  • the coding processor 500 comprises the coordinate module 502 .
  • the coordinate module 502 transforms the rectified iris image 220 into a polar iris image 504 .
  • the pupil boundary appears at the top (notice the specular reflection of a biometric scanner illuminator column) and the iris-sclera boundary area appears at the bottom.
  • the angular dimension runs clockwise from 3 o'clock at the left of the image. Proceeding from left to right, the lower and upper eyelids can be seen. Note that in image 504 the eyelashes extend from the upper eyelid all the way into the pupil.
  • the image 504 is coupled to the extraction module 506 that filters and subsamples the polar iris image 504 to produce a multi-resolution iris code representation 520 , an example of which is shown in FIG. 6 .
  • the image 504 is passed through a series of bandpass filters to produce a set of filtered images.
  • FIG. 6 shows an example of a polar iris image 620 being filtered by filters 621 (Filters 1 . . . 5) and producing an iris code 622 comprising filtered bands 600, 602, 604, 606 and 608, ranging respectively from high-frequency bands to low-frequency bands.
  • the five bands shown correspond to Gabor filter (a linear filter used for harmonic analysis, wavelet decompositions, and edge detection) carrier wavelengths of 6, 8, 12, 16, and 24 pixels with respect to a polar image sampled at 200 pixels around the iris. Therefore, the frequencies correspond approximately to angular spatial frequencies of 33, 25, 16, 12, and 8 cycles per 2pi.
  • the higher frequencies are comparable to those used in standard iris matching algorithms.
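The wavelength-to-frequency arithmetic above is easy to verify, and the filtering itself can be sketched as a 1-D complex Gabor convolution along each row (the angular direction) of the polar image. The bandwidth choice (`sigma_factor`) is an assumption of this sketch; only the carrier wavelengths and the 200-sample width come from the text.

```python
import numpy as np

WIDTH = 200                        # polar samples around the iris (from the text)
WAVELENGTHS = [6, 8, 12, 16, 24]   # Gabor carrier wavelengths in pixels
print([WIDTH / w for w in WAVELENGTHS])
# -> [33.3, 25.0, 16.7, 12.5, 8.3] cycles per 2*pi, matching the text

def gabor_band(polar_row, wavelength, sigma_factor=0.65):
    """Complex 1-D Gabor response of one polar-image row."""
    sigma = sigma_factor * wavelength
    n = int(3 * sigma)
    x = np.arange(-n, n + 1)
    kernel = np.exp(-x**2 / (2 * sigma**2)) * np.exp(2j * np.pi * x / wavelength)
    kernel -= kernel.mean()        # remove DC so flat regions give no response
    return np.convolve(polar_row.astype(float), kernel, mode='same')
```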
  • the mask 610 is the union of two masks: a mask (common to all bands) based on analysis of the intensities in the input polar iris image 504 that masks off area corresponding to specular reflections and approximate location of eyelid and eyelash areas, and a mask based on the signal strength in the Gabor filtered image that masks off areas in which local phase measurement is unstable (unstable regions).
  • Multi-resolution representation, as shown in iris code 622, allows representation of information from images at different camera-subject distances that result in iris images differing in the number of pixels per unit distance at the iris, as well as from oblique camera views causing foreshortening and optical demagnification, as discussed above with reference to FIGS. 2-4D.
  • an iris code representation 520 includes a complete description of the filter characteristics, spatial sampling, representation and quantization.
  • Filter characteristics comprise one or more of center frequencies, bandwidths, functional type (e.g. log Gabor), and orientation tuning.
  • Spatial sampling comprises one or more of spacing along the radial and angular normalized image axes for each filter type, and quantization specifies the number of levels with which each value is represented, or the number of bits assigned to each.
  • the iris code representation 520 and exemplary iris code 622 form a warpable code allowing for interpolation, by using sub-Nyquist spatial sampling requirements for each filter 1 . . . 5 in filters 621 that provide a criterion for sufficient sampling for accurate interpolation.
  • the sub-Nyquist spatial sampling is combined with a finer intensity quantization than the 1 bit per complex phase component used in Daugman-type coding. For example, if 4 bits are used for each complex phase component this corresponds to roughly 64 steps in phase angle and thus a maximum interpolation error of pi/32 radians or less than six degrees.
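The quantization arithmetic can be checked directly: 64 phase steps give a step size of 2*pi/64 = pi/32 radians, about 5.6 degrees, the figure quoted above. A minimal sketch of such a quantizer follows; the function name is this sketch's own.

```python
import numpy as np

STEPS = 64  # roughly the resolution of 4 bits per complex phase component

def quantize_phase(z):
    """Quantize the phase angle of complex filter outputs to STEPS levels."""
    step = 2 * np.pi / STEPS
    return np.round(np.angle(z) / step).astype(int) % STEPS

print(np.degrees(2 * np.pi / STEPS))  # pi/32 rad -> 5.625 degrees
```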
  • non-quantized iris codes may also be matched, where original complex band-pass filter outputs are stored without quantization.
  • the filter outputs are normalized in magnitude so that each represents a complex number on the unit circle.
  • Data masks are generated based on occlusions and local complex amplitude.
  • the match measure that is the closest analog of the standard Hamming distance measure of a Daugman iris code is based on a phase difference histogram. This histogram is constructed by computing the angles between the phase vectors of the two codes being compared (see FIG. 6) and compiling a histogram (subject to the valid data mask) of phase differences between -pi and pi. These phase differences should be small if the codes represent the same eye, and more or less uniformly distributed if the codes represent statistically independent eyes.
  • FIG. 7 An example of two such histograms is shown in FIG. 7 .
  • the histogram on the left corresponds to an impostor match and the one on the right to an authentic match.
  • the authentic distribution is tightly concentrated around a zero phase shift with only a small proportion of the phase difference values larger than pi/2 in absolute value.
  • the impostor histogram shows many large phase differences and no clear evidence of concentration around zero value.
  • the fraction of values larger than pi/2 can be used to generate a match statistic that behaves very much like Daugman code Hamming distance if this is desired.
  • there are many other measures of central concentration and dispersion that may be used to distinguish between authentic and impostor distributions, as will be described below.
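A minimal sketch of the phase-difference-histogram comparison described above: both codes are assumed to be unit-magnitude complex arrays sharing a validity mask, and the fraction of differences exceeding pi/2 serves as the Hamming-distance-like statistic. The bin count and function name are this sketch's assumptions.

```python
import numpy as np

def phase_match_score(code_a, code_b, mask, bins=32):
    """Fraction of masked phase differences exceeding pi/2, plus histogram."""
    dphi = np.angle(code_a[mask] * np.conj(code_b[mask]))  # in (-pi, pi]
    hist, _ = np.histogram(dphi, bins=bins, range=(-np.pi, np.pi))
    score = float(np.mean(np.abs(dphi) > np.pi / 2))
    return score, hist

# Authentic pairs concentrate dphi near zero (score near 0); impostor
# pairs are near-uniform over (-pi, pi], giving a score near 0.5.
```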
  • given sufficient training sets of impostor and authentic histograms, it may be beneficial to use statistical classification or machine learning techniques such as discriminant analysis, Support Vector Machines, Neural Networks, or Logistic Regression to construct an optimal decision procedure for some class of data.
  • data is analyzed over a periodic domain by employing a Fourier series expansion to compute circular harmonics.
  • the relative magnitudes of the low-order circular harmonics give information about the degree of concentration of the data. Transformation of the histogram data using circular harmonics is beneficial prior to use of learning techniques to construct a decision procedure.
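The circular harmonics are just the Fourier coefficients of the phase-difference distribution over the periodic domain, which a few lines of NumPy can compute; treating |c_1| as a concentration measure is this sketch's reading of the text, not a formula the patent states.

```python
import numpy as np

def circular_harmonics(dphi, orders=(1, 2, 3)):
    """Low-order circular harmonic magnitudes |c_k| = |E[exp(i*k*dphi)]|.

    |c_1| near 1 indicates tight concentration (authentic match);
    values near 0 indicate a near-uniform distribution (impostor)."""
    return {k: float(np.abs(np.mean(np.exp(1j * k * dphi)))) for k in orders}
```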
  • the phase difference histogram aids in analysis of the match level between two codes but does not represent all of the information relevant to the comparison of two codes. If the phase difference value varies as a function of the absolute phase, then the histogram shows low concentration (i.e., large dispersion) even given a strong relationship.
  • a mutual information or other conditional entropy description, which measures the reduction in the entropy of one random variable given knowledge of the value of another random variable, is employed to prevent this problem. This more complete characterization can detect relatedness even where the variables are uncorrelated.
  • another limitation of the phase difference histogram is that it completely suppresses spatial information, since the histogram is a global statistic. However, local or patchwise uniformity of phase differences, or other detectable relatedness, would also be sufficient to conclude that the codes are not independent. This local analysis could be achieved using local histogram analysis, mutual information, or spatial correlation analyses.
  • FIG. 7 depicts a block diagram of a matching processor 700 in accordance with exemplary embodiments of the present invention.
  • the matching processor 700 comprises an alignment module 702 and a flow estimation module 704.
  • the iris code 520 generated by the coding processor 500 as shown in FIG. 5 is coupled to the alignment module 702 .
  • the alignment module 702 performs various alignments to the iris code 520 based on matching algorithms described below.
  • the alignment module 702 further couples the iris code 520 to the flow estimation module 704 to generate estimated flow vectors to aid in matching.
  • the alignment module 702 compares the iris code 520 to an iris code 706 from database 708 to determine whether a match exists.
  • if a match does not exist, more iris codes from the database 708 are compared with the iris code 520. Match scores are determined, and if the match score meets or is below a predetermined threshold, then a match exists. According to exemplary embodiments, a Hamming distance is used as the match score. Ultimately, the matched iris data 108 is returned by the matching processor 700. According to some other embodiments, flow estimation is applied to information derived from the unknown iris code 520 and the stored iris code 706. This information may be part of the iris code 520 per se, or it may not. The resulting flow field from the flow estimation module 704 is used to generate a modified iris code that is matched against a reference iris code by the matching processor 700 to produce a match score 720.
  • a Hamming distance represents a binary distance based on XOR operations that computes the number of bits that differ between two binary images.
  • the alignment module 702 performs a Daugman barrel shift on the iris codes, i.e., finds the iris code rotation that provides the best match between the iris codes being compared.
  • the matching algorithm employed by the matching processor 700 is a modified algorithm using the Hamming distance (HD) for each set of barrel shift positions and taking the lowest Hamming distance as the score for that pair of codes.
  • if the lowest Hamming distance is below a threshold, the unknown code is deemed to be a match. If the HD is above the threshold, then the unknown code is labeled an impostor. In one embodiment, the threshold depends on details of the iris code structure and on the statistical requirements of the matching scenario.
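The barrel-shift matching loop described above reduces to a masked Hamming distance minimized over angular rotations. The sketch below assumes 2-D binary codes laid out as radius x angle with per-code validity masks; the shift range and the roughly 0.3 acceptance threshold noted elsewhere in this document are illustrative choices, not the patent's parameters.

```python
import numpy as np

def barrel_shift_hd(code, ref, mask_a, mask_b, max_shift=16):
    """Masked Hamming distance minimized over angular barrel shifts."""
    best = 1.0
    for s in range(-max_shift, max_shift + 1):
        c = np.roll(code, s, axis=1)        # barrel shift = iris rotation
        m = np.roll(mask_a, s, axis=1) & mask_b
        n = m.sum()
        if n == 0:
            continue                        # no commonly valid bits
        hd = np.count_nonzero(np.logical_xor(c, ref) & m) / n
        best = min(best, hd)
    return best  # e.g., accept as a match when below roughly 0.3
```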
  • the modified algorithm employed by the alignment module 702 barrel shifts the iris codes being compared and also locally aligns the iris codes to each other to compensate for inaccuracies in iris image normalization due to uncorrected optical distortion or complexities of iris dilation and contraction.
  • the local alignment function performed by alignment module 702 , allows compensation for distortions in the input iris image that are not uniform across the iris. This is accomplished by shifting local regions of the code to bring them into more accurate alignment with corresponding regions of the reference code.
  • if this process is performed using very small estimation regions, virtually any iris code can be made to match any other iris code, which can result in false matches being generated.
  • This false matching problem can be avoided by imposing suitable smoothness conditions on the estimated flow field. For example, if the flow field is estimated by performing local translation estimation using relatively large estimation regions then the local flow estimates will represent the average motion over this relatively large region.
  • the alignment module 702 further produces multiple match scores for each comparison, between iris code 520 and 706 for example, because each iris code contains multiple frequency bands.
  • FIG. 8 depicts the process of matching iris codes performed by the matching processor 700 in accordance with exemplary embodiments of the present invention.
  • the first code 800 and the second code 802 to be matched are represented as values over the rectified (e.g., polarized) iris image coordinate system consisting of an angular and a normalized radial coordinate.
  • a local displacement function or flow field is computed by the flow estimation module 704 of the matching apparatus in FIG. 7 and coupled to the alignment module 702 that best aligns structure in the first iris code 800 to corresponding structure in the second code 802 , subject to some smoothness or parametric constraint.
  • This flow field estimation can include the effect of standard barrel shift alignment, or that alignment can be performed as a separate step.
  • the vectors in this flow field each specify the displacement in the normalized image coordinate system at which the image structure in the first code 800 best matches the structure in the second code 802 .
  • Each band in the first iris code 800 is transformed using this displacement function to produce an aligned iris code, and the Hamming distance between this aligned iris code and the corresponding band of the second code 802 is computed. Because the transformation is constrained to be smooth, impostor codes will not be transformed into authentic codes, as will be described below.
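  • As a minimal sketch of this smooth local alignment step, assuming one code band is a 2-D boolean array indexed by (radial, angular) position and the flow is a per-angle displacement that is equal at all radii, as described above (the helper name and the sinusoidal stand-in flow are illustrative assumptions):

```python
import numpy as np

def warp_band_by_flow(band, flow_angular):
    """Shift each angular column of a code band by a smoothly varying
    displacement (one value per angular position, equal at all radii)."""
    n_angular = band.shape[1]
    cols = (np.arange(n_angular) + np.round(flow_angular).astype(int)) % n_angular
    return band[:, cols]

# A smooth flow field: a small sinusoidal angular displacement standing in for
# the module's estimated flow; smoothness prevents impostor-to-authentic warps.
n_radial, n_angular = 16, 256
flow = 2.0 * np.sin(np.linspace(0, 2 * np.pi, n_angular))

band = np.random.default_rng(1).integers(0, 2, (n_radial, n_angular)).astype(bool)
aligned = warp_band_by_flow(band, flow)
# 'aligned' is then compared band-by-band against the reference code, yielding
# one Hamming-distance score per frequency band.
```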
  • the flow estimation module 704 computes a flow field at a reduced resolution for each iris code, and smoothly interpolates the flow field to produce a final estimate.
  • the flow estimation module 704 employs a pyramid-based coarse-fine flow estimation technique, though those of ordinary skill would recognize that other techniques may be used instead.
  • the alignment module 702 introduces a small local shift in one band of each of the first iris code 800 and the second iris code 802 , the shift being in the angular direction and equal at all radial positions. The displacement shift also varies smoothly in the angular direction.
  • a coarse-fine algorithm is used by the flow estimation module 704 to estimate the flow field between codes 800 and 802 from the low resolution bands of the codes.
  • the alignment module 702 then warps the code 800 by the estimated flow field, resulting in a significantly decreased Hamming distance and signaling a high confidence match.
  • a Hamming distance < 0.3 indicates a high confidence match.
  • Various matches may correspond with different Hamming distance values qualifying as high confidence matches.
  • the matching processor 700 may match two iris codes by employing a mutual information measure based on the phase angles of the codes being compared as well as measures based on the local difference of phase angles.
  • FIG. 9 is a depiction of the coarse-fine algorithm described above to estimate the flow field of an iris code in accordance with exemplary embodiments of the present invention.
  • Coarse-fine refinement operates on a “pyramid” structure that is essentially a collection of bandpass filtered versions 904-1 to 904-N and 906-1 to 906-N of the input images 900 and 902, respectively, as shown in FIG. 9.
  • the displacements 908 - 1 to 908 -N estimated at the previous level are used to warp the current level image and then an incremental displacement is computed based on the residual difference between the warped level and the corresponding pyramid level in the other image. This process continues until the highest level is reached and the result is the final estimated flow field 910 .
  • Because the multi-resolution iris code is itself a collection of bandpass filtered versions of the images with which alignment is desired, according to one embodiment these bands themselves could be used to drive the alignment process in the alignment module 702. This would produce a truly “self-aligning” iris code. In this approach there is no need to store additional alignment data as part of the multi-resolution iris code structure.
  • FIG. 10 is a flow diagram depicting method 1000 for estimating flow field between two iris codes in accordance with exemplary embodiments of the present invention.
  • the method is an implementation of the flow estimation module 704 .
  • the method begins at step 1002 and proceeds to step 1004 .
  • the flow estimation module 704 generates a first plurality of images from a first input image (i.e., a first iris code) and a second plurality of images from a second input image (i.e., a second iris code to be matched against) using a bandpass filter, the first and second plurality of images comprising images ranging from low frequency to high frequency bands.
  • At step 1006, the flow estimation module 704 selects an image from the first plurality of images in the lowest frequency band that has not been processed, i.e., for which there is no previous flow-field estimate.
  • At step 1008, the flow estimation module 704 determines whether a flow field has been estimated in a lower frequency band between the first and second plurality of images. If a flow field has been estimated in a lower frequency band, the method proceeds to step 1010, where the selected image is warped using the lower frequency band flow field estimate.
  • At step 1012, a flow field is estimated by the flow estimation module 704 on the residual difference between the warped image and a second image at the same frequency band from the second plurality of images.
  • At step 1014, the flow estimation module 704 determines whether all frequency bands have been processed. If not, the method returns to step 1006 to process the next higher frequency band until all frequency bands have been processed. When all frequency bands have been processed (i.e., warped by lower frequency flow field estimates), the method proceeds to step 1016, where the final flow field estimate is returned to the matching processor 700. The method terminates at step 1018.
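  • The coarse-fine loop of steps 1004-1016 can be sketched as follows. For brevity, this illustrative Python version estimates a single global displacement per pyramid level (the module's flow field varies locally but smoothly), and plain 2x subsampling stands in for a proper bandpass pyramid:

```python
import numpy as np

def build_pyramid(img, levels):
    """Coarsest level first, mirroring the low-to-high band order of step 1006."""
    pyr = [img.astype(float)]
    for _ in range(levels - 1):
        pyr.append(pyr[-1][::2, ::2])      # crude 2x subsample per level
    return pyr[::-1]

def estimate_shift(a, b, max_shift=2):
    """Incremental displacement minimizing the residual difference (step 1012)."""
    best, best_err = (0, 0), np.inf
    for dy in range(-max_shift, max_shift + 1):
        for dx in range(-max_shift, max_shift + 1):
            err = np.mean((np.roll(a, (dy, dx), axis=(0, 1)) - b) ** 2)
            if err < best_err:
                best, best_err = (dy, dx), err
    return np.array(best)

def coarse_fine_flow(img_a, img_b, levels=3):
    """Steps 1008-1014: warp the current level by the coarser estimate, then
    refine on the residual, doubling the estimate at each finer level."""
    flow = np.zeros(2, dtype=int)
    for a, b in zip(build_pyramid(img_a, levels), build_pyramid(img_b, levels)):
        warped = np.roll(a, tuple(2 * flow), axis=(0, 1))
        flow = 2 * flow + estimate_shift(warped, b)
    return flow

# Usage: recover a known displacement between two toy images.
a = np.zeros((64, 64)); a[20:30, 20:30] = 1.0
b = np.roll(a, (6, 4), axis=(0, 1))
print(coarse_fine_flow(a, b))              # -> [6 4]
```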
  • FIG. 11 is a flow diagram depicting method 1100 for processing and matching an iris image in accordance with exemplary embodiments of the present invention.
  • the method is an implementation of the iris processor 100 .
  • the method begins at step 1102 and proceeds to step 1104 .
  • the pre-processor 102 pre-processes an input image containing an eye to produce a rectified iris image with rectified pupil and iris boundaries, and correction for tilt and corneal distortion.
  • the method proceeds to step 1106 , where the coding processor 104 codes the rectified iris image into a multiresolution iris code.
  • the iris code contains multiple frequency band representations of a polar version of the rectified iris image.
  • the method then proceeds to step 1108 , where the multiresolution iris code is compared to a set of stored iris codes in a database to determine whether the iris code is contained in the database and returns data associated with the matched iris.
  • the method terminates at step 1110 .
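  • A toy end-to-end driver mirroring steps 1104-1108 may help fix ideas; the stub preprocessing and single-band "code" below are illustrative stand-ins, not the disclosed rectification and multiresolution coding algorithms:

```python
import numpy as np

MATCH_THRESHOLD = 0.3   # illustrative; the specification tunes this per scenario

def preprocess(eye_image):
    """Stand-in for step 1104: would return a rectified iris image."""
    return eye_image.astype(float) / 255.0

def encode(rectified):
    """Stand-in for step 1106: binarize into a toy single-band 'iris code'."""
    return rectified > rectified.mean()

def identify(eye_image, database):
    """Step 1108: compare the code against stored codes; return matched data."""
    code = encode(preprocess(eye_image))
    for stored_code, identity_data in database:
        distance = np.count_nonzero(code ^ stored_code) / code.size
        if distance <= MATCH_THRESHOLD:
            return identity_data            # data associated with the matched iris
    return None                             # iris code not contained in the database
```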
  • FIG. 12 depicts a computer system for implementing the iris processor 100 in accordance with exemplary embodiments of the present invention.
  • the computer system 1200 includes a processor 1202 , various support circuits 1205 , and memory 1204 .
  • the computer system 1200 may include one or more microprocessors known in the art similar to processor 1202 .
  • the support circuits 1205 for the processor 1202 include conventional cache, power supplies, clock circuits, data registers, I/O interface 1207 , and the like.
  • the I/O interface 1207 may be directly coupled to the memory 1204 or coupled through the support circuits 1205 .
  • the I/O interface 1207 may also be configured for communication with input devices and/or output devices such as network devices, various storage devices, mouse, keyboard, display, video and audio sensors, visible and infrared cameras and the like.
  • I/O interfaces 1207, such as an integrated camera (possibly used in association with an on-board standard and/or infrared illumination device) or a wireless communication device (e.g., NFC, RFID, Bluetooth, Wi-Fi, WiMAX, Satcom, etc.), may be directly coupled to the memory 1204 and/or coupled through the support circuits 1205.
  • one or more software applications or apps may be configured to access the camera, illuminator and/or wireless communication device to accomplish the embodiments disclosed herein.
  • the apps may receive data (e.g., iris data) via the I/O interface 1207 and transmit the data, possibly via the support circuits 1205 to the memory 1204 running the app (e.g., Iris Processor 100 or Iris Biometric Recognition Module 1514 ).
  • the app may perform any of the algorithms disclosed herein and transmit the results (possibly via the memory 1204 and/or the support circuits 1205) through the I/O interfaces 1207 (e.g., via wireless communication) to a server and/or an additional wireless communication device (e.g., an ATM or a security-enabled device such as a safe), to authenticate a user of the device.
  • the memory 1204 stores non-transient processor-executable instructions and/or data that may be executed by and/or used by the processor 1202 . These processor-executable instructions may comprise firmware, software, mobile apps, and the like, or some combination thereof. Modules having processor-executable instructions that are stored in the memory 1204 comprise an iris processor 1206 .
  • the iris processor 1206 further comprises a pre-processing module 1208 , a coding module 1210 and a matching module 1212 .
  • the memory 1204 may further comprise a database 1214 , though the database 1214 need not be in the same physical memory 1204 as the iris processor 1206 .
  • the database 1214 may be remotely accessed by the iris processor 1206 via a cloud service.
  • the iris processor 1206 may also have several components that may not be co-located on memory 1204 .
  • the pre-processing module 1208 is local to the computer system 1200 or mobile device, while the coding module 1210 and the matching module 1212 may be accessed as cloud services via a wired or wireless network. In other instances, only the matching module 1212 is accessed via a network. Communication between each module may be encrypted as the data travels over the network.
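  • As one possible realization of such a networked configuration, the locally computed iris code could be posted to a remote matching service over an encrypted (HTTPS) link; everything here (endpoint, payload format, response fields) is an illustrative assumption, not part of the disclosure:

```python
import base64
import numpy as np
import requests  # assumes the third-party 'requests' package is installed

MATCH_URL = "https://example.invalid/iris/match"   # hypothetical cloud endpoint

def match_remotely(iris_code: np.ndarray) -> dict:
    """Send a locally computed boolean iris code to a remote matching module.
    TLS (HTTPS) provides the in-transit encryption mentioned above."""
    payload = {
        "code": base64.b64encode(np.packbits(iris_code).tobytes()).decode("ascii"),
        "shape": list(iris_code.shape),
    }
    resp = requests.post(MATCH_URL, json=payload, timeout=5)
    resp.raise_for_status()
    return resp.json()   # e.g., {"match": true, "score": 0.21} (assumed schema)
```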
  • the computer system 1200 may be programmed with one or more operating systems 1220 (generally referred to as the operating system (OS)), which may include OS/2, Java Virtual Machine, Linux, SOLARIS, UNIX, HPUX, AIX, WINDOWS, WINDOWS 95, WINDOWS 98, WINDOWS NT, WINDOWS 2000, WINDOWS ME, WINDOWS XP, WINDOWS SERVER, WINDOWS 8, Mac OS X, iOS, and ANDROID, among other known platforms.
  • the memory 1204 may include one or more of the following: random access memory, read only memory, magneto-resistive read/write memory, optical read/write memory, cache memory, magnetic read/write memory, and the like, as well as signal-bearing media as described below.
  • the computer system 1200 may be a mobile device such as a cellular phone or tablet device, for example.
  • the mobile device may contain a camera and have the iris processor 1206 stored on memory as an application.
  • the iris processor 1206 may be a part of the operating system 1220 .
  • the iris processor 1206 may be an independent processor, or stored on a different chip than the processor 1202 .
  • often mobile devices have camera processing modules and the iris processor 1206 , or portions of the iris processor 1206 , may reside on the camera processing module, where the imager in the camera is a CCD or CMOS imager.
  • the mobile device may be customized to include some sensors, the type of the camera imager, or the like.
  • An image sensor may include a camera, infrared sensor, or illuminator that is able to capture images of people or other objects in the vicinity of the device. It should be understood that image capture can be performed using a single image, multiple images, periodic imaging, continuous image capturing, image streaming, etc. Further, a device can include the ability to start and/or stop image capture, such as when receiving a command from a user, application, or other device.
  • the computing device can include one or more communication elements or networking sub-systems, such as a Wi-Fi, Bluetooth, radio frequency (RF), wired, or wireless communication system.
  • the device in many embodiments can communicate with a network, such as the Internet, and may be able to communicate with other such devices.
  • the device can include at least one additional input element able to receive conventional input from a user.
  • This conventional input can include, for example, a push button, touch pad, touchscreen, wheel, joystick, keyboard, mouse, keypad, or any other such component or element whereby a user can input a command to the device.
  • a device might not include any buttons at all, and might be controlled only through a combination of visual and audio commands, such that a user can control the device without having to be in contact with the device.
  • FIG. 13 illustrates the iris processor 100 in an exemplary operating scenario.
  • a combination of face tracking and a steerable/autofocus iris capture device comprising the iris processor 100 is used to identify multiple individuals within a particular location (whether stationary or moving).
  • the capture device may be placed unobtrusively, e.g., at the side of the corridor, in or near an electronic advertisement display, or in any other public location, and can operate at a large range of capture distances yielding a range of presentation angles, if a device with the capabilities disclosed herein is used.
  • By combining identity information derived from iris biometrics with tracking information from the person tracking system, it is possible to associate an identity (or a failure to identify an identity) with each person passing through or in the vicinity (stationary or moving) of the active capture region.
  • the iris biometric recognition module 1514, when assembled, is a self-contained unitary module. As such, the iris biometric recognition module 1514 can be incorporated into, for example, a security or locking feature (e.g., a door lock assembly) or any other type of device, apparatus, article, or system that can benefit from an application of iris biometric recognition technology, including, for example, a mobile device, a mobile device compatible with wireless communication, an electronic device used in a financial transaction, and/or an electronic advertisement display device.
  • the iris biometric recognition module 1514 includes a support base 1610 , to which an iris biometric recognition controller 1724 is mounted.
  • a number of support posts, e.g., posts 1612, 1613, 1614, 1616, are coupled to the support base 1610 (by, e.g., a corresponding number of screws or other fasteners 1730, 1731, 1732, 1733; 1733 not shown).
  • the support posts 1612 , 1613 , 1614 , 1616 are connected to and support a pivot mount base 1618 .
  • Coupled to and supported by the pivot mount base 1618 are an iris imager assembly 1626 and a face imager assembly 1628.
  • the iris imager assembly 1626 and the face imager assembly 1628 are the same device or utilize one or more of the same components (e.g., the same imaging device).
  • the iris imager assembly 1626 and the face imager assembly 1628 are separate assemblies utilizing different components.
  • the face imager assembly 1628 captures digital images of a human subject, and more particularly, images of the subject's face and eyes, using a face imager 1648 that is equipped with a wide field of view lens.
  • the iris imager assembly 1626 captures digital images of an iris of an eye of the human subject using an iris imager 1644 that is equipped with a narrow field of view lens.
  • both the face imager 1648 and the iris imager 1644 utilize the same type of imager (e.g., a digital camera, such as the Omnivision model no. OV02643-A42A), equipped with different lenses.
  • the face imager 1648 may be equipped with a wide field of view lens such as the Senview model no. TN01920B and the iris imager 1644 may be equipped with a narrow field of view lens such as model no. JHV-8M-85 by JA HWA Electronics Co.
  • In other embodiments, a single high resolution imager (e.g., a 16+ megapixel digital camera) may be used with a wide field of view lens (rather than a combination of two cameras with different lenses) to perform the functionality of the iris imager 1644 and the face imager 1648.
  • the illustrative iris imager assembly 1626 is pivotably coupled to the pivot mount base 1618 by an axle 1622 .
  • the axle 1622 is, e.g., removably disposed within a pivot groove 1620.
  • the pivot groove 1620 is defined in the pivot mount base 1618 .
  • the components of the iris imager assembly 1626 are mounted to an iris pivot mount base 1630 .
  • the iris pivot mount base 1630 is coupled to the axle 1622 and to a support tab 1734 .
  • the support tab 1734 is coupled to a lever arm 1726 by a pivot link 1728 .
  • the lever arm 1726 is coupled to a control arm 1722 .
  • the control arm 1722 is driven by rotation of an output shaft of a motor 1720 .
  • the motor 1720 may be embodied as, for example, a servo motor such as a magnetic induction brushless servo motor (e.g., the LTAIR model no. D03013). Operation of the motor 1720 rotates the control arm 1722 , which causes linear motion of the lever arm 1726 , resulting in linear motion of the tab 1734 . The linear motion of the tab 1734 rotates the axle 1622 in the pivot groove 1620 . Depending on the direction of rotation of the output shaft of the motor 1720 , the resulting rotation of the axle 1622 in the pivot groove 1620 causes the iris pivot mount base 1630 to tilt in one direction or the other, with respect to the pivot mount base 1618 .
  • For example, rotation of the output shaft in one direction results in the iris pivot mount base 1630 tilting upward toward the face imaging assembly 1628, and vice versa.
  • This pivoting capability of the iris pivot mount base 1630 enables the position of the iris imaging assembly 1626 to be mechanically adjusted to accommodate potentially widely varying heights of human subjects (e.g., the human subject 1424 ), ranging from small children to tall adults.
  • the iris imager assembly 1626 is stationary with respect to the pivot mount base 1618 and the ability to detect the irises of human subjects of widely varying heights is provided by other means, e.g., by software or by the use of a column of vertically-arranged iris imagers 1644 coupled to the mount base 1618 .
  • the components of the iris imaging assembly 1626 include the iris imager 1644, a filter 1646 disposed on or covering the iris imager 1644, a pair of iris illuminator assemblies 1710, 1712, each adjacent to, e.g., disposed on opposite sides of, the iris imager 1644, and a pair of baffles or light guides 1636, 1638 disposed between each of the iris illuminator assemblies 1710, 1712, respectively, and the iris imager 1644.
  • Each of the illustrative iris illuminator assemblies 1710 , 1712 includes one or more infrared light sources, e.g., infrared light emitting diodes (LEDs).
  • Each set of N illuminators is bounded by an additional light guide or shield 1714 , 1716 . Diffusers 1632 , 1634 cover the iris illuminator assemblies 1710 , 1712 , respectively.
  • the diffusers 1632 , 1634 may be coupled to the shields 1714 , 1716 respectively (e.g., by an adhesive material).
  • the diffusers 1632 , 1634 correct for the inherent non-uniformity of the light emitted by the illuminators 1711 (e.g., uneven lighting). This non-uniformity may be due to, for example, manufacturing irregularities in the illuminators 1711 .
  • the diffusers 1632 , 1634 may not be required in embodiments in which higher quality illuminators (or different types of illuminators) 1711 are used.
  • the illustrative iris imaging assembly 1626 further includes a pair of visual cue illuminators 1640 , 1642 , which are embodied as emitters of light having a wavelength in the visible light spectrum (e.g., colored light LEDs).
  • the baffles 1636 , 1638 and the shields 1714 , 1716 are configured to prevent stray light emitted by the illuminator assemblies 1710 , 1712 (and, for that matter, the visual cue LEDs 1640 , 1642 ) from interfering with the operation of the iris imager 1644 .
  • the baffles 1636 , 1638 and the shields 1714 , 1716 help ensure that when infrared light is emitted by the illuminator assemblies 1710 , 1712 , only the emitted light that is reflected by the eyes of the human subject (e.g., human subject 1424 ) is captured by the iris imager 1644 .
  • a filter 1646 covers the lens of the iris imager 1644 . The filter 1646 further blocks any extraneous light from entering the lens of the iris imager 1644 .
  • the filter 1646 may be embodied as, for example, an 840 nm narrowband filter and may be embedded in the lens assembly of the iris imager 1644 .
  • Other filters may be used, depending on the type of illuminators selected for the illuminator assemblies 1710, 1712.
  • the selection of the filter 1646 may depend on the type or configuration of the illuminator assemblies 1710, 1712, in some embodiments.
  • the illustrative face imager assembly 1628 includes a face imager mount base 1631 .
  • the illustrative face imager mount base 1631 is non-pivotably coupled to the pivot mount base 1618 .
  • the face imager mount base 1631 may be pivotably coupled to the pivot mount base 1618 (e.g., the face imager assembly 1628 and the iris imager assembly 1626 may both be mounted to the pivot mount 1630 ), as may be desired or required by a particular design of the iris biometric recognition module 1514 .
  • the face imager assembly 1628 includes the face imager 1648 and a face illuminator assembly 1650 located adjacent the face imager assembly 1628 .
  • the face imager assembly 1628 and the iris imager assembly 1626 are illustratively arranged so that the face imager assembly 1628 is vertically above the iris imager assembly 1626 when the iris biometric recognition module 1514 is mounted to a vertical structure (such as the door 1416 ).
  • the face imager assembly 1628 and the iris imager assembly 1626 are arranged so that the face imager assembly 1628 is positioned adjacent to a first edge of the pivot mount base 1618 and the iris imager assembly 1626 is positioned adjacent to another edge of the pivot mount base 1618 that is opposite the first edge.
  • the face imager 1648 is secured to the face imager mount base 1631 by a bracket 1633 .
  • the face illuminator assembly 1650 includes one or more infrared light sources 1649 (e.g., infrared LEDs) mounted to a concavely shaped illuminator mount base 1740 .
  • the configuration of the mount base 1740 enables the illuminators 1649 to be arranged at an angle to one another, in order to illuminate the desired portion of the capture zone (e.g., the range of vertical heights H1 of the eye levels of the anticipated population of human subjects 1424).
  • the illuminators 1649 of the face illuminator assembly 1650 and the illuminators 1711 of the iris illuminator assemblies 1710 , 1712 may each be embodied as a high power 840 nm infrared emitter (e.g., model no. OV02643-A42A available from OSRAM Opto Semiconductors).
  • the illustrative iris biometric recognition controller 1724 is embodied as an integrated circuit board including a microprocessor (e.g., model no. MCIMX655EVM10AC available from Freescale Semiconductor).
  • the iris biometric recognition controller 1724 is configured to control and coordinate the operation of the face illuminator assembly 1650 , the face imager 1648 , the iris illuminator assemblies 1710 , 1712 , and the iris imager 1644 , alone or in combination with other components of the iris biometric recognition module 1514 .
  • Although the iris biometric recognition module 1514 appears large in structure, it is possible to scale down the size of the iris biometric recognition module 1514 to fit within smaller items or devices, for example, a mobile device (e.g., a mobile telephone, a tablet, or any other portable device).
  • the iris biometric recognition module 1514 may be implemented, for example, as part of a forward and/or rearward facing camera in a mobile device.
  • the iris biometric recognition module 1514 may be implemented within a mobile device in any other suitable manner.
  • One or more cameras or other image sensors within the mobile device may capture image or video content to be utilized by the iris biometric recognition module 1514 .
  • the one or more cameras may include, or be based at least in part upon any appropriate technology, such as a CCD or CMOS image sensor having a sufficient resolution, focal range, and/or viewable area, to capture an image of the user when the user is operating the device.
  • the iris biometric-enabled access control system 1800 is shown in the context of an environment 1810 that may be created during the operation of the iris biometric recognition module 1514 (e.g., a physical and/or virtual execution or “runtime” environment).
  • the iris biometric recognition module 1514 includes a number of computer program components 1818 , each of which is embodied as machine-readable instructions, modules, data structures and/or other components, and may be implemented as computer hardware, firmware, software, mobile app, or a combination thereof, in memory of the controller board 1724 , for example.
  • the iris biometric recognition module computer program components 1818 include an iris image capture module 1820 .
  • the illustrative iris image capture module 1820 includes a face finder module 1822 , an iris finder module 1824 , a face/iris imager control module 1826 , and a face/iris illuminator control module 1828 .
  • the face/iris imager control module 1826 controls a face/iris imager 1812 (e.g., the face imager 1648 and/or the iris imager 1644 ) by transmitting face imager control signals 1842 to the face/iris imager 1812 to capture digital images of a human subject 1804 entering or located in a tracking and capture zone 1802 .
  • the iris biometric recognition module 1514 may be equipped with a motion sensor that can detect the human subject 1804 in the tracking and capture zone 1802 .
  • the face/iris imager control module 1826 may initiate operation of the face/iris imager(s) 1812 in response to a motion detection signal received from the motion sensor.
  • the presence of a human subject 1804 can be detected using an image processing routine that recognizes a face in the field of view of the face/iris imager 1812 .
  • the iris biometric recognition module 1514 can utilize iris images captured from moving subjects and/or subjects that are at a distance that is greater than, e.g., 45 cm away from the iris imaging device.
  • the illustrative face finder module 1822 executes a face recognition algorithm (e.g., FaceRecognizer in OpenCV), to determine whether an image captured by the face/iris imager 1812 (e.g., by a wide field of view camera) includes a human face. If the face finder module 1822 detects a human face, the face finder module 1822 returns the face location 1848 , e.g., bounding box coordinates of the detected face within the captured image. In response to the face detection, the face/iris imager control module 1826 configures the face/iris imager 1812 to capture an image of an iris of the detected face.
  • the illustrative face/iris imager control module 1826 may compute the tilt angle by which to tilt the iris imager assembly 1626 based on the bounding box coordinates of the detected face. This can be done by approximating the linear distance from the face/iris imager 1812 to the detected face, if the location and the field of view of the face/iris imager 1812 are known.
  • the proper tilt angle for the face/iris imager 1812 can be derived from the geometry of the triangle formed by connecting the location of the face/iris imager 1812 to the top and bottom edges of the bounding box of the detected face.
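  • This geometry can be sketched as follows (illustrative Python; the pinhole-camera simplification and the average-face-height constant are assumptions, not values from the specification):

```python
import math

AVG_FACE_HEIGHT_M = 0.24   # assumed average face height used to scale distance

def estimate_distance(face_px_height, image_height_px, vertical_fov_deg):
    """Approximate subject distance from the apparent size of the face,
    given the imager's known vertical field of view (pinhole model)."""
    face_angle = math.radians(vertical_fov_deg * face_px_height / image_height_px)
    return AVG_FACE_HEIGHT_M / (2 * math.tan(face_angle / 2))

def tilt_angle_deg(face_top_px, face_bottom_px, image_height_px, vertical_fov_deg):
    """Angle between the optical axis and the center of the face bounding box;
    the motor 1720 would be driven until the iris imager reaches this tilt."""
    deg_per_px = vertical_fov_deg / image_height_px
    face_center_px = (face_top_px + face_bottom_px) / 2
    return (image_height_px / 2 - face_center_px) * deg_per_px

# Example: face box at rows 300-500 of a 1080-row frame, 60 degree vertical FOV.
print(estimate_distance(200, 1080, 60))        # ~1.2 m away
print(tilt_angle_deg(300, 500, 1080, 60))      # ~7.8 degrees upward
```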
  • the face/iris imager control module 1826 operates the motor 1720 to achieve the computed tilt angle of the face/iris imager 1812 .
  • the iris finder module 1824 locates an eye and then the iris of the eye, on the human face, by executing eye and iris detection algorithms (e.g., the algorithms mentioned above with reference to FIGS. 1-13 ).
  • the face/iris imager control module 1826 initiates the process of capturing images of the iris by transmitting iris imager control signals 1842 to the face/iris imager 1812 .
  • iris detection and image capture processes can be performed, for example, using the techniques described above with reference to FIGS. 1-13 .
  • the iris image capture module 1820 interfaces with the face/iris illuminator control module 1828 to coordinate, e.g., synchronize 1852, the operation of the face/iris imager 1812 and the face/iris illuminators 1816.
  • the control modules 1826 , 1828 synchronize the operation of the face illuminator assembly 1650 with the capturing of face images by the face imager 1648 . This helps ensure consistent face image quality irrespective of the available ambient lighting conditions.
  • the coordination of the face image capture and the operation of the face illuminator assembly 1650 is analogous to traditional flash photography, albeit using infrared light rather than visible light.
  • the control modules 1826 , 1828 synchronize the operation of the iris illuminators 1816 (e.g., iris illuminator assemblies 1710 , 1712 ) with the capturing of iris images by the iris imager 1644 .
  • the iris imager control module 1826 operates the iris imager 1644 using a focal sweep technique in which several (e.g., 10-15 or more) images of the iris are captured in rapid succession (e.g., at a frame rate of about 5 frames per second).
  • the iris illuminator control module 1828 pulses/strobes the iris illuminators 1710 , 1712 at the same rate/frequency. This helps ensure that at least one good quality iris image is obtained irrespective of the available ambient lighting conditions and regardless of whether the subject is moving or whether the view of the iris is obstructed or distorted.
  • the coordination of the iris image capture and the operation of the iris illuminators 1710 , 1712 is analogous to traditional “red eye reduction” flash photography, except that the images of the iris are taken at the same time as the pulsing/strobing of the iris illuminators 1710 , 1712 rather than after the pulsing/strobing is completed (and also, using infrared illuminators rather than visible light).
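  • An illustrative synchronization loop for the focal sweep follows; the imager/illuminator objects, their methods, and the timing constants are hypothetical, chosen only to mirror the description above:

```python
import time

FRAME_PERIOD_S = 0.2    # ~5 frames per second, per the focal sweep description
PULSE_S = 0.002         # short, high-intensity IR pulse (assumed duration)

def capture_focal_sweep(imager, illuminator, n_frames=12):
    """Strobe the iris illuminators at the frame rate so each exposure
    coincides with a pulse: bright per-frame illumination, low average
    irradiance, and a 'freeze' effect on moving subjects."""
    frames = []
    for _ in range(n_frames):
        illuminator.pulse(duration_s=PULSE_S)   # pulse/strobe the illuminators
        frames.append(imager.capture())         # exposure timed to the pulse
        imager.step_focus()                     # focal sweep: next focus plane
        time.sleep(FRAME_PERIOD_S - PULSE_S)    # wait out the frame period
    return frames
```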
  • the iris image capture module 1820 outputs or otherwise makes available the resulting iris images 1854 to an iris image processing and matching module 1830 .
  • the iris image processing and matching module 1830 processes the images by, e.g., removing portions of the image that depict eyelids and eyelashes and adjusting for enlarged pupils, and producing the “iris code” in, for example, the manner described above with reference to FIGS. 1-13 .
  • the iris image processing and matching module 1830 compares the processed iris images 1854 or usable portions thereof, or the iris code, to reference image data 1836 , to determine whether any of the captured iris images 1854 match an image stored in the reference images 1836 .
  • the reference image data 1836 includes iris image samples and/or related data that has been obtained previously, e.g., through an enrollment procedure. If the iris images 1854 are not found to match any of the images in the reference images data 1836 , the iris image processing and matching module 1830 may initiate an enrollment procedure. That is, the iris biometric recognition module 1514 can be configured to perform iris image enrollment directly at the device, if required or desired for a particular implementation. To do this, the iris image processing and matching module 1830 passes the collected iris image(s) 1862 to an iris image enrollment module 1834 . To complete the enrollment process, the illustrative iris image enrollment module 1834 may execute an image quality analysis on one or more of the reference image candidates 1862 .
  • An iris image may be added to the reference images data 1836 if the image quality analysis indicates that the image is suitable for use as a reference image.
  • the iris image enrollment module 1834 may analyze a number of different image quality factors, such as: the amount of the iris that is exposed in the image (e.g., the person is not squinting or blinking), the sharpness of the image, and the number of artifacts in the image (e.g., the number of eyelashes, specularities, etc.).
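  • A sketch of such a quality screen (assuming OpenCV and NumPy are available; the particular factors and thresholds below are illustrative assumptions, not the disclosed analysis):

```python
import cv2
import numpy as np

def enrollment_quality_ok(iris_gray, usable_mask,
                          min_exposed=0.6, min_sharpness=100.0):
    """Accept a candidate reference image only if enough of the iris is
    unoccluded (no squint/blink) and the image is sufficiently sharp."""
    exposed = np.count_nonzero(usable_mask) / usable_mask.size
    sharpness = cv2.Laplacian(iris_gray, cv2.CV_64F).var()   # focus measure
    return exposed >= min_exposed and sharpness >= min_sharpness
```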
  • the iris biometric recognition module 1514 outputs or otherwise makes available an iris match determination 1856 .
  • the iris match determination 1856 may be embodied as a simple “positive” or “negative” indication, or may include other information (such as person-identifying information connected with the matched iris image), alternatively or in addition.
  • an access control module 1832 executes business logic encoded as, e.g., computer program logic, to determine how or even whether the access control system 1800 should respond to the iris match determination data 1856 .
  • the method 1900 may be embodied as computerized programs, routines, logic and/or instructions, which may be embodied in hardware, software, mobile apps, firmware, or a combination thereof, of the iris biometric recognition module 1514 and/or one or more other systems or devices in communication with the iris biometric recognition module 1514 .
  • the module 1514 detects a human subject approaching the iris biometric recognition module 1514 .
  • the module 1514 may analyze signals received from a wide field of view camera (e.g., the face imager 1648 ) or may analyze signals received from a motion sensor monitoring a capture zone of the iris biometric recognition module 1514 .
  • the module 1514 locates the face and eyes of the approaching subject in relation to a ground plane and in relation to the iris biometric recognition module 1514 .
  • the module 1514 may, in block 1914, control the face illuminators 1649 to illuminate (with infrared light) the area in which the human subject, or more particularly, the subject's face, is detected.
  • the module 1514 configures the iris imager 1644 to collect images of an iris of an eye of the approaching subject.
  • configuring the iris imager may involve operating a motor to tilt a platform to which the iris imager is mounted. Alternatively, the configuring may be performed e.g. by software controlling the lens focus and/or field of view of the iris imager. In any event, the procedure of block 1916 aligns the iris imager with the eye (or more particularly the iris) of the approaching subject.
  • the module 1514 activates the visual cue illuminators 1640 , 1642 , to try to draw the subject's attention or visual focus toward the iris biometric recognition module 1514 .
  • the visual cue illuminators 1640 , 1642 are typically activated after the subject's face is detected and the iris imager is configured (e.g., mechanically positioned), in order to draw the subject's eyes in-line with the iris imager camera.
  • the iris biometric recognition module 1514 and any associated hardware or software may be configured to identify actions by the person, such as viewing an advertisement, when the advertisement was viewed, how long the user viewed the advertisement, or any other habits, actions, characteristics, or attributes of a subject.
  • the iris biometric recognition module 1514 enters a loop 1920 in which the module 1514 coordinates the operation of the iris illuminator and the iris imager in rapid succession to obtain multiple images of the iris (e.g., the frame rate of the iris imager and the short-duration pulse frequency of the iris illuminator are coordinated/synchronized). More specifically, in block 1922, the module 1514 causes the iris illuminator assemblies to issue short pulses of high intensity infrared light. As discussed above, the light intensity of the illumination source (e.g., illuminators 1711) is increased during strobing to maintain a predetermined signal-to-noise (S/N) ratio, while the average irradiance of the illumination source over the course of the strobing remains below a safety threshold.
  • the module 1514 causes the iris imager to capture a series of images of the pulse-illuminated iris (using, e.g., a “focal sweep” technique). That is, the iris image captures are timed to substantially coincide with the short, high intensity pulses of illumination, resulting in a “freeze” effect on the subject if the subject is in motion.
  • other alternatives to the focal sweep technique can be used, e.g.: auto focus on a target spot, if the subject is standing still for a length of time, or by using a fixed lens to provide a large fixed focus area.
  • the module 1514 determines whether any of the captured iris images are candidates to be used for enrollment purposes. If an iris image is a candidate to be used for enrollment, the module 1514 performs an iris image quality analysis on the image in block 1928, and updates the reference database of iris images if the quality analysis is successful.
  • the iris biometric recognition module 1514 and any associated hardware or software may be configured to also update the database to include habits, patterns, interests, characteristics, and attributes, such as when a person was in a certain location, how often they are in that location for that amount of time, etc.
  • the module 1514 performs iris image processing and matching in accordance with, for example, the techniques described above with reference to FIGS. 1-13 .
  • the module 1514 selects a subset of the captured iris images for matching purposes, based on image quality, size of the iris depicted in the image, and/or other factors.
  • the module 1514 identifies a usable portion of the iris image(s) selected in block 1930 (using, e.g., the segmentation techniques described above).
  • the “usable portion” of an iris image may correspond to the iris code, in some embodiments.
  • the module 1514 compares the usable portion of the iris image identified in block 1932 to one or more reference images (e.g., the reference images 1836 ). In block 1936 , the module 1514 determines whether the comparison performed in block 1934 results in an iris match.
  • While the flow diagram of FIG. 17 references an approaching subject, the steps discussed in relation to the flow diagram of FIG. 17 may be performed for a moving or a stationary subject.
  • the flow diagram in FIG. 17 may be utilized to authenticate a subject that is waiting for a bus or otherwise stationary for any reason or to authenticate a user that desires to access a mobile device, access wireless communication, make a financial transaction, etc.
  • the subject/user may remain stationary in order to draw the subject's eyes in line with the iris imager camera on the mobile device and the iris image may then be captured and matched as discussed in detail above.
  • An “iris match” as determined by the module 1514 may refer to, among other things, a numerical score that represents the probability that the captured iris image corresponds to the known iris image of a specific person.
  • the “iris match” parameters are tunable, and can be set, for example, based on the accuracy requirements of a particular implementation of the module 1514 (e.g., how stringent is the test for acceptance of the subject as matching the identity of a known subject).
  • the illustrative module 1514 computes a Hamming distance between an iris code representative of the captured iris image and the iris code representative of a reference iris image.
  • the Hamming distance between two strings of equal length is the number of positions at which the corresponding symbols are different. Put another way, the Hamming distance measures the minimum number of substitutions required to change one string into the other, or the number of errors that transformed one string into the other. So, for example, if the module 1514 uses a Hamming distance threshold of 0.35, that corresponds to a 1:133,000 false accept rate. Similarly, if the module 1514 is configured to use a Hamming distance threshold of 0.28, the false accept rate is 1:10^11.
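  • The accept test itself reduces to a tunable threshold comparison; the false accept rates in the comments below are the ones stated above:

```python
def accept(hamming_distance, threshold=0.28):
    """Return True when the fractional Hamming distance is at or below the
    configured threshold. Per the text: 0.35 corresponds to roughly a
    1:133,000 false accept rate; 0.28 to roughly 1:10^11."""
    return hamming_distance <= threshold

print(accept(0.31, threshold=0.35))   # True  (looser threshold, higher FAR)
print(accept(0.31, threshold=0.28))   # False (stricter threshold, lower FAR)
```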
  • an iris biometric collection device (which may be incorporated into another type of device, such as a fixed or mobile electronic device) that uses strobe illumination above the continuous wave eye safe limit would allow documentation that the actual person was at that location, accessed an item, used a service, or obtained a benefit at the specific time.
  • the use of the strobe illumination above the continuous wave eye safe limits allows collection of the biometric image in all lighting conditions (indoor, outdoor, bright sunlight, extreme darkness) and without requiring the subject or user to be stationary.
  • the disclosed devices can be equipped with wired and/or wireless connectivity to maintain the most recent data on the device.
  • Using the iris as the enabling biometric allows identity to be determined without touching the subject (as with a fingerprint) and is less obtrusive than other biometric identification modalities.
  • the implementations disclosed herein allow the collection of a high quality record with cooperative or uncooperative subjects, including covert operations. Recording of the person's iris at a location at a specific time can be used as verifiable proof that the specific person was at a particular location. The relevant location information can be captured as well (e.g., by a Global Positioning System or cellular location-based system), and stored along with the iris image and/or associated information.
  • the biometric collection device described may be used alone or in conjunction with other collection and authentication techniques (e.g., PIN, pattern, different biometric) if multi-levels of authentication are desired.
  • Examples of events, activities, or locations where the ability to document/record the presence or access of a person(s) at a specific time is useful include the following: safes and safety deposit boxes; amusement parks; animal tagging and tracking (domestic, wild, aquatic, etc.); appliances (refrigerator, oven, gym equipment); assisted living facilities; automated teller machine; automated gate control; background checks; blood donors/red cross; brokerage account; casino; check cashing agencies; child day care facilities; commercial shipping facility; cruise ships; datacenter cabinets; detox centers; document screening activity; driver vehicle enrollment; drug testing collection location; entertainment facilities (club, theater, concert hall, skyboxes, stadiums, etc.); entitlement programs activities; ez pass authorization; fire drills; first responders securing an event; gun access; half-way houses; health club/gym/spa; hospitals; hotels/motels; insurance claim validations; large clinical studies; law enforcement activities; library; medical lab (quest/labcorp); mining operations; parole tracking; patient history; pay per usage; prisons; property storage locations;
  • FIG. 19 illustrates the iris biometric module 1514 of FIGS. 14 and 15 (or any other embodiment of the iris biometric module) implemented within a mobile device 2100 (e.g. a mobile telephone, a tablet, an electronic watch or bracelet, or any other mobile device).
  • the mobile device 2100 may be any mobile or semi-mobile electronic device, for example, a personal computer, laptop computer, tablet computer, e-reader, smartphone, personal data assistant, set-top box, digital media player, microconsole, home automation system, or other computing device having a processor, central processing unit, microprocessor, or other suitable processor.
  • the mobile device 2100 may include a display 2102 , which may be a monitor, liquid crystal display screen, light-emitting diode (LED or organic LED (OLED)) screen, or other output-only video display.
  • the display 2102 may function as an input device, such as a capacitive, resistive, or inductive touchscreen.
  • the iris biometric module 1514 may be configured as an add-on to the mobile device 2100 or may be incorporated into the mobile device 2100 , for example, as part of a camera 2104 within the mobile device 2100 . In some embodiments in which the iris biometric module 1514 is incorporated within the mobile device 2100 , the iris imager assembly 1626 may replace or work in conjunction with the mobile device camera 2104 .
  • the iris processor 100 described above with respect to FIGS. 1-13 may be implemented within the mobile device 2100 system in any suitable manner, for example, as discussed with respect to FIG. 12 .
  • the iris processor 100 may be a separate processor within the mobile device memory or the mobile device 2100 may include a single processor that implements all mobile device functionality, including the functionality of the iris processor 100 .
  • the iris processor 100 may be used to identify and authorize a user (or users) of the mobile device 2100 . By using identity information derived from iris biometrics, the iris processor 100 may determine whether the user is an authorized user of the mobile device 2100 and/or may determine what applications, settings, and/or other features of the mobile device 2100 may be accessed by the user.
  • a first user may have permission to use all or a first subset of applications, settings, and/or features (e.g., a parent that can access all applications, settings, and/or other features on the mobile device 2100 ) and a second user may have permission to use all or a second subset of applications, settings, and/or features (e.g., a child that may only be able to access child-friendly applications on the mobile device 2100 ).
  • the mobile device 2100 may be configured to require identification and authorization to access the device (i.e., as a login procedure) and/or may be configured to require identification and authorization to access one or more applications, settings, and/or features on the mobile device 2100 .
  • the iris processor 100 may optionally be implemented within a display or marquee for displaying advertisements and/or a device co-located or associated with a display or marquee.
  • Referring now to FIG. 18, a simplified block diagram of an iris biometric recognition-enabled system 2000 is shown. While the illustrative embodiment 2000 is shown as involving multiple components and devices, it should be understood that the system 2000 may constitute a single device, alone or in combination with other devices.
  • the system 2000 includes an iris biometric recognition module 2010 , an iris biometric-controlled mechanism 2050 , one or more other devices and/or systems 2062 , and a server computing device 2070 .
  • Each or any of the devices/systems 2010 , 2050 , 2062 , 2070 may be in communication with one another via one or more electronic communication links 2048 .
  • the system 2000 or portions thereof may be distributed across multiple computing devices as shown. In other embodiments, however, all components of the system 2000 may be located entirely on, for example, the iris biometric recognition module 2010 or one of the devices 2050 , 2062 , 2070 . In some embodiments, portions of the system 2000 may be incorporated into other systems or computer applications. Such applications or systems may include, for example, commercial off the shelf (COTS) or custom-developed cameras, operating systems, authentication systems, or access control systems.
  • As used herein, "application" or "computer application" may refer to, among other things, any type of computer program or group of computer programs, whether implemented in software, hardware, or a combination thereof, and includes self-contained, vertical, and/or shrink-wrapped software applications, distributed and cloud-based applications, and/or others. Portions of a computer application may be embodied as firmware, as one or more components of an operating system, a runtime library, an application programming interface (API), as a self-contained software application, or as a component of another software application, for example.
  • the illustrative iris biometric recognition module 2010 includes at least one processor 2012 (e.g. a microprocessor, microcontroller, digital signal processor, etc.), memory 2014 , and an input/output (I/O) subsystem 2016 .
  • the module 2010 may be embodied as any type of electronic or electromechanical device capable of performing the functions described herein.
  • the I/O subsystem 2016 can include, among other things, an I/O controller, a memory controller, and one or more I/O ports.
  • the processor 2012 and the I/O subsystem 2016 are communicatively coupled to the memory 2014 .
  • the memory 2014 may be embodied as any type of suitable computer memory device, including fixed and/or removable memory devices (e.g., volatile memory such as a form of random access memory or a combination of random access memory and read-only memory, such as memory cards, e.g., SD cards, memory sticks, hard drives, and/or others).
  • the I/O subsystem 2016 is communicatively coupled to a number of hardware and/or software components, including computer program components 1818 such as those shown in FIG. 16 or portions thereof, illuminator(s) 2030 (e.g., face and iris illuminators 1816 ), an imaging subsystem 2032 (which may include separate face and iris imagers 2034 , 2036 ), a motor 2038 , and one or more motion and/or location sensors 2040 .
  • an “imager” or “camera” may refer to any device that is capable of acquiring and recording two-dimensional (2D) or three-dimensional (3D) still or video images of portions of the real-world environment, and may include cameras with one or more fixed camera parameters and/or cameras having one or more variable parameters, fixed-location cameras (such as “stand-off” cameras that are installed in walls or ceilings), and/or mobile cameras (such as cameras that are integrated with consumer electronic devices, such as laptop computers, smart phones, tablet computers, wearable electronic devices, and/or others).
  • the I/O subsystem 2016 is also communicatively coupled to one or more data storage devices 2020 , a communication subsystem 2028 , a user interface subsystem 2042 , and a power supply 2044 (e.g., a battery).
  • the user interface subsystem 2042 may include, for example, hardware or software buttons or actuators, a keypad, a display device, visual cue illuminators, and/or others. It should be understood that each of the foregoing components and/or systems may be integrated with the module 2010 or may be a separate component or system that is in communication with the I/O subsystem 2016 (e.g., over a network or a bus connection).
  • the UI subsystem 2042 includes a push button or similar mechanism for initiating the iris image enrollment process described above.
  • the iris image enrollment process takes place off the module 2010 , e.g., on another device, such as a desktop computing device.
  • iris image enrollment capabilities can be provided at a “central” module or server computer and then propagated to other modules 2010 , e.g., via a communications network. For instance, in access control applications, enrollment may take place at a main entrance to a facility or security command center. Privileges can be determined at the central module or server and then pushed out to or “downloaded” by the individual door lock assemblies in the facility.
  • the data storage device 2020 may include one or more hard drives or other suitable data storage devices (e.g., flash memory, memory cards, memory sticks, and/or others).
  • portions of the system 2000 containing data or stored information (e.g., a database of reference images 1836; iris matching data/rules 2024, e.g., access control logic or business logic for determining when an iris match has occurred and what to do when an iris match does or does not occur; iris imager configuration data/rules 2026, e.g., mapping tables or functions for mapping iris imager tilt angles to motor control parameters; and/or other data) reside at least temporarily in the storage media 2020.
  • Portions of the system 2000 may be copied to the memory 2014 during operation of the module 2010 , for faster processing or other reasons.
  • the communication subsystem 2028 communicatively couples the module 2010 to one or more other devices, systems, or communication networks, e.g., a local area network, wide area network, personal cloud, enterprise cloud, public cloud, and/or the Internet, for example.
  • the communication subsystem 2028 may include a databus, datalink, one or more wired or wireless network interface software, firmware, or hardware, for example, as may be needed pursuant to the specifications and/or design of the particular embodiment of the module 2010 .
  • the iris biometric-controlled mechanism 2050 , the other device(s)/system(s) 2062 , and the server computing device 2070 each may be embodied as any suitable type of computing device, electronic device, or electromechanical device capable of performing the functions described herein, such as any of the aforementioned types of devices or other electronic devices.
  • the server computing device 2070 may operate a “back end” portion of the iris biometric computer program components 1818 , by storing the reference images 1836 , iris matching data/rules 2024 , and/or iris imager configuration data/rules 2026 , in a data storage device 2080 or by performing other functions of the module 2010 .
  • components of the server computing device 2070 having similar names to components of the module 2010 described above (e.g., processor 2072 , memory 2074 , I/O subsystem 2076 ) may be embodied analogously.
  • the illustrative server computing device 2070 also includes a user interface subsystem 2082 , a communication subsystem 2084 , and an iris image enrollment system 2078 (which may capture and evaluate iris images for enrollment purposes, similar to the iris image enrollment module 1834 described above).
  • each of the mechanisms/devices/systems 2050, 2062 may include components similar to those described above in connection with the module 2010 and/or the server computing device 2070, or may be another type of electronic device (such as a portable electronic device or an embedded system, e.g., a vehicle infotainment system or smart appliance system).
  • the iris biometric-controlled mechanism 2050 includes one or more processors 2052 , memory 2054 , and an I/O subsystem 2056 (analogous to the processor 2012 , memory 2014 , and I/O subsystem 2016 ), an on-board power supply 2058 (e.g., a battery), and an access control module 1832 (e.g., to perform access control logic in response to an iris match determination made by the module 2010 ).
  • the system 2000 may include other components, sub-components, and devices not illustrated in FIG. 18 for clarity of the description. In general, the components of the system 2000 are communicatively coupled as shown in FIG. 18 by one or more electronic communication links 2048, e.g., signal paths, which may be embodied as any type of wired or wireless signal paths capable of facilitating communication between the respective devices and components, including direct connections, public and/or private network connections (e.g., Ethernet, Internet, etc.), or a combination thereof, and including short range (e.g., Near Field Communication) and longer range (e.g., Wi-Fi or cellular) wireless communication links.
  • an iris scanning system may be installed in a public location co-located with an advertisement.
  • iris images may be collected. These images may be compared to, for example, templates in a historical database. If no match for the iris is found, the hardware/software may collect and store data and other information regarding the iris, and there would be no change in the advertisement. However, if a match is found, the advertisement may be tailored to the individual, if possible, and the tailored advertisement may be displayed while additional subject data is collected. If it is not possible to tailor the advertisement to the individual, the advertisement will not be changed and additional subject data may be collected.
  • a user may opt to enroll their iris image(s), and the user iris images may be enrolled and stored in a database.
  • a software application may request permission to scan the iris of the user. If the user refuses permission, the transaction may be terminated. However, if the user grants permission, the software application may collect the images of the iris of the user and match the images against the database. If the image is rejected, the transaction may be terminated; however, if the image is accepted, the transaction may be completed, as sketched below.
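  • By way of illustration only, this consent-gated flow can be sketched in Python; every name here (request_scan_permission, capture_iris_images, match_against_database, complete_transaction, terminate_transaction) is a hypothetical placeholder, not part of the disclosure:

```python
# Hypothetical sketch of the permission-gated iris transaction flow.
# All methods on `app` are illustrative placeholders.

def run_transaction(app):
    # Ask the user for permission before any biometric capture occurs.
    if not app.request_scan_permission():
        return app.terminate_transaction(reason="permission refused")

    # Permission granted: capture iris images and match against the database.
    images = app.capture_iris_images()
    if app.match_against_database(images):
        return app.complete_transaction()      # image accepted
    return app.terminate_transaction(reason="image rejected")
```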
  • An embodiment of the technologies disclosed herein may include any one or more, and any combination of, the examples described below.
  • a biometric content display device includes a face imager device having a field of view, a camera, an illuminator device including one or more illuminators that emit light into the field of view, an electronic video display device to display a personalized electronic advertising content to a person within the field of view, the electronic advertising content comprising targeted marketing for the person, an input device, memory storing program instructions, and a processor communicatively coupled to the face imager, the camera, the illuminator device, the electronic video display, the input device, and the memory, and further communicatively coupled to a database storing a reference iris image, the processor executing the program instructions to: receive an input signal from the input device detecting the presence of a human face in a capture zone defined at least in part by the field of view, responsive to detection of the human face in the capture zone: align the camera with the iris of the person; send a first control signal to the illuminator device, the first control signal activating a synchronous emission of light by the one or more
  • An example 2 includes the subject matter of example 1, wherein the electronic display device comprises an electronic billboard or marquee.
  • An example 3 includes the subject matter of any of examples 1 or 2, wherein the processor executes the program instructions to determine that the first digital image of the plurality of digital images does not match the reference iris image, store, in the database, the plurality of digital images from the camera, and maintain a display of a current electronic advertisement on the electronic video display device.
  • An example 4 includes the subject matter of any of examples 1, 2, or 3, wherein the face imager device, the camera, the illuminator device, the input device, the memory, and the processor are incorporated into the electronic video display.
  • An example 5 includes the subject matter of any of examples 1, 2, 3, or 4, wherein the illuminator device comprises at least one infrared illuminator and the program instructions operate the illuminator device to illuminate the iris with a pulsing or strobe illumination operation above continuous wave eye safe limits.
  • An example 6 includes the subject matter of any of examples 1, 2, 3, 4, or 5, wherein the electronic video display device comprises a mobile device, the face imager device, the camera, the illuminator device, the input device, the memory, and the processor are incorporated into the mobile device, and the personalized electronic advertising content comprises a layout or advertisement displayed on the mobile device and specific to the person.
  • An example 7 includes the subject matter of any of examples 1, 2, 3, 4, 5, or 6, wherein the face imager device, the camera, the input device, the memory, and the processor are communicatively coupled, via wired or wireless internet connectivity, to a local, remote or cloud database and the local, remote or cloud database stores at least one record of: the determination that the first digital image of the plurality of digital images matches the reference iris image, a demographic characteristic or preference of the person, an action performed by the person, a timestamp for the action, an amount of time that the action was performed, and at least one habit, pattern or interest associated with the person.
  • An example 8 includes the subject matter of any of examples 1, 2, 3, 4, 5, 6, or 7, wherein the personalized electronic advertising content specific to the person on the electronic video display device is terminated after a predetermined time period.
  • an iris biometric content display system comprises an iris biometric recognition module that authenticates a person to display a plurality of electronic content specific to the person, the iris biometric recognition module comprising: a face imager device, an iris imager device comprising a lens, an illuminator device, an electronic video display device to display the plurality of electronic content, memory storing program instructions, and one or more processors communicatively coupled to the face imager device, the iris imager device, the illuminator device, the electronic video display device, and the memory, the one or more processors executing the program instructions to: detect the presence of a human face in a capture zone defined at least in part by a field of view of the face imager device, responsive to detection of the human face in the capture zone, align the lens of the iris imager device with an iris of the person, operate the illuminator device to illuminate the iris, operate the iris imager device to produce a digital image of the iris, compare the digital image to a reference
  • An example 10 includes the subject matter of example 9, wherein the electronic video display comprises an electronic advertisement display, the iris imager device, the illuminator device, and the iris biometric recognition module are incorporated into the electronic advertisement display, and the plurality of content comprises an electronic advertisement specific to the person.
  • An example 11 includes the subject matter of any of examples 9 or 10, wherein the illuminator device comprises at least one infrared illuminator and the one or more processors executing the program instructions operates the illuminator device to illuminate the iris with a pulsing or strobe illumination operation above continuous wave eye safe limits.
  • An example 12 includes the subject matter of any of examples 9, 10, or 11, wherein the electronic video display device comprises a mobile device, the iris imager device, the illuminator device and the iris biometric recognition module are incorporated into a mobile device, and the plurality of content comprises a layout or advertisement displayed on the mobile device and specific to the person.
  • An example 13 includes the subject matter of any of examples 9, 10, 11, or 12, wherein the iris biometric recognition module is communicatively coupled, via wired or wireless internet connectivity, to a local, remote or cloud database and the local, remote or cloud database stores at least one record of: the comparison of the extracted portions of the selected images and the reference image, an action performed by the person, a timestamp for the action, an amount of time that the action was performed, and at least one habit, pattern, or interest associated with the person.
  • An example 14 includes the subject matter of any of examples 9, 10, 11, 12, or 13, wherein the one or more processors execute the program instructions to terminate the display of the plurality of content specific to the person on the electronic video display device after a predetermined time period.
  • a method comprising authenticating a person to display a plurality of electronic content specific to the person via an iris biometric recognition module comprising program instructions stored in memory that cause one or more processors to execute the steps of: detecting the presence of a human face in a capture zone defined at least in part by a field of view of a face imager device, responsive to detection of the human face in the capture zone, aligning a lens of an iris imager device with the iris of the person, operating an illuminator device to illuminate the iris, operating the iris imager device to produce a digital image of the iris, comparing the digital image to a reference iris image, and responsive to a determination that the digital image matches the reference iris image: querying a database for data specific to the person, comparing the data with the plurality of content, and in response to a determination that the data matches the plurality of content, displaying the plurality of content specific to the person on an electronic video display device.
  • An example 16 includes the subject matter of example 15, wherein authenticating the person to display a plurality of electronic content specific to the person via the iris biometric recognition module is executed via an electronic advertisement specific to the person displayed on an electronic advertising display incorporating the iris imager device, the illuminator device and the one or more processors.
  • An example 17 includes the subject matter of any of examples 15 or 16, wherein operating the illuminator device comprises operating an infrared illuminator and operating the infrared illuminator comprises illuminating the iris with a pulsing or strobe illumination operation above continuous wave eye safe limits.
  • An example 18 includes the subject matter of any of examples 15, 16, or 17, wherein authenticating the person to access the object via the iris biometric recognition module is executed via a layout or advertisement displayed on a mobile device incorporating the iris imager device, the illuminator device and the one or more processors.
  • An example 19 includes the subject matter of any of examples 15, 16, 17, or 18, and further comprises accessing via wired or wireless internet connectivity, from a local, remote, or cloud database: the comparison of the extracted portions of the selected images and the reference image, an action performed by the person, a timestamp for the action, an amount of time that action was performed, and at least one habit, pattern, or interest associated with the person.
  • An example 20 includes the subject matter of any of examples 15, 16, 17, 18, or 19, and further comprises terminating the display of the plurality of content specific to the person on the electronic video display device after a predetermined time period.
  • references in the specification to “an embodiment,” etc., indicate that the embodiment described may include a particular feature, structure, or characteristic, but every embodiment may not necessarily include the particular feature, structure, or characteristic. Such phrases are not necessarily referring to the same embodiment. Further, when a particular feature, structure, or characteristic is described in connection with an embodiment, it is believed to be within the knowledge of one skilled in the art to effect such feature, structure, or characteristic in connection with other embodiments whether or not explicitly indicated.
  • Embodiments in accordance with the disclosure may be implemented in hardware, firmware, software, or any combination thereof. Embodiments may also be implemented as instructions stored using one or more machine-readable media, which may be read and executed by one or more processors.
  • a machine-readable medium may include any mechanism for storing or transmitting information in a form readable by a machine (e.g., a computing device or a “virtual machine” running on one or more computing devices).
  • a machine-readable medium may include any suitable form of volatile or non-volatile memory.
  • Modules, data structures, blocks, and the like are referred to as such for ease of discussion, and are not intended to imply that any specific implementation details are required.
  • any of the described modules and/or data structures may be combined or divided into sub-modules, sub-processes or other units of computer code or data as may be required by a particular design or implementation.
  • specific arrangements or orderings of schematic elements may be shown for ease of description. However, the specific ordering or arrangement of such elements is not meant to imply that a particular order or sequence of processing, or separation of processes, is required in all embodiments.
  • schematic elements used to represent instruction blocks or modules may be implemented using any suitable form of machine-readable instruction, and each such instruction may be implemented using any suitable programming language, library, application-programming interface (API), and/or other software development tools or frameworks.
  • schematic elements used to represent data or information may be implemented using any suitable electronic arrangement or data structure.
  • connections, relationships or associations between elements may be simplified or not shown in the drawings so as not to obscure the disclosure.

Abstract

An iris biometric recognition module includes technology for capturing images of an iris of an eye of a person, whether the person is moving or stationary. The iris biometric recognition technology can perform an iris matching procedure for, e.g., identity purposes by querying a database for data related to an identified person, comparing the data with a plurality of content, and, in response to a determination that the data matches at least one piece of the plurality of content, displaying the content specific to the person on a display device.

Description

    CROSS-REFERENCES TO RELATED APPLICATIONS
  • This application is a continuation-in-part of Perna et al. U.S. application Ser. No. 14/509,356, filed Oct. 8, 2014, and entitled “Iris Biometric Recognition Module and Access Control Assembly”, which claims priority to and the benefit of U.S. Provisional Patent Application Ser. No. 61/888,130, filed Oct. 8, 2013; a continuation-in-part of Perna et al. U.S. patent application Ser. No. 14/509,366, filed Oct. 8, 2014, and entitled “Iris Biometric Recognition Module and Access Control Assembly”, which claims priority to and the benefit of U.S. Provisional Patent Application Ser. No. 61/888,130, filed Oct. 8, 2013; and claims priority to and the benefit of U.S. Provisional Patent Application Ser. No. 62/054,413, filed Sep. 24, 2014, and entitled “Collecting and Targeting Marketing Data and Information Based Upon Iris Identification”.
  • BACKGROUND
  • Many existing iris recognition-based biometric devices impose strict requirements on the iris image capture process in order to meet the needs of iris biometric analysis. For example, many existing devices can only utilize images that have a clear, straight-on view of the iris. In order to obtain such images, existing devices typically require the human subject to be stationary and located very near to the iris image capture device.
  • BRIEF DESCRIPTION OF THE DRAWINGS
  • This disclosure is illustrated by way of example and not by way of limitation in the accompanying figures. The figures may, alone or in combination, illustrate one or more embodiments of the disclosure. Elements illustrated in the figures are not necessarily drawn to scale. Reference labels may be repeated among the figures to indicate corresponding or analogous elements.
  • FIG. 1 depicts a simplified block diagram of at least one embodiment of an iris processor for biometric iris matching, including a pre-processor as disclosed herein;
  • FIG. 2 depicts a simplified block diagram of at least one embodiment of the pre-processor of the iris processor of FIG. 1;
  • FIG. 3A depicts a simplified graphical plot illustrating an effect of camera illumination on pupil and iris intensity as disclosed herein;
  • FIG. 3B depicts an illustration of a result of the operation of the pre-processor of FIG. 2;
  • FIG. 3C depicts an illustration of another result of the operation of the pre-processor of FIG. 2, with an alternate image;
  • FIG. 3D depicts a simplified illustration of yet another result of the operation of the pre-processor of FIG. 2, with yet another alternate image;
  • FIG. 4A depicts a simplified flow diagram for at least one embodiment of a method for edge detection, which may be performed by the iris processor of FIG. 1;
  • FIG. 4B shows simplified examples of candidate pupil contour curves as disclosed herein;
  • FIG. 4C depicts a simplified flow diagram for at least one embodiment of a method for corneal distortion correction, which may be performed by the iris processor of FIG. 1;
  • FIG. 4D illustrates a simplified result of correction for foreshortening as disclosed herein;
  • FIG. 5 depicts a simplified block diagram of at least one embodiment of a coding processor as disclosed herein;
  • FIG. 6 depicts a simplified example of at least one embodiment of a multiresolution iris code as disclosed herein;
  • FIG. 7 depicts a simplified block diagram of at least one embodiment of a matching processor as disclosed herein;
  • FIG. 8 depicts a simplified example of at least one embodiment of a process for matching iris codes, which may be performed by the matching processor of FIG. 7;
  • FIG. 9 is a simplified schematic depiction of a coarse-fine algorithm to estimate flow-field of an iris code, as disclosed herein;
  • FIG. 10 is a simplified flow diagram depicting at least one embodiment of a method for estimating flow field between two iris codes, as disclosed herein;
  • FIG. 11 is a simplified flow diagram depicting at least one embodiment of a method for estimating flow field between two iris codes as disclosed herein;
  • FIG. 12 depicts a simplified schematic diagram of at least one embodiment of a computer system for implementing the iris processor of FIG. 1, as disclosed herein;
  • FIG. 13 illustrates at least one embodiment of the iris processor of FIG. 1 in an exemplary operating scenario, as disclosed herein;
  • FIG. 14 is a simplified assembled perspective view of at least one embodiment of an iris biometric recognition module;
  • FIG. 15 is an exploded perspective view of the iris biometric recognition module of FIG. 14;
  • FIG. 16 is a simplified schematic diagram showing components of an iris biometric recognition module and an access control module in an environment of the access control assembly of FIG. 14;
  • FIG. 17 is a simplified flow diagram of at least one embodiment of a method for performing iris biometric recognition-enabled access control as disclosed herein, which may be performed by one or more components of the iris biometric recognition module of FIG. 13;
  • FIG. 18 is a simplified block diagram of at least one embodiment of a system including an iris biometric recognition module as disclosed herein;
  • FIG. 19 is a simplified view of at least one embodiment of an iris biometric recognition enabled access control assembly in an exemplary operating environment (i.e., a mobile device); and
  • FIG. 20 is an exemplary flowchart depicting a method of collecting and targeting marketing data and information based upon iris identification.
  • DETAILED DESCRIPTION OF THE DRAWINGS
  • While the concepts of the present disclosure are susceptible to various modifications and alternative forms, specific embodiments thereof are shown by way of example in the drawings and are described in detail below. It should be understood that there is no intent to limit the concepts of the present disclosure to the particular forms disclosed. On the contrary, the intent is to cover all modifications, equivalents, and alternatives consistent with the present disclosure and the appended claims.
  • Referring now to FIGS. 1-13, FIGS. 1-13 relate to subject matter that is shown and described in U.S. Utility patent application Ser. No. 14/100,615, filed Dec. 9, 2013, and U.S. Utility application Ser. Nos. 14/509,356 and 14/509,366, both filed Oct. 8, 2014.
  • FIG. 1 depicts a block diagram of an iris processor 100 for biometric iris matching in accordance with exemplary embodiments of the present invention. The iris processor 100 comprises a pre-processor 102, a coding processor 104 and a matching processor 106. The iris processor 100 receives images as input, for example, input image 101, and outputs a matched iris 108 from a remote or local database. Those of ordinary skill in the art would recognize that the database may be accessed as a “cloud” service, directly through an internet connection, or the like. The pre-processor 102, the coding processor 104 and the matching processor 106 may execute on a single device (e.g., within a software application running on, for example, a mobile device, having captured the images via a camera and/or means of illumination integrated into the mobile device), or on different devices, servers, cloud services or the like, as indicated by the dashed outline of the iris processor 100. The iris processor 100 may be modular and each processor may be implemented, e.g., on a single device, on multiple devices, or in the cloud as a service. Any of the components, e.g., the pre-processor 102, the coding processor 104, and the matching processor 106, may be implemented or used independently of one another.
  • According to exemplary embodiments of the present invention, the input image 101 is an infrared image, and is captured by an infrared capture device (not shown in FIG. 1), coupled to the iris processor 100. The infrared capture device may be any type of infrared capture device known to those of ordinary skill in the art. In other instances, the input image 101 is a red, green, blue (RGB) image, or the like. The input image 101 contains an eye with an at least partially visible iris and pupil and the iris processor 100 attempts to match that eye with an iris of an eye image in a local or remote database of eye images. According to exemplary embodiments, irises are matched based on Hamming distances between two coded iris images.
  • Initially, the input image 101 is processed by the pre-processor 102. The pre-processor 102 segments and normalizes the iris in the input image 101, where input image 101 may have variable iris/pupil and iris/sclera contrast, small eyelid openings, and non-frontal iris presentations. The result of the pre-processor 102 is a modified iris image with clearly delineated iris boundaries and synthesized quasi-frontal presentation. For example, if the iris in the input image 101 is rotated towards the left, right, up or down, the pre-processor 102 will synthesize an iris on the input image 101 as if it was positioned directly frontally. Similarly, a frontally positioned pupil will be synthesized on the skewed or rotated pupil of the input image 101.
  • The coding processor 104 analyzes and encodes iris information from the iris image generated by the pre-processor 102 at a range of spatial scales so that structural iris information contained in the input image 101 of varying resolution, quality, and state of focus can be robustly represented. The information content of the resulting code will vary depending on the characteristics of input image 101. The code generated by the coding processor 104 representing the input image 101 allows spatial interpolation to facilitate iris code alignment by the matching processor 106.
  • The output code from the coding processor 104 is coupled to the matching processor 106. The matching processor 106 incorporates constrained active alignment of iris structure information between stored iris images and captured iris codes generated from the input image 101 to compensate for limitations in iris image normalization by the pre-processor 102. The matching processor 106 performs alignment by performing local shifting or warping of the code to match the generated code with a stored iris code template based on estimated residual distortion of the code generated by the coding processor 104. According to some embodiments, a “barrel shift” algorithm is employed to perform the alignment. Accordingly, structural correspondences are registered and the matching processor 106 compares the aligned codes to determine whether a match exists. If a match is found, the matching processor returns matched iris data 108.
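  • As a non-authoritative illustration of the barrel shift alignment idea, the following is a minimal sketch assuming binary iris codes stored as 2-D boolean arrays (radial x angular) with valid-data masks; it shows the generic technique, not the patented matcher:

```python
import numpy as np

def barrel_shift_distance(code_a, code_b, mask_a, mask_b, max_shift=8):
    """Fractional Hamming distance between two binary iris codes,
    minimized over circular ("barrel") shifts along the angular axis
    to compensate for residual rotation between captures."""
    best = 1.0
    for s in range(-max_shift, max_shift + 1):
        shifted = np.roll(code_b, s, axis=1)
        valid = mask_a & np.roll(mask_b, s, axis=1)
        n = valid.sum()
        if n == 0:
            continue
        hd = np.count_nonzero(code_a[valid] != shifted[valid]) / n
        best = min(best, hd)
    return best  # small values indicate a likely match
```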
  • The matched iris data 108 may be used in many instances, for example, to authenticate a user in order for the user to gain access to a secure item (e.g., a safe, safety deposit box, computer, etc.), to authenticate a user to access applications or wireless communication within a computing device, such as a mobile device (e.g., authenticating a user in order to transmit instructions from the user's mobile device to an automated teller machine (ATM) in order to complete a financial transaction), to authorize financial transactions, and/or to collect, analyze and display an identity of a user in order to deliver targeted marketing to the user, as described in detail below. The pre-processor 102 may be an application executing on any device, for example, a mobile device, such as a mobile phone, camera, tablet, forward or rear facing camera integrated into a mobile phone or tablet, a display or marquee, or the like. The pre-processor 102 on the device may capture an image of a user's eye using the camera of the device, perform the pre-processing steps on the device, and then transmit a bundled and encrypted request to the coding processor 104, which may be accessed via a cloud service on a remote server. In other embodiments, the application may be part of or associated with a display or marquee, which may comprise the coding processor 104, and the iris coding is performed at the display or marquee.
  • The iris processor 100 may be used, for example, for collecting and targeting of marketing data based upon iris identification. For example, a customer in a grocery store can be detected and their iris can be stored in a local or remote database. If the customer enters the grocery store again, or an associated store with which the iris information is shared, the store can build a profile of the customer, the items they most often purchase, peruse, or the like by using iris detection and gaze tracking. These marketing profiles can be used by the store itself for product placement, or may be used by third party marketing services as marketing data. In other embodiments, the customer profile can be matched with identifying information, and when the customer uses a website affiliated with the store, or a website, which has access to the iris data, the website identifies the customer and offers targeted marketing to the customer.
  • In an exemplary embodiment, the iris processor 100, as well as the iris biometric recognition module 1514, described in more detail below, may be used to collect iris biometric authentication data from, as well as display targeted marketing data to, specific users. In some embodiments, the iris biometric recognition module 1514 may be positioned in a public area (e.g., as an electronic billboard or marquee), such as a bus, a subway stop, or any other public area, possibly incorporated into a camera or video advertisement display, as non-limiting examples, and may collect biometric iris data, possibly to be used as templates, from each subject (e.g., a passerby or a subject in the vicinity of the camera or video advertisement display) from which the iris biometric recognition module 1514 is able to collect such data. These biometric identification templates may be stored in a local, remote and/or cloud database, in association with the subject's identity, if the identity of the person is known. In addition, the iris biometric recognition module 1514 and/or hardware (e.g., camera, video advertisement display, illuminator, etc.) and/or software coupled to the iris biometric recognition module 1514 may identify the location of the subject and the time that the subject was recognized, and store this data in association with the biometric identification of the subject.
  • Similarly, in embodiments where the iris biometric recognition module 1514 is integrated into a mobile device, the iris biometric recognition module 1514 and/or the coupled hardware and/or software may identify specific applications accessed by a user, a time when the mobile device was unlocked and/or when the application was accessed, or any other information that would track habits of a user.
  • Additional data may be associated with the subject or user as well, possibly including, as non-limiting examples, personal habits (e.g., person rides the bus or subway, person takes the subway every morning or evening at a particular time, person takes the bus every morning or evening at a particular time), or hobbies (e.g., subject enjoys running, jogging or walking). This additional data may be stored in the same database as the biometric iris identification templates or in a separate database.
  • In some embodiments, the hardware coupled to the iris biometric recognition module 1514 may include one or more electronic video advertisement displays, possibly set up in public places such as airports, malls or shopping centers, stadiums, subways, bus terminals, casinos, amusement parks, childcare facilities, cruise ships, detoxification centers, drug testing collection centers, entertainment facilities, health clubs, gyms, spas, hospitals, hotels, motels, medical labs or facilities, on-site or off-site testing facilities, pharmacies, ski lifts, sporting events or centers, tradeshows, conferences, conventions, transit centers, or any other public place(s). In some embodiments, the camera and illuminator, coupled to the iris biometric recognition module 1514 and described below, may be integrated into the electronic video advertisement display. In other embodiments, the camera and/or illuminator may be mounted on the electronic video advertisement display.
  • Upon capturing the iris of a subject, the system attempts to match the iris of the subject with iris identification templates in the database. Data associated with the subject may be accessed upon matching of the iris with an iris identification template, and advertisements appropriate for the subject may be displayed on the electronic video display. For example, if it is determined that a subject or group of subjects, each of whom takes the subway each morning at 7 AM are all interested in running or jogging, the electronic video display may include advertisements for running shoes at that particular time. Additional embodiments may exist where an electronic advertisement specific to the subject is displayed on any screen, including airplane screens, gas station screens, and/or any other public or private screen.
  • The system may additionally include logic to determine different categories of subjects that may be in viewing distance from an electronic video display during different periods of time throughout an hour, day, month, year, or any other suitable period of time. In some exemplary embodiments, the logic may select different advertisements from a selection or database of advertisements based on the category of subjects in viewing distance from the electronic display at a particular time. In other exemplary embodiments, different versions of the same advertisement (e.g., an apparel store advertisement with different apparel on the different advertisements based on different categories of subjects) may be selected based on the category of subjects in viewing distance from the electronic display at a particular time.
  • In embodiments where the iris biometric recognition module 1514 is integrated into or coupled to a mobile device and the user is accessing the mobile device, software modules within the device may customize the user experience to each recognized user. So, for example, if a particular user always unlocks the mobile device (possibly at a certain time of day), and that user is authenticated via the biometric recognition module 1514, the user experience may be customized to that user. Likewise, specific applications accessed by that user may be provided and/or may provide customized advertising (e.g., in-app advertising specific to the user, such as running shoe advertisements if the user enjoys running or jogging). In some exemplary embodiments, the user experience may be customized in a first fashion for a first user, for example, a parent may be allowed access to particular applications, and the user experience may be customized in a second fashion for a second user, for example, a child may have access to different applications or a subset of applications.
  • In some embodiments, the iris biometric recognition module 1514 may be used to determine the presence of a subject at a specific time, verify the presence of the person in close proximity to the video display device, determine which advertisements receive the most attention from the user, and the like. Analysis of this collected data may be used to guide marketing decisions.
  • In summary, as seen in FIG. 20, an iris scanning system may be installed in a public location, for example co-located with an advertisement (Block 2200), or on, for example, a mobile device. When an individual looks at the advertisement or other content on the mobile device (Block 2202), iris images may be collected (Block 2204). These images may be compared to a historical database (e.g., templates in a historical database) (Block 2206). If no match for the iris is found, the hardware/software may collect and store data and/or other information, and there would be no change in the advertisement (Block 2208). However, if a match is found, the advertisement may be tailored to the individual (Block 2210), if possible, and the tailored advertisement may be displayed and additional subject data and/or information may be collected (Block 2212). If it is not possible to tailor the advertisement to the individual, the advertisement will not be changed and additional subject data and/or information for the individual may be collected (Block 2208).
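  • A minimal Python sketch of this FIG. 20 flow follows; capture_iris_images, find_template_match, tailor_advertisement, and collect_subject_data are hypothetical placeholder functions, not names from the disclosure:

```python
# Illustrative sketch of the FIG. 20 decision flow (Blocks 2200-2212).
# All helper functions are assumed placeholders.

def on_viewer_detected(display, database):
    images = capture_iris_images()                        # Block 2204
    template = find_template_match(images, database)      # Block 2206
    if template is None:
        collect_subject_data(images, database)            # Block 2208: store data;
        return                                            # advertisement unchanged
    tailored = tailor_advertisement(template)             # Block 2210
    if tailored is None:
        collect_subject_data(images, database, template)  # Block 2208
        return
    display.show(tailored)                                # Block 2212: show the
    collect_subject_data(images, database, template)      # tailored ad, collect data
```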
  • In the medical field, the iris processor 100 may be used to determine whether a person accessing particular medical resources, such as medicine, devices, or the like, is permitted to access these resources. The iris processor 100 can be coupled with a recording device that captures video of those accessing a medicine cabinet, for example, and records whether they are authorized to take medical resources from the cabinet.
  • The iris processor 100 may be used as a security system and authentication device by a small company with limited resources. By simply coupling a camera or other image capturing device to an electro/mechanical locking system, the company can limit access to doors, offices, vaults, or the like, to only authorized persons. The iris codes produced by the coding processor 104 can be used to authorize, for example, airline boarding passes. On purchase of a travel (airline, train, bus, etc.) ticket, the coding processor 104 generates an iris code of the purchaser and saves the iris code for imprinting on the boarding pass. When a traveler is boarding an airplane, bus or train, the carrier may invoke the matching processor 106 to match the iris code on the boarding pass with the iris code produced by the traveler presenting the boarding pass. If there is a match, the traveler is allowed to board the bus, train or airplane.
  • In summary, the iris processor may be used in any context in which the user needs to be authenticated, including any situation in which the user wants physical or electronic access to a device or data accessible via the device.
  • FIG. 2 depicts a block diagram of the pre-processor of the iris processor 100 in accordance with exemplary embodiments of the present invention. The pre-processor receives the input image 101 and outputs a rectified iris image 220. The rectified iris image 220 corrects for various nonconformities of uncontrolled capture scenarios, such as ambient illumination conditions, varied illumination geometries, reduced eyelid opening area, presentation angle (obliquity), or the like.
  • The pre-processor 200 comprises a segmentation module 202 and a correction module 204. The segmentation module 202 further comprises a pupil segmentation module 206, an iris segmentation module 208 and an edge detection module 209. The segmentation module 202 corrects an input image for low-contrast pupil and iris boundaries. The image produced by the segmentation module 202 is then coupled to the correction module 204 for further correction. The correction module 204 comprises a tilt correction module 210 and a corneal correction module 212. The details of the segmentation module 202 are described below.
  • FIG. 3A illustrates that varying illumination geometry produces varying pupil appearance. FIG. 3A illustrates measurement of pupil-iris intensity difference as a function of distance, e.g., 1 and 2 meters, pupil size, e.g., 2.4 mm and 4.0 mm, and camera/illuminator distance, e.g., 6 to 16 cm. As the camera/illuminator distance increases, the pupil-iris intensity difference decreases. The contrast of the pupil varies greatly as a function of distance between camera and subject, as well as a function of illuminator geometry and pupil diameter. The variation with distance is due to the fact that the angular distance between the illuminator and camera axes is greater at short range (e.g., 1 m) than at longer distances. As the illuminator and camera axes get closer, more light that is reflected from the retina back out through the pupil is captured by the camera lens. This causes red eye in ordinary photographs and bright pupils in infrared photography. An exemplary illuminator is described in U.S. Pat. No. 7,542,628 to Matey, entitled “Method and Apparatus for Providing Strobed Image Capture”, filed on Jan. 19, 2006, and U.S. Pat. No. 7,657,127 to Matey, entitled “Method and Apparatus for Providing Strobed Image Capture”, filed on Apr. 24, 2009, each of which is incorporated herein by this reference in its entirety.
  • The segmentation module 202 and the correction module 204 may be used, for example, in the medical field, in targeted marketing, customer tracking in a store, or the like. For example, pupil and iris insertion may be performed by the pre-processor 102, as described further with respect to FIGS. 2 and 3A-3D, in the medical field as a diagnostic tool for diagnosing diseases that a person might have based on their iris profiles.
  • FIG. 3B illustrates an example of iris and pupil boundary matching in accordance with exemplary embodiments of the present invention. According to some embodiments, iris diameters are normalized by the iris segmentation module 208. Size normalization is performed using a range estimate derived from an autofocus setting of the camera taking the image. The image 300 shows the pupil boundary 304 calculated by the pupil segmentation module 206. The pupil segmentation module 206 then inserts an artificial dark pupil in the pupil boundary 304 in image 300. Image 300 is then coupled to the iris segmentation module 208, which calculates the iris boundary. FIGS. 3C and 3D illustrate examples of inserted artificial pupils and iris boundaries. In FIG. 3C, input image 320 is coupled to the pre-processor 200. The input image 320 is then segmented by pupil segmentation module 206 to calculate a pupil boundary region 326. The pupil segmentation module then inserts an artificial black colored pupil in the pupil boundary region 326. Additionally, oblique irises and pupils are warped to be circular. The insertion of an artificial pupil in the pupil boundary region 326 may be used, for example, to remove red-eye effects in an image captured by a camera. The segmentation module 202 can be used to segment the pupil and iris areas, and the pupils may be red-eye corrected by insertion of the artificial pupil. This process of segmentation and warping is described in more detail below.
  • FIG. 3D shows a similar process but on a downward facing iris in image 350. The pupil boundary 356 is still detected despite being occluded by the eyelid in image 352. The pupil and iris are both warped to form circular regions to aid in segmentation. The pupil segmentation module 206 inserts a black disk/artificial pupil in the image 352 and couples the image 352 to the iris segmentation module 208. The iris segmentation module 208 determines an iris boundary 358. Ultimately, the iris and pupil boundaries are corrected for various lighting conditions and presented in image 354, where region 360 can be seen with the artificial pupil. According to some embodiments, the artificial pupil need not necessarily be black and may be another suitable color, based on compatibility with third party iris recognition software.
  • The pupil boundaries, for example, 304, 326 and 356, and the iris boundaries (iris/sclera boundary areas), for example, 306, 328 and 358, are calculated using a Hough transform, according to one embodiment. The pupil segmentation module 206 and the iris segmentation module 208 employ edge detection using the edge detection module 209 to generate edge maps, an approach that works for varying scales of grayscale pupils, even in instances with low edge contrast. Once the pupil segmentation module 206 determines the segmented pupil area (and therefore, the pupil contour) and the pupil and iris have been warped to form circular regions, the segmented pupil area is replaced with a black or dark disk to simulate the appearance of a dark pupil.
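  • As a rough illustration of this kind of circular-boundary segmentation, the following sketch uses OpenCV's Hough circle transform; all parameter values are assumptions for illustration, not values from the disclosure:

```python
import cv2
import numpy as np

def segment_pupil(gray):
    """Locate a pupil candidate with a Hough circle transform and insert
    an artificial dark pupil disk, as the segmentation module does."""
    blurred = cv2.medianBlur(gray, 5)
    circles = cv2.HoughCircles(
        blurred, cv2.HOUGH_GRADIENT, dp=1, minDist=gray.shape[0] // 2,
        param1=100, param2=30, minRadius=10, maxRadius=80)
    if circles is None:
        return None, gray
    x, y, r = np.round(circles[0, 0]).astype(int)
    out = gray.copy()
    cv2.circle(out, (x, y), r, 0, thickness=-1)  # black disk = artificial pupil
    return (x, y, r), out
```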
  • FIG. 4A depicts a flow diagram for a method 400 for edge detection in accordance with one embodiment of the present invention. The method 400 is an exemplary illustration of the operation of the edge detection module 209 used to detect pupil and iris boundaries.
  • The method begins at step 402 and proceeds to step 404. At step 404, an edge map is generated from an image of an eye, for example, input image 101. An exemplary edge map for a brightly illuminated iris image is shown in FIG. 4B, image 420. Image 422 is an edge map for an iris image that was not as brightly illuminated, i.e., an indistinct pupil whose edges are not as clearly visible as those in image 420.
  • At step 406, candidate pupil contours are constructed for the given edge map. Step 406 consists of sub-steps 406A and 406B. At sub-step 406A, a first candidate pupil contour is created from a best fitting circle, as shown in FIG. 4B, image 420. For example, a Hough transform or RANSAC (random sample consensus) method can be used to find the circle that has the greatest level of support in the edge map in the sense that the largest fraction of circle points for that circle coincide with edge points. At sub-step 406B, a second candidate pupil contour is constructed from a best inscribed circle, as shown in FIG. 4B, image 422. Those of ordinary skill in the art would recognize that an inscribed circle is a circle that can be drawn in an area/region of the edge map so that no edge points (or no more than a specified small number of edge points) lie within the circle. According to one embodiment, the best inscribed circle is the largest such inscribed circle that can be found in the area/region of the pupil. The method then proceeds to step 408, where the method 400 determines the best matching candidate pupil contour from the first and second candidate pupil matching contours for the edge map. According to one embodiment, the best match is determined by assessing a level of support for the best fitting circle and selecting the best fitting circle as the best match if this level of support is above a threshold value. The best inscribed circle is selected as the best match if the level of support for the best fitting circle is below a threshold value.
  • According to one embodiment, an automatic process based on how well the best fit contour (circle) matches the edge contour in the edge contour map is used to decide which candidate contour to choose. For example, for the best supported circle described above, a subset of edge points can be selected that is limited to those edge points whose angular orientation is consistent with that edge point being a part of the candidate circle. In other words, only edge points whose direction is approximately perpendicular to the direction from the estimated center of the candidate circle are included. This process eliminates from consideration those edge points that may accidentally fall at the correct position to be part of the circle but that do not correspond to the actual circle contour. If the proportion of such selected edge points is greater than some specified fraction (e.g., 20%) of the number of points comprising the circle, then the level of support for that circle is deemed to be sufficient and the best fitting circle is selected. If the level of support by the selected edge points is less than this threshold, then the best fitting circle is deemed to have insufficient support and the best inscribed circle is selected instead. Generally speaking, the best fit candidate contour will provide accurate pupil segmentation in the bright pupil image, as shown in FIG. 4B, image 420, where the bright colored eye edge map is overlaid with the best-inscribed circle 430 and the best fitting circle 432. The method then terminates at step 412 when a best matching candidate pupil contour is found.
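  • The support test described above might be sketched as follows (illustrative only; the orientation test and the 20% threshold come from the text, while the tolerances and data layout are assumptions):

```python
import numpy as np

def circle_support(edge_pts, edge_dirs, center, radius,
                   dist_tol=2.0, angle_tol=np.deg2rad(20)):
    """Fraction of a candidate circle supported by orientation-consistent
    edge points: an edge point counts only if it lies near the circle and
    its gradient direction is approximately radial (i.e., the edge itself
    is perpendicular to the direction from the circle center)."""
    d = edge_pts - center                        # vectors from circle center
    dist = np.linalg.norm(d, axis=1)
    near = np.abs(dist - radius) < dist_tol      # close to the candidate circle
    radial = np.arctan2(d[:, 1], d[:, 0])
    diff = np.abs(np.angle(np.exp(1j * (edge_dirs - radial))))
    diff = np.minimum(diff, np.pi - diff)        # gradient sign is irrelevant
    consistent = near & (diff < angle_tol)
    return consistent.sum() / (2 * np.pi * radius)  # support per circle point

# Selection rule from the text: keep the best-fitting circle when its
# support exceeds a threshold (e.g., 0.2); otherwise fall back to the
# best inscribed circle.
```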
  • In some instances, iris images may be captured over a range of oblique viewing conditions, for example, where gaze deviation with nasal gaze angles ranges from 0 to 40 degrees, as shown in FIG. 3D. The tilt correction module 210 rectifies the images for this tilt and generates a tilt corrected image. According to one embodiment, a tilt-corrected image may be generated by estimating or determining the magnitude and direction/angle of tilt, and then applying a geometric transformation to the iris image to compensate for the oblique viewing angle. In the case where the iris is a flat disk, the simplest form of this transformation is a stretching of the image in the direction of the tilt to compensate for the foreshortening caused by the angle between the iris and the image plane. Such a non-isotropic stretching is mathematically represented as an affine transformation. A more accurate version of this geometric de-tilting replaces the affine transformation with a projective transformation which better represents the image representation of a pattern on a flat, tilted surface.
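  • A minimal sketch of the affine de-tilting step follows, assuming the tilt magnitude and direction have already been estimated; the parameterization (and stretching about the image origin, for simplicity) is an assumption, and the projective variant mentioned above would be the more accurate choice:

```python
import cv2
import numpy as np

def detilt(image, tilt_deg, tilt_dir_deg):
    """Approximate de-tilting: stretch the image by 1/cos(tilt) along the
    estimated tilt direction to undo foreshortening of the iris disk."""
    c = np.cos(np.deg2rad(tilt_dir_deg))
    s = np.sin(np.deg2rad(tilt_dir_deg))
    stretch = 1.0 / np.cos(np.deg2rad(tilt_deg))
    R = np.array([[c, s], [-s, c]])           # rotate tilt axis onto x-axis
    S = np.diag([stretch, 1.0])               # non-isotropic stretch along x
    A = R.T @ S @ R                           # net: stretch along tilt direction
    M = np.hstack([A, np.zeros((2, 1))]).astype(np.float32)
    h, w = image.shape[:2]
    return cv2.warpAffine(image, M, (w, h))
```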
  • The correction module 204 has several uses independent of the other components of the iris processor 100. For example, the correction module 204 may be used to detect a person's gaze, or to track a person's gaze continuously by capturing one or more frames of a person's eyes. The tilt correction module 210 may, for example, be used to continuously track a user's gaze on a mobile device and scroll a document, perform a swipe or the like. This tilt detection can be used, for example, independently of the matching processor 106 described in FIG. 1 to enable or disable the display of a mobile device.
  • In some embodiments, the correction module 204 corrects the input image 101 prior to the segmentation module establishing artificial pupil discs on the input image 101. In some instances, tilt correction may still show distortions such as the apparent eccentric pupil compression of the nasal portion of the iris, causing difficulty in biometrically matching the iris with a stored iris image. The distortion is caused by the optical effect of the cornea and anterior chamber of the human eye through which the iris is imaged. These two structures have similar refractive indexes (1.336 for the aqueous humor that fills the anterior chamber and 1.376 for the cornea) so that together their optical effect is approximately that of a single water-filled plano-convex lens in contact with the iris. Viewed from an oblique angle such a lens will produce asymmetric distortion in the iris image, compressing the image in some areas and expanding it in others. The tilt corrected image generated by the tilt correction module 210 is coupled to the corneal correction module 212, which corrects for the above described corneal distortion.
  • FIG. 4C depicts a flow diagram for a method 440 for corneal distortion correction in accordance with exemplary embodiments of the present invention. The method 440 is an exemplary illustration of the operation of the correction module 204. The method begins at step 442 and proceeds to step 444. At step 444, the tilt correction module 210 estimates the angle of tilt of the iris with respect to the camera orientation. The tilt can be estimated roughly by finding the pupil center and measuring the distance between that center and the bright reflection in the cornea caused by the near infra-red illuminator used in iris imaging. Other methods of tilt estimation known to those of ordinary skill in the art may also be used. Indeed, any method of tilt estimation may be substituted herein.
  • The method proceeds to step 446, where the image is corrected for the perspective distortion, i.e., the foreshortening of the iris that occurs. The effect of foreshortening can be approximated as a simple compression of the captured image in the direction of tilt. This effect can therefore be compensated for by simply stretching the image in the direction derived from the tilt estimation step. A more accurate correction can also be performed by using a projective transformation to more precisely capture the foreshortening effect.
  • Finally, at step 448, the method 440 corrects for effects of optical distortion due to viewing through the tilted cornea. According to one embodiment, approximate correction for the optical distortion discussed above can be achieved by measuring and correcting the effects of pupil eccentricity and pupil elongation. The method terminates at step 450.
  • As seen in image 460 in FIG. 4D, after foreshortening correction based on tilt estimation, the pupil still appears shifted to the left with respect to the center of the iris and the pupil appears elongated in the horizontal direction. These effects are caused by the optical effects of the cornea. The corneal correction module 212 corrects for these distortions without modeling the optical elements that produced them by non-linearly warping the iris area/region to force the iris contour 466 and pupil contour 468 to become concentric circles. The corneal correction module 212 creates this nonlinear warping function by defining a set of spokes 470 that connect points on the non-circular pupil contour 468 to corresponding points on the non-circular iris/sclera contour 466 and mapping each spoke of the spokes 470 to a position connecting a synthetic circular pupil contour 472 to a concentric circular iris/sclera contour 474. The described transformation is then applied to the underlying image 460. The result of this mapping (with appropriate interpolation) is shown in image 476. After the pupil and iris areas/regions have been shifted to be in concentric circles, the coding process can be more accurately performed with better matching results.
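  • The spoke mapping can be illustrated with a small resampling sketch (a simplification, assuming the image has already been unwrapped so that rows index radius and columns index angle, with per-angle pupil and iris contour radii available; none of these names come from the disclosure):

```python
import numpy as np

def spoke_warp(polar_img, pupil_r, iris_r, out_radii=64):
    """Nonlinear 'spoke' warp: for each angular column, resample the band
    between the measured pupil contour and iris/sclera contour onto a
    fixed set of radii, forcing both contours to concentric circles."""
    n_rad, n_ang = polar_img.shape
    out = np.zeros((out_radii, n_ang), dtype=float)
    for a in range(n_ang):
        # Each spoke runs from the pupil contour to the iris contour.
        src = np.linspace(pupil_r[a], iris_r[a], out_radii)
        out[:, a] = np.interp(src, np.arange(n_rad),
                              polar_img[:, a].astype(float))
    return out
```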
  • After such a corrected image is constructed as described above, iris coding and matching can be performed using any desired iris biometric algorithm designed to be applied to iris images captured under standard controlled conditions. For example, the classic method of Daugman (Daugman, J., “High confidence visual recognition of persons by a test of statistical independence”, IEEE Transactions on Pattern Analysis and Machine Intelligence, 15(11), pp. 1148-1161 (1993)) can be applied. However, methods developed by others can also be used, including but not limited to those of Monro (D. M. Monro and D. Zhang, “An Effective Human Iris Code with Low Complexity”, Proc. IEEE International Conference on Image Processing, vol. 3, pp. 277-280, September 2005) and Tan (Tan et al., “Efficient Iris Recognition by Characterizing Key Local Variations”, IEEE Transactions on Image Processing, vol. 13, no. 6, June 2004).
  • FIG. 5 depicts a block diagram of a coding processor 500 in accordance with exemplary embodiments of the present invention. The coding processor 500 comprises a coordinate module 502 and an extraction module 506. The coordinate module 502 constructs an invariant coordinate system for an invariant coordinate system image representation that allows iris information extracted from varying iris images to be brought into register, so that corresponding spatial information can be compared. The extraction module 506 extracts information from the iris image for supporting a strong rejection of the hypothesis that two eye images presented represent statistically independent patterns. The coding processor 500 prepares the segmented and corrected iris image 220 for accurate matching with other iris images and allows unconstrained iris capture applications. For example, image size and focus may vary with distance, in addition to individual iris structure variations and variation with illumination wavelength of spatial information content of an iris structure. Generally, iris coding is based on angular frequencies between about 15 and 40 cycles/2pi or 2.5 and 6 pixels per cycle, where according to one embodiment, the present application achieves robust matching based on the codes generated by the coding processor 500 down to approximately 40 pixels per iris diameter.
  • According to one embodiment, the coding processor 500 uses a variant of Daugman's local phase representation, which encompasses a multi-resolution coding approach rather than choosing a single scale of analysis. Lower frequency components remain available in lower resolution images and are less prone to loss in defocused or otherwise degraded images. In one embodiment, the variant of Daugman's local phase representation allows for dense coding that is useful when dealing with iris images in which significant occlusion may occur. Although the robust segmentation and rectification process described above generates corrected iris images that can be used with a variety of iris coding and matching algorithms, there are advantages in some situations to retaining properties of standard algorithms. One advantage of the Daugman type phase coding approach is that it generates a code that represents all available parts of the iris images. This is in contrast to an approach that uses sparse local features that might be occluded or otherwise unavailable in a particular image to be matched. Further, the use of a multiresolution phase approach preserves the possibility of achieving code-level compatibility with existing phase-based representations. In addition to containing multi-scale information, the code that is created can incorporate additional information to facilitate estimation of iris code alignment and spatial interpolation of local structure information prior to comparison.
  • As shown in FIG. 5, the coding processor 500 comprises the coordinate module 502. The coordinate module 502 transforms the rectified iris image 220 into a polar iris image 504. In this polar iris image 504 the pupil boundary appears at the top (notice the specular reflection of a biometric scanner illuminator column) and the iris-sclera boundary area appears at the bottom. The angular dimension runs clockwise from 3 o'clock at the left of the image. Proceeding from left to right, the lower and upper eyelids can be seen. Note that in image 504 the eyelashes extend from the upper eyelid all the way into the pupil.
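  • For illustration, a basic polar unwrapping of the iris annulus might look like this (a sketch under the assumption of circular, concentric boundaries, with nearest-neighbor sampling for brevity; a production implementation would interpolate and follow the measured contours):

```python
import numpy as np

def to_polar(image, center, r_min, r_max, n_radii=64, n_angles=200):
    """Unwrap the annulus between r_min (pupil boundary, top row) and
    r_max (iris-sclera boundary, bottom row); columns sweep the angle
    clockwise starting at 3 o'clock, as in the description of image 504."""
    thetas = np.linspace(0.0, 2.0 * np.pi, n_angles, endpoint=False)
    radii = np.linspace(r_min, r_max, n_radii)
    xs = center[0] + np.outer(radii, np.cos(thetas))
    ys = center[1] + np.outer(radii, np.sin(thetas))
    xs = np.clip(np.round(xs).astype(int), 0, image.shape[1] - 1)
    ys = np.clip(np.round(ys).astype(int), 0, image.shape[0] - 1)
    return image[ys, xs]   # shape: (n_radii, n_angles)
```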
  • After converting the rectified iris image into a polar coordinate image, the image 504 is coupled to the extraction module 506, which filters and subsamples the polar iris image 504 to produce a multi-resolution iris code representation 520, an example of which is shown in FIG. 6. According to an exemplary embodiment, the image 504 is passed through a series of bandpass filters to produce a set of filtered images. FIG. 6 shows an example of a polar iris image 620 being filtered by filters 621 (Filters 1 . . . 5) and producing an iris code 622 comprising filtered bands 600, 602, 604, 606 and 608, ranging from high-frequency to low-frequency bands. The five bands shown correspond to Gabor filter (a linear filter used for harmonic analysis, wavelet decompositions, and edge detection) carrier wavelengths of 6, 8, 12, 16, and 24 pixels with respect to a polar image sampled at 200 pixels around the iris. Therefore, the frequencies correspond approximately to angular spatial frequencies of 33, 25, 16, 12, and 8 cycles per 2pi.
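  • A minimal sketch of this band-pass filtering stage follows, using the five carrier wavelengths named above (6, 8, 12, 16, and 24 pixels) on a polar image sampled at 200 pixels around the iris. The 1-D complex Gabor kernel and its bandwidth are illustrative assumptions rather than the patent's exact filters, and the subsampling step is omitted for brevity:

        import numpy as np

        CARRIER_WAVELENGTHS = (6, 8, 12, 16, 24)  # pixels, matching the five bands

        def gabor_kernel(wavelength, sigma_factor=0.5):
            """1-D complex Gabor kernel along the angular axis (illustrative)."""
            sigma = sigma_factor * wavelength
            half = int(3 * sigma)
            x = np.arange(-half, half + 1)
            kernel = np.exp(-x**2 / (2 * sigma**2)) * np.exp(1j * 2 * np.pi * x / wavelength)
            return kernel - kernel.mean()  # remove the DC response

        def multiresolution_code(polar):
            """Filter each row of the polar image with each band's kernel."""
            bands = []
            for wl in CARRIER_WAVELENGTHS:
                k = gabor_kernel(wl)
                # The angular axis wraps around 2*pi, so emulate circular
                # convolution by tiling each row and keeping the middle copy.
                band = np.array([
                    np.convolve(np.tile(row, 3), k, mode='same')[len(row):2 * len(row)]
                    for row in polar
                ])
                bands.append(band)
            return bands  # complex responses: phase is coded, magnitude gates the mask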
  • The higher frequencies are comparable to those used in standard iris matching algorithms. The mask 610 is the union of two masks: a mask (common to all bands) based on analysis of the intensities in the input polar iris image 504 that masks off areas corresponding to specular reflections and the approximate locations of eyelid and eyelash areas, and a mask based on the signal strength in the Gabor filtered image that masks off areas in which local phase measurement is unstable (unstable regions). The multi-resolution representation shown in iris code 622 allows representation of information from images captured at different camera-subject distances, which produce iris images differing in the number of pixels per unit distance at the iris, as well as from oblique camera views that cause foreshortening and optical demagnification, as discussed above with reference to FIGS. 2-4D.
  • Other properties of an iris code representation 520 include a complete description of the filter characteristics, spatial sampling, representation and quantization. Filter characteristics comprise one or more of center frequencies, bandwidths, functional type (e.g. log Gabor), and orientation tuning. Spatial sampling comprises one or more of spacing along the radial and angular normalized image axes for each filter type, and quantization specifies the number of levels with which each value is represented, or the number of bits assigned to each. According to exemplary embodiments, the iris code representation 520, as exemplified by iris code 622, is a warpable code that allows for interpolation: the sub-Nyquist spatial sampling requirement for each filter 1 . . . 5 in filters 621 provides a criterion for sampling that is sufficient for accurate interpolation. The sub-Nyquist spatial sampling is combined with a finer intensity quantization than the 1 bit per complex phase component used in Daugman-type coding. For example, if 4 bits are used for each complex phase component, this corresponds to roughly 64 steps in phase angle and thus a maximum interpolation error of pi/32 radians, or less than six degrees.
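  • As a concrete illustration of the finer quantization described above, the sketch below projects each complex filter output onto the unit circle and quantizes its real and imaginary parts to 4 bits each; the rounding scheme is an assumption chosen for simplicity:

        import numpy as np

        def quantize_phase(code, bits=4):
            """Quantize each complex phase component of a unit-magnitude code."""
            levels = 2 ** bits
            unit = code / np.maximum(np.abs(code), 1e-12)  # project onto unit circle
            def q(x):
                # Map [-1, 1] onto {0, ..., levels-1} and back to bin centers.
                return (np.round((x + 1) / 2 * (levels - 1)) / (levels - 1)) * 2 - 1
            return q(unit.real) + 1j * q(unit.imag)

        # With bits=4 there are roughly 64 distinguishable phase angles around
        # the circle, i.e. steps of about 2*pi/64 radians, under six degrees.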
  • In some embodiments, non-quantized iris codes may also be matched, where original complex band-pass filter outputs are stored without quantization. In one embodiment, the filter outputs are normalized in magnitude so that each represents a complex number on the unit circle. Data masks are generated based on occlusions and local complex amplitude. The match measure that is the closest analog of the standard Hamming distance measure of a Daugman iris code is based on a phase difference histogram. This histogram is constructed by computing the angles between the phase vectors of the two codes being compared (see FIG. 6), and compiling a histogram (subject to the valid data mask) of phase differences between −pi and pi. These phase differences should be small if the codes represent the same eye and more or less uniformly distributed if the codes represent statistically independent eyes.
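  • A minimal sketch of this phase-difference-histogram comparison, assuming the two codes are stored as arrays of unit-magnitude complex numbers with a shared boolean validity mask (the names are hypothetical). An authentic pair concentrates near zero (small match_stat), while a pair of statistically independent eyes spreads roughly uniformly, giving a match_stat near 0.5:

        import numpy as np

        def phase_difference_stats(code_a, code_b, mask, bins=64):
            """Histogram of phase differences between two non-quantized codes,
            plus the fraction of differences larger than pi/2, which behaves
            much like a Daugman-code Hamming distance."""
            valid = mask.astype(bool)
            # The angle between phase vectors is arg(a * conj(b)), in (-pi, pi].
            diff = np.angle(code_a[valid] * np.conj(code_b[valid]))
            hist, edges = np.histogram(diff, bins=bins, range=(-np.pi, np.pi))
            match_stat = np.mean(np.abs(diff) > np.pi / 2)
            return hist, edges, match_stat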
  • An example of two such histograms is shown in FIG. 7. The histogram on the left corresponds to an impostor match and the one on the right to an authentic match. As expected, the authentic distribution is tightly concentrated around a zero phase shift, with only a small proportion of the phase difference values larger than pi/2 in absolute value. In contrast, the impostor histogram shows many large phase differences and no clear evidence of concentration around zero. The fraction of values larger than pi/2 can be used to generate a match statistic that behaves very much like Daugman code Hamming distance, if this is desired. However, there are many other measures of central concentration and dispersion that may be used to distinguish between authentic and impostor distributions, as will be described below. Furthermore, given sufficient training sets of impostor and authentic histograms, it may be beneficial to use statistical classification or machine learning techniques such as discriminant analysis, Support Vector Machines, Neural Networks, or Logistic Regression to construct an optimal decision procedure for some class of data.
  • Measurements of the central value of a phase difference histogram, and of the dispersion around that point, take into account the fact that the phase differences are angles and therefore the histogram is distributed on a closed circle. Ordinary mean and variance measures (or higher moments if necessary) do not correctly represent the desired properties for angular data. The Von Mises distribution provides a well characterized method for estimating properties of data distributed over a periodic domain. The Von Mises mean gives an estimate of the center of concentration of the distribution, and the concentration parameter gives an estimate of the spread. Both quantities can be computed easily if the phase differences are represented as unit complex numbers. In this case, the mean estimate is simply the angle corresponding to the sample mean of the complex numbers, and the concentration parameter is simply related to the complex magnitude of the sample mean.
  • According to another embodiment, data is analyzed over a periodic domain by employing a Fourier series expansion to compute circular harmonics. Like the Von Mises parameters, the relative magnitudes of the low-order circular harmonics give information about the degree of concentration of the data. Transformation of the histogram data using circular harmonics is beneficial prior to use of learning techniques to construct a decision procedure.
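  • Both the Von Mises parameters and the low-order circular harmonics can be computed directly from the phase differences represented as unit complex numbers, as the following illustrative sketch shows; the kappa formula is a standard piecewise textbook approximation, included here only as one possible choice:

        import numpy as np

        def circular_stats(phase_diffs, harmonics=3):
            """Circular mean, Von Mises concentration estimate, and relative
            magnitudes of low-order circular harmonics of a set of angles."""
            z = np.exp(1j * phase_diffs)
            mean_vector = z.mean()
            mu = np.angle(mean_vector)       # center of concentration
            r_bar = np.abs(mean_vector)      # 0 = uniform, 1 = fully concentrated
            # Standard piecewise approximation of the concentration parameter
            # (diverges as r_bar approaches 1, i.e. perfect concentration).
            if r_bar < 0.53:
                kappa = 2 * r_bar + r_bar**3 + 5 * r_bar**5 / 6
            elif r_bar < 0.85:
                kappa = -0.4 + 1.39 * r_bar + 0.43 / (1 - r_bar)
            else:
                kappa = 1 / (r_bar**3 - 4 * r_bar**2 + 3 * r_bar)
            # k-th circular harmonic magnitude: |mean of exp(i*k*theta)|.
            harm = [np.abs(np.exp(1j * k * phase_diffs).mean())
                    for k in range(1, harmonics + 1)]
            return mu, kappa, harm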
  • The phase difference histogram aids in analysis of the match level between two codes but does not represent all of the information relevant to the comparison of two codes. If the phase difference value varies as a function of the absolute phase, then the histogram shows low concentration (i.e. large dispersion) even given a strong relationship. According to one embodiment, a Mutual Information or other conditional entropy description, which measures the reduction in the entropy of one random variable given knowledge of the value of another random variable, is employed to prevent this problem. This more complete characterization can detect relatedness even where the variables are uncorrelated.
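  • A sketch of such a mutual information measure over the absolute phases of two codes follows; the bin count and the plug-in estimator are illustrative choices, not prescribed by the disclosure:

        import numpy as np

        def phase_mutual_information(code_a, code_b, mask, bins=16):
            """Mutual information (in bits) between the absolute phases of two
            codes; unlike the global phase-difference histogram, this can
            detect a relationship even when the phase difference varies with
            absolute phase."""
            valid = mask.astype(bool)
            edges = np.linspace(-np.pi, np.pi, bins + 1)
            a = np.clip(np.digitize(np.angle(code_a[valid]), edges) - 1, 0, bins - 1)
            b = np.clip(np.digitize(np.angle(code_b[valid]), edges) - 1, 0, bins - 1)
            joint, _, _ = np.histogram2d(a, b, bins=bins, range=[[0, bins], [0, bins]])
            p = joint / joint.sum()
            px = p.sum(axis=1, keepdims=True)
            py = p.sum(axis=0, keepdims=True)
            with np.errstate(divide='ignore', invalid='ignore'):
                terms = p * np.log2(p / (px * py))
            return np.nansum(terms)  # near zero for independent codes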
  • Another limitation of the phase difference histogram is that it completely suppresses spatial information since the histogram is a global statistic. However, local or patchwise uniformity of phase differences or other detectable relatedness would also be sufficient to conclude that the codes are not independent. This local analysis could be achieved using local histogram analysis, mutual information, or spatial correlation analyses.
  • FIG. 7 depicts a block diagram of a matching processor 700 in accordance with exemplary embodiments of the present invention. The matching processor 700 comprises an alignment module 702 and a flow estimation module 704. According to exemplary embodiments, the iris code 520 generated by the coding processor 500 as shown in FIG. 5 is coupled to the alignment module 702. The alignment module 702 performs various alignments to the iris code 520 based on matching algorithms described below. The alignment module 702 further couples the iris code 520 to the flow estimation module 704 to generate estimated flow vectors to aid in matching. The alignment module 702 compares the iris code 520 to an iris code 706 from database 708 to determine whether a match exists. If a match does not exist, more iris codes from the database 708 are compared with the iris code 520. Match scores are determined, and if the match score meets or is below a predetermined threshold, then a match exists. According to exemplary embodiments, a Hamming distance is used as a match score. Ultimately, the matched iris data 108 is returned by the matching processor 700. According to some other embodiments, flow estimation is applied to information derived from the unknown iris code 520 and the stored iris code 706. This information may or may not be part of the iris code 520 per se. The resulting flow field from the flow estimation module 704 is used to generate a modified iris code that is matched against a reference iris code by the matching processor 700 to produce a match score 720.
  • In a binary context, i.e., comparing iris codes, a Hamming distance represents a binary distance based on XOR operations that compute the number of bits that differ between two binary images. According to exemplary embodiments, the alignment module 702 performs a Daugman barrel shift on the iris codes, i.e., finds the iris code rotation that provides the best match between the iris codes being compared. In one embodiment, the matching algorithm employed by the matching processor 700 is a modified algorithm using the Hamming distance (HD) for each set of barrel shift positions and taking the lowest Hamming distance as the score for that pair of codes. If the score is below some threshold (that may be adjusted based on the estimated number of statistical degrees of freedom represented by the codes), then the unknown code is deemed a match. If the HD is above the threshold, then the unknown code is labeled an impostor. In one embodiment, the threshold depends on details of the iris code structure and on the statistical requirements of the matching scenario.
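  • A minimal sketch of barrel-shift matching for binary codes laid out as radial-by-angular bit arrays with boolean validity masks; the array layout and shift range are assumptions for illustration:

        import numpy as np

        def barrel_shift_hamming(probe, probe_mask, ref, ref_mask, max_shift=8):
            """Fractional Hamming distance minimized over angular barrel shifts.
            The angular axis wraps, so a barrel shift is a roll of the columns."""
            best = 1.0
            for s in range(-max_shift, max_shift + 1):
                p = np.roll(probe, s, axis=1)
                m = np.roll(probe_mask, s, axis=1) & ref_mask
                n_valid = m.sum()
                if n_valid == 0:
                    continue
                hd = np.logical_xor(p, ref)[m].sum() / n_valid
                best = min(best, hd)
            return best  # compare against the match threshold (e.g., ~0.33)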
  • The modified algorithm employed by the alignment module 702 barrel shifts the iris codes being compared and also locally aligns the iris codes to each other to compensate for inaccuracies in iris image normalization due to uncorrected optical distortion or complexities of iris dilation and contraction. The local alignment function, performed by alignment module 702, allows compensation for distortions in the input iris image that are not uniform across the iris. This is accomplished by shifting local regions of the code to bring them into more accurate alignment with corresponding regions of the reference code. However, if this process is performed using very small estimation regions, virtually any iris code can be made to match any other iris code, which can result in false matches being generated. This false matching problem can be avoided by imposing suitable smoothness conditions on the estimated flow field. For example, if the flow field is estimated by performing local translation estimation using relatively large estimation regions then the local flow estimates will represent the average motion over this relatively large region.
  • If such regions overlap, so that the regions used to compute the flow vectors for neighboring locations contain much of the same content, then the displacement estimates will change gradually with position and false matching will be prevented. Alternatively, local displacement estimates made with small estimation regions can be smoothed by spatial filtering to eliminate rapid changes in local displacement. As a further alternative, a global parametric representation such as a low order polynomial or truncated Fourier series can be used, and the parameters of this representation estimated directly or fit to local estimates. Such a parametric representation has inherent smoothness properties that prevent overly rapid changes in local shifts. The alignment module 702 further produces multiple match scores for each comparison, between iris codes 520 and 706 for example, because each iris code contains multiple frequency bands.
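  • As one illustrative example of such a smooth parametric representation, local angular shift estimates can be fit with a low-order polynomial, whose inherent smoothness prevents the rapid local deformations that would let impostor codes be warped into matches:

        import numpy as np

        def fit_parametric_flow(angles, local_shifts, degree=3):
            """Fit a low-order polynomial to local angular displacement
            estimates, yielding a globally smooth flow field."""
            coeffs = np.polyfit(angles, local_shifts, degree)
            return np.polyval(coeffs, angles)  # smoothed shift at each angle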
  • FIG. 8 depicts the process of matching iris codes performed by the matching processor 700 in accordance with exemplary embodiments of the present invention. As in standard iris code matching, the first code 800 and the second code 802 to be matched are represented as values over the rectified (e.g., polarized) iris image coordinate system consisting of an angular and a normalized radial coordinate. A local displacement function or flow field is computed by the flow estimation module 704 of the matching apparatus in FIG. 7 and coupled to the alignment module 702 that best aligns structure in the first iris code 800 to corresponding structure in the second code 802, subject to some smoothness or parametric constraint. This flow field estimation can include the effect of standard barrel shift alignment, or that alignment can be performed as a separate step. The vectors in this flow field each specify the displacement in the normalized image coordinate system at which the image structure in the first code 800 best matches the structure in the second code 802.
  • Each band in first iris code 800 is transformed using this displacement function to produce an aligned iris code, and the Hamming distance between this aligned iris code and the corresponding band of the second code 802 is computed. Because the transformation is constrained to be smooth, impostor codes will not be transformed into authentic codes as will be described below.
  • The flow estimation module 704 computes a flow field at a reduced resolution for each iris code, and smoothly interpolates the flow field to produce a final estimate. According to an exemplary embodiment, the flow estimation module 704 employs a pyramid-based coarse-fine flow estimation technique, though those of ordinary skill would recognize that other techniques may be used instead. As a demonstration, the alignment module 702 introduces a small local shift in one band of each of the first iris code 800 and the second iris code 802, the shift being in the angular direction, equal at all radial positions, and varying smoothly in the angular direction. Calculating a Hamming distance at this point would result in a non-match (e.g., if a Daugman-type matching algorithm is employed, a Hamming distance greater than 0.33 indicates a non-match). A coarse-fine algorithm is used by the flow estimation module 704 to estimate the flow field between codes 800 and 802 from the low resolution bands of the codes.
  • The alignment module 702 then warps the code 800 by the estimated flow field, resulting in a significantly decreased Hamming distance and signaling a high confidence match. For a Daugman-type matcher, a Hamming distance below 0.3 indicates a high confidence match, although different matching scenarios may set different Hamming distance thresholds for a high confidence match. According to another embodiment, the matching processor 700 may match two iris codes by employing a mutual information measure based on the phase angles of the codes being compared, as well as measures based on the local differences of phase angles.
  • FIG. 9 depicts the coarse-fine algorithm described above for estimating the flow field of an iris code in accordance with exemplary embodiments of the present invention. Coarse-fine refinement operates on a "pyramid" structure that is essentially a collection of bandpass filtered versions 904-1 to 904-N and 906-1 to 906-N of the input images 900 and 902, respectively, as shown in FIG. 9.
  • Starting with the lowest frequency bands 904-1 and 906-1, at each level in the pyramid the displacements 908-1 to 908-N estimated at the previous level are used to warp the current level image and then an incremental displacement is computed based on the residual difference between the warped level and the corresponding pyramid level in the other image. This process continues until the highest level is reached and the result is the final estimated flow field 910.
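  • The pyramid itself can be approximated as a stack of bandpass images formed from differences of progressively blurred copies, as in the sketch below. This is illustrative only: a true pyramid would also subsample each level, which is omitted here for brevity:

        import numpy as np
        from scipy import ndimage

        def bandpass_pyramid(image, levels=4):
            """Collection of bandpass-filtered versions of an image, ordered
            from the lowest frequency band to the highest."""
            blurred = [ndimage.gaussian_filter(image.astype(float), sigma=2.0 ** k)
                       for k in range(levels, -1, -1)]   # coarsest blur first
            bands = [blurred[0]]                         # lowest frequency band
            for coarse, fine in zip(blurred[:-1], blurred[1:]):
                bands.append(fine - coarse)              # bandpass residuals
            return bands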
  • Since the multi-resolution iris code is itself a collection of bandpass filtered versions of the images with which alignment is desired, according to one embodiment, these bands themselves could be used to drive the alignment process in the alignment module 702. This would produce a truly “self aligning” iris code. In this approach there is no need to store additional alignment data as part of the multi-resolution iris code structure.
  • FIG. 10 is a flow diagram depicting method 1000 for estimating flow field between two iris codes in accordance with exemplary embodiments of the present invention. The method is an implementation of the flow estimation module 704. The method begins at step 1002 and proceeds to step 1004.
  • At step 1004, the flow estimation module 704 generates a first plurality of images from a first input image (i.e., a first iris code) and a second plurality of images from a second input image (i.e., a second iris code to be matched against) using a bandpass filter, the first and second plurality of images comprising images ranging from low frequency to high frequency bands.
  • The method subsequently proceeds to step 1006, where the flow estimation module 704 selects an image from the first plurality of images in the lowest frequency band that has not been processed, i.e., for which there is no flow-field estimate. At step 1008, the flow estimation module 704 determines whether a flow field has been estimated in a lower frequency band between the first and second plurality of images. If a flow field has been estimated in a lower frequency band, the method proceeds to step 1010, where the selected image is warped using the lower frequency band flow field estimate, and then to step 1012. If no lower frequency band estimate exists, the method proceeds directly to step 1012. At step 1012, the flow estimation module 704 estimates a flow field from the residual difference between the (possibly warped) selected image and the image at the same frequency band from the second plurality of images.
  • The method then proceeds to step 1014, where the flow estimation module 704 determines whether all frequency bands have been processed. If not, then the method returns to step 1006 to process the next higher frequency band until all frequency bands have been processed. When all frequency bands have been processed (i.e., warped by lower frequency flow field estimates), the method proceeds to step 1016, where the final flow field estimate is returned to the matching processor 700. The method terminates at step 1018.
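  • A compact sketch of the loop of method 1000 follows, with the per-band flow estimator and the warping operator left abstract, since the patent permits various implementations of both; summing incremental flows is a common approximation to composing them:

        def coarse_fine_flow(bands_a, bands_b, estimate_flow, warp):
            """Coarse-to-fine flow estimation over two matched band stacks
            ordered from lowest to highest frequency (steps 1004-1016).

            estimate_flow(a, b) returns a displacement field between two
            same-band images; warp(img, flow) applies a displacement field."""
            flow = None
            for a, b in zip(bands_a, bands_b):
                if flow is not None:
                    a = warp(a, flow)            # step 1010: apply prior estimate
                residual = estimate_flow(a, b)   # step 1012: flow on the residual
                flow = residual if flow is None else flow + residual
            return flow                          # step 1016: final flow estimate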
  • FIG. 11 is a flow diagram depicting method 1100 for processing and matching an iris image in accordance with exemplary embodiments of the present invention. The method is an implementation of the iris processor 100. The method begins at step 1102 and proceeds to step 1104.
  • At step 1104, the pre-processor 102 pre-processes an input image containing an eye to produce a rectified iris image with rectified pupil and iris boundaries, and correction for tilt and corneal distortion.
  • The method proceeds to step 1106, where the coding processor 104 codes the rectified iris image into a multiresolution iris code. The iris code contains multiple frequency band representations of a polarized version of the rectified iris image. The method then proceeds to step 1108, where the multiresolution iris code is compared to a set of stored iris codes in a database to determine whether the iris code is contained in the database and, if it is, returns data associated with the matched iris. The method terminates at step 1110.
  • FIG. 12 depicts a computer system for implementing the iris processor 100 in accordance with exemplary embodiments of the present invention. The computer system 1200 includes a processor 1202, various support circuits 1205, and memory 1204. The computer system 1200 may include one or more microprocessors known in the art similar to processor 1202. The support circuits 1205 for the processor 1202 include conventional cache, power supplies, clock circuits, data registers, I/O interface 1207, and the like. The I/O interface 1207 may be directly coupled to the memory 1204 or coupled through the support circuits 1205. The I/O interface 1207 may also be configured for communication with input devices and/or output devices such as network devices, various storage devices, mouse, keyboard, display, video and audio sensors, visible and infrared cameras and the like.
  • For example, in embodiments where the computer system comprises a mobile device such as a cell phone or tablet, I/O interfaces 1207, such as an integrated camera (possibly used in association with an on-board standard and/or infrared illumination device) or a wireless communication device (e.g., NFC, RFID, Bluetooth, Wi-Fi, WiMAX, Satcom, etc.), may be directly coupled to the memory 1204 and/or coupled through the support circuits 1205. In some embodiments, one or more software applications or apps may be configured to access the camera, illuminator and/or wireless communication device to accomplish the embodiments disclosed herein. In these embodiments, the apps may receive data (e.g., iris data) via the I/O interface 1207 and transmit the data, possibly via the support circuits 1205, to the memory 1204 used by the app (e.g., Iris Processor 100 or Iris Biometric Recognition Module 1514). The app may perform any of the algorithms disclosed herein and transmit the results (possibly via memory 1204 and/or support circuits 1205) through the I/O interfaces 1207 (e.g., via wireless communication) to a server and/or an additional wireless communication device (e.g., an ATM or a security-enabled device such as a safe), to authenticate a user of the device.
  • The memory 1204, or computer readable medium, stores non-transient processor-executable instructions and/or data that may be executed by and/or used by the processor 1202. These processor-executable instructions may comprise firmware, software, mobile apps, and the like, or some combination thereof. Modules having processor-executable instructions that are stored in the memory 1204 comprise an iris processor 1206. The iris processor 1206 further comprises a pre-processing module 1208, a coding module 1210 and a matching module 1212. The memory 1204 may further comprise a database 1214, though the database 1214 need not be in the same physical memory 1204 as the iris processor 1206. The database 1214 may be remotely accessed by the iris processor 1206 via a cloud service. Additionally, the iris processor 1206 may also have several components that may not be co-located on memory 1204. For example, in some embodiments, the pre-processing module 1208 is local to the computer system 1200 or mobile device, while the coding module 1210 and the matching module 1212 may be accessed as cloud services via a wired or wireless network. In other instances, only the matching module 1212 is accessed via a network. Communication between each module may be encrypted as the data travels over the network.
  • The computer system 1200 may be programmed with one or more operating systems 1220 (generally referred to as operating system (OS)), which may include OS/2, Java Virtual Machine, Linux, SOLARIS, UNIX, HPUX, AIX, WINDOWS, WINDOWS95, WINDOWS98, WINDOWS NT, WINDOWS2000, WINDOWS ME, WINDOWS XP, WINDOWS SERVER, WINDOWS 8, Mac OS X, IOS, and ANDROID, among other known platforms. At least a portion of the operating system may be disposed in the memory 1204.
  • The memory 1204 may include one or more of the following: random access memory, read only memory, magneto-resistive read/write memory, optical read/write memory, cache memory, magnetic read/write memory, and the like, as well as signal-bearing media as described below.
  • The computer system 1200 may be a mobile device such as a cellular phone or tablet device, for example. The mobile device may contain a camera and have the iris processor 1206 stored on memory as an application. In some embodiments, the iris processor 1206 may be a part of the operating system 1220. In some instances, the iris processor 1206 may be an independent processor, or stored on a different chip than the processor 1202. For example, mobile devices often have camera processing modules, and the iris processor 1206, or portions of the iris processor 1206, may reside on the camera processing module, where the imager in the camera is a CCD or CMOS imager. In some instances, the mobile device may be customized with respect to the sensors included, the type of camera imager used, or the like.
  • An image sensor may include a camera or infrared sensor that is able to image projected images or other objects in the vicinity of the device. It should be understood that image capture can be performed using a single image, multiple images, periodic imaging, continuous image capturing, image streaming, etc. Further, a device can include the ability to start and/or stop image capture, such as when receiving a command from a user, application, or other device. The computing device can include one or more communication elements or networking sub-systems, such as a Wi-Fi, Bluetooth, radio frequency (RF), wired, or wireless communication system. The device in many embodiments can communicate with a network, such as the Internet, and may be able to communicate with other such devices. In some embodiments the device can include at least one additional input element able to receive conventional input from a user. This conventional input can include, for example, a push button, touch pad, touchscreen, wheel, joystick, keyboard, mouse, keypad, or any other such component or element whereby a user can input a command to the device. In some embodiments, however, such a device might not include any buttons at all, and might be controlled only through a combination of visual and audio commands, such that a user can control the device without having to be in contact with the device.
  • FIG. 13 illustrates the iris processor 100 in an exemplary operating scenario. In this case a combination of face tracking and a steerable/autofocus iris capture device comprising the iris processor 100 is used to identify multiple individuals within a particular location (whether stationary or moving). The capture device may be placed unobtrusively, e.g., at the side of the corridor, in or near an electronic advertisement display, or in any other public location, and can operate at a large range of capture distances yielding a range of presentation angles, if a device with the capabilities disclosed herein is used. By combining identity information derived from iris biometrics with tracking information from the person tracking system it is possible to associate an identity (or failure to identify an identity) with each person passing through or in the vicinity (stationary or moving) of the active capture region.
  • Referring now to FIGS. 14 and 15, an illustrative iris biometric recognition module 1514 is shown in greater detail. As shown in FIG. 14, when assembled, the iris biometric recognition module 1514 is a self-contained unitary module. As such, the iris biometric recognition module 1514 can be incorporated into, for example, a security or locking feature, such as a door lock assembly, or any other type of device, apparatus, article, or system that can benefit from an application of iris biometric recognition technology, including, for example, a mobile device, a mobile device compatible with wireless communication, an electronic device used in a financial transaction, and/or an electronic advertisement display device.
  • In embodiments that are not integrated into or attached and coupled to the structure, memory and circuitry of a mobile device, the iris biometric recognition module 1514 includes a support base 1610, to which an iris biometric recognition controller 1724 is mounted. A number of support posts, e.g., posts 1612, 1613, 1614, 1616, are coupled to the support base 1610 (e.g., by a corresponding number of screws or other fasteners 1730, 1731, 1732, 1733; fastener 1733 not shown). The support posts 1612, 1613, 1614, 1616 are connected to and support a pivot mount base 1618.
  • Coupled to and supported by the pivot mount base 1618 are an iris imager assembly 1626 and a face imager assembly 1628. In some embodiments, the iris imager assembly 1626 and the face imager assembly 1628 are the same device or utilize one or more of the same components (e.g., the same imaging device). However, in the illustrative embodiment, the iris imager assembly 1626 and the face imager assembly 1628 are separate assemblies utilizing different components. As described in more detail below, the face imager assembly 1628 captures digital images of a human subject, and more particularly, images of the subject's face and eyes, using a face imager 1648 that is equipped with a wide field of view lens. The iris imager assembly 1626 captures digital images of an iris of an eye of the human subject using an iris imager 1644 that is equipped with a narrow field of view lens. In some embodiments, both the face imager 1648 and the iris imager 1644 utilize the same type of imager (e.g., a digital camera, such as the Omnivision model no. OV02643-A42A), equipped with different lenses. For example, the face imager 1648 may be equipped with a wide field of view lens such as the Senview model no. TN01920B and the iris imager 1644 may be equipped with a narrow field of view lens such as model no. JHV-8M-85 by JA HWA Electronics Co. In other embodiments, a single high resolution imager (e.g., a 16+ megapixel digital camera) may be used with a wide field of view lens (rather than a combination of two cameras with different lenses) to perform the functionality of the iris imager 1644 and the face imager 1648.
  • The illustrative iris imager assembly 1626 is pivotably coupled to the pivot mount base 1618 by an axle 1622. The axle 1622 is e.g. removably disposed within a pivot groove 1620. The pivot groove 1620 is defined in the pivot mount base 1618. The components of the iris imager assembly 1626 are mounted to an iris pivot mount base 1630. The iris pivot mount base 1630 is coupled to the axle 1622 and to a support tab 1734. The support tab 1734 is coupled to a lever arm 1726 by a pivot link 1728. The lever arm 1726 is coupled to a control arm 1722. The control arm 1722 is driven by rotation of an output shaft of a motor 1720. The motor 1720 may be embodied as, for example, a servo motor such as a magnetic induction brushless servo motor (e.g., the LTAIR model no. D03013). Operation of the motor 1720 rotates the control arm 1722, which causes linear motion of the lever arm 1726, resulting in linear motion of the tab 1734. The linear motion of the tab 1734 rotates the axle 1622 in the pivot groove 1620. Depending on the direction of rotation of the output shaft of the motor 1720, the resulting rotation of the axle 1622 in the pivot groove 1620 causes the iris pivot mount base 1630 to tilt in one direction or the other, with respect to the pivot mount base 1618. For example, clockwise rotation of the motor output shaft may result in the iris pivot mount base 1630 tilting in an upwardly direction toward the face imaging assembly 1628 and vice versa. This pivoting capability of the iris pivot mount base 1630 enables the position of the iris imaging assembly 1626 to be mechanically adjusted to accommodate potentially widely varying heights of human subjects (e.g., the human subject 1424), ranging from small children to tall adults. In other embodiments, however, the iris imager assembly 1626 is stationary with respect to the pivot mount base 1618 and the ability to detect the irises of human subjects of widely varying heights is provided by other means, e.g., by software or by the use of a column of vertically-arranged iris imagers 1644 coupled to the mount base 1618.
  • The components of the iris imaging assembly 1626 include the iris imager 1644, a filter 1646 disposed on or covering the iris imager 1644, a pair of iris illuminator assemblies 1710, 1712 each adjacent to, e.g., disposed on opposite sides of, the iris imager 1644, and a pair of baffles or light guides 1636, 1638 disposed between each of the iris illuminator assemblies 1710, 1712, respectively, and the iris imager 1644. Each of the illustrative iris illuminator assemblies 1710, 1712 includes one or more infrared light sources, e.g., infrared light emitting diodes (LEDs). In the illustrative embodiment, each iris illuminator assembly 1710, 1712 includes a number "N" of illuminators 1711, where N is a positive integer. While N=4 for both of the iris illuminator assemblies 1710, 1712 in the illustrative embodiment, the number N may be different for each assembly 1710, 1712 if required or desirable for a particular design of the iris biometric recognition module 1514. Each set of N illuminators is bounded by an additional light guide or shield 1714, 1716. Diffusers 1632, 1634 cover the iris illuminator assemblies 1710, 1712, respectively. For example, the diffusers 1632, 1634 may be coupled to the shields 1714, 1716, respectively (e.g., by an adhesive material). In the illustrative embodiments, the diffusers 1632, 1634 correct for the inherent non-uniformity of the light emitted by the illuminators 1711 (e.g., uneven lighting). This non-uniformity may be due to, for example, manufacturing irregularities in the illuminators 1711. As such, the diffusers 1632, 1634 may not be required in embodiments in which higher quality illuminators (or different types of illuminators) 1711 are used.
  • Although not specifically required for purposes of this disclosure, the illustrative iris imaging assembly 1626 further includes a pair of visual cue illuminators 1640, 1642, which are embodied as emitters of light having a wavelength in the visible light spectrum (e.g., colored light LEDs). The baffles 1636, 1638 and the shields 1714, 1716 are configured to prevent stray light emitted by the illuminator assemblies 1710, 1712 (and, for that matter, the visual cue LEDs 1640, 1642) from interfering with the operation of the iris imager 1644. That is, the baffles 1636, 1638 and the shields 1714, 1716 help ensure that when infrared light is emitted by the illuminator assemblies 1710, 1712, only the emitted light that is reflected by the eyes of the human subject (e.g., human subject 1424) is captured by the iris imager 1644. Additionally, the filter 1646 covering the lens of the iris imager 1644 further blocks any extraneous light from entering the lens. The filter 1646 may be embodied as, for example, an 840 nm narrowband filter and may be embedded in the lens assembly of the iris imager 1644. In other embodiments, other types of filters may be used, depending on the type of illuminators selected for the illuminator assemblies 1710, 1712. In other words, the selection of the filter 1646 may depend on the type or configuration of the illuminator assemblies 1710, 1712, in some embodiments.
  • The illustrative face imager assembly 1628 includes a face imager mount base 1631. The illustrative face imager mount base 1631 is non-pivotably coupled to the pivot mount base 1618. In other embodiments, however, the face imager mount base 1631 may be pivotably coupled to the pivot mount base 1618 (e.g., the face imager assembly 1628 and the iris imager assembly 1626 may both be mounted to the pivot mount 1630), as may be desired or required by a particular design of the iris biometric recognition module 1514. The face imager assembly 1628 includes the face imager 1648 and a face illuminator assembly 1650 located adjacent to the face imager 1648. The face imager assembly 1628 and the iris imager assembly 1626 are illustratively arranged so that the face imager assembly 1628 is vertically above the iris imager assembly 1626 when the iris biometric recognition module 1514 is mounted to a vertical structure (such as the door 1416). In other words, the face imager assembly 1628 and the iris imager assembly 1626 are arranged so that the face imager assembly 1628 is positioned adjacent to a first edge of the pivot mount base 1618 and the iris imager assembly 1626 is positioned adjacent to another edge of the pivot mount base 1618 that is opposite the first edge.
  • The face imager 1648 is secured to the face imager mount base 1631 by a bracket 1633. The face illuminator assembly 1650 includes one or more infrared light sources 1649 (e.g., infrared LEDs) mounted to a concavely shaped illuminator mount base 1740. In the illustrative embodiment, the face illuminator assembly 1650 includes a number “N” of illuminators 1649, where N is a positive integer (e.g., N=4). The configuration of the mount base 1740 enables the illuminators 1649 to be arranged at an angle to one another, in order to illuminate the desired portion of the capture zone (e.g., the range of vertical heights H1 of the eye levels of the anticipated population of human subjects 1424). The illuminators 1649 of the face illuminator assembly 1650 and the illuminators 1711 of the iris illuminator assemblies 1710, 1712 may each be embodied as a high power 840 nm infrared emitter (e.g., model no. OV02643-A42A available from OSRAM Opto Semiconductors).
  • The illustrative iris biometric recognition controller 1724 is embodied as an integrated circuit board including a microprocessor (e.g., model no. MCIMX655EVM10AC available from Freescale Semiconductor). The iris biometric recognition controller 1724 is configured to control and coordinate the operation of the face illuminator assembly 1650, the face imager 1648, the iris illuminator assemblies 1710, 1712, and the iris imager 1644, alone or in combination with other components of the iris biometric recognition module 1514.
  • While the iris biometric recognition module 1514 appears large in structure, it is possible to scale down the size of the iris biometric recognition module 1514 to fit within smaller items or devices, for example, a mobile device (e.g., a mobile telephone, a tablet, or any other portable device). The iris biometric recognition module 1514 may be implemented, for example, as part of a forward and/or rearward facing camera in a mobile device. Optionally, the iris biometric recognition module 1514 may be implemented within a mobile device in any other suitable manner.
  • One or more cameras or other image sensors within the mobile device may capture image or video content to be utilized by the iris biometric recognition module 1514. The one or more cameras may include, or be based at least in part upon any appropriate technology, such as a CCD or CMOS image sensor having a sufficient resolution, focal range, and/or viewable area, to capture an image of the user when the user is operating the device.
  • Referring now to FIG. 16, an embodiment 1800 of an iris biometric-enabled access control system is shown. The iris biometric-enabled access control system 1800 is shown in the context of an environment 1810 that may be created during the operation of the iris biometric recognition module 1514 (e.g., a physical and/or virtual execution or “runtime” environment). As shown in the environment 1810, in addition to the hardware components described above, the iris biometric recognition module 1514 includes a number of computer program components 1818, each of which is embodied as machine-readable instructions, modules, data structures and/or other components, and may be implemented as computer hardware, firmware, software, mobile app, or a combination thereof, in memory of the controller board 1724, for example.
  • The iris biometric recognition module computer program components 1818 include an iris image capture module 1820. The illustrative iris image capture module 1820 includes a face finder module 1822, an iris finder module 1824, a face/iris imager control module 1826, and a face/iris illuminator control module 1828. In operation, the face/iris imager control module 1826 controls a face/iris imager 1812 (e.g., the face imager 1648 and/or the iris imager 1644) by transmitting face imager control signals 1842 to the face/iris imager 1812 to capture digital images of a human subject 1804 entering or located in a tracking and capture zone 1802. In some embodiments, the iris biometric recognition module 1514 may be equipped with a motion sensor that can detect the human subject 1804 in the tracking and capture zone 1802. In those embodiments, the face/iris imager control module 1826 may initiate operation of the face/iris imager(s) 1812 in response to a motion detection signal received from the motion sensor. In other embodiments, the presence of a human subject 1804 can be detected using an image processing routine that recognizes a face in the field of view of the face/iris imager 1812. As noted above, the iris biometric recognition module 1514 can utilize iris images captured from moving subjects and/or subjects that are at a distance that is greater than, e.g., 45 cm away from the iris imaging device.
  • The illustrative face finder module 1822 executes a face recognition algorithm (e.g., FaceRecognizer in OpenCV), to determine whether an image captured by the face/iris imager 1812 (e.g., by a wide field of view camera) includes a human face. If the face finder module 1822 detects a human face, the face finder module 1822 returns the face location 1848, e.g., bounding box coordinates of the detected face within the captured image. In response to the face detection, the face/iris imager control module 1826 configures the face/iris imager 1812 to capture an image of an iris of the detected face. To do this, the illustrative face/iris imager control module 1826 may compute the tilt angle by which to tilt the iris imager assembly 1626 based on the bounding box coordinates of the detected face. This can be done by approximating the linear distance from the face/iris imager 1812 to the detected face, if the location and the field of view of the face/iris imager 1812 are known. For example, the proper tilt angle for the face/iris imager 1812 can be derived from the geometry of the triangle formed by connecting the location of the face/iris imager 1812 to the top and bottom edges of the bounding box of the detected face.
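  • Under a simple pinhole-camera assumption in which each pixel row subtends an equal share of the imager's vertical field of view, the tilt computation reduces to a few lines. The function and its parameters below are hypothetical illustrations of the geometry just described, not the disclosed control code:

        def iris_tilt_angle(face_box, image_height, vertical_fov_deg):
            """Approximate tilt angle (degrees) needed to center the iris
            imager on a face bounding box returned by the face finder."""
            x0, y0, x1, y1 = face_box            # bounding-box corners in pixels
            face_center_y = 0.5 * (y0 + y1)
            deg_per_pixel = vertical_fov_deg / image_height
            # Positive result: face above image center, so tilt the imager up.
            return (image_height / 2.0 - face_center_y) * deg_per_pixel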
  • Once the tilt angle for the face/iris imager 1812 is determined, the face/iris imager control module 1826 operates the motor 1720 to achieve the computed tilt angle of the face/iris imager 1812. Once the face/iris imager 1812 is properly positioned with respect to the detected face, the iris finder module 1824 locates an eye and then the iris of the eye, on the human face, by executing eye and iris detection algorithms (e.g., the algorithms mentioned above with reference to FIGS. 1-13). In response to receiving iris location information 1850 from the iris finder module 1824, the face/iris imager control module 1826 initiates the process of capturing images of the iris by transmitting iris imager control signals 1842 to the face/iris imager 1812. These iris detection and image capture processes can be performed, for example, using the techniques described above with reference to FIGS. 1-13.
  • In capturing images of the face and iris of the detected human subject, the iris image capture module 1820 interfaces with a face/iris illuminator control module 1828 to coordinate, e.g., synchronize 1852, the operation of the face/iris imager 1812 and face/iris illuminators 1818. During the face image capture process, the control modules 1826, 1828 synchronize the operation of the face illuminator assembly 1650 with the capturing of face images by the face imager 1648. This helps ensure consistent face image quality irrespective of the available ambient lighting conditions. In other words, the coordination of the face image capture and the operation of the face illuminator assembly 1650 is analogous to traditional flash photography, albeit using infrared light rather than visible light. Additionally, during the process of capturing the iris images, the control modules 1826, 1828 synchronize the operation of the iris illuminators 1816 (e.g., iris illuminator assemblies 1710, 1712) with the capturing of iris images by the iris imager 1644. To accommodate the possibility that the subject 1804 may be moving, the iris imager control module 1826 operates the iris imager 1644 using a focal sweep technique in which several (e.g., 10-15 or more) images of the iris are captured in rapid succession (e.g., at a frame rate of about 5 frames per second). Synchronously, the iris illuminator control module 1828 pulses/strobes the iris illuminators 1710, 1712 at the same rate/frequency. This helps ensure that at least one good quality iris image is obtained irrespective of the available ambient lighting conditions and regardless of whether the subject is moving or whether the view of the iris is obstructed or distorted. In other words, the coordination of the iris image capture and the operation of the iris illuminators 1710, 1712 is analogous to traditional "red eye reduction" flash photography, except that the images of the iris are taken at the same time as the pulsing/strobing of the iris illuminators 1710, 1712 rather than after the pulsing/strobing is completed (and also, using infrared illuminators rather than visible light).
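  • The pulse/exposure synchronization can be pictured with the following sketch. The camera and illuminator driver objects and their methods are hypothetical placeholders, and real hardware would typically use an electrical trigger line rather than software timing:

        import time

        def strobed_focal_sweep(camera, illuminator, frames=12, fps=5.0):
            """Capture several iris images in rapid succession, pulsing the
            infrared illuminator so that each exposure coincides with a
            short high-intensity pulse."""
            period = 1.0 / fps
            images = []
            for _ in range(frames):
                start = time.monotonic()
                illuminator.pulse()               # short, high-intensity IR pulse
                images.append(camera.capture())   # exposure timed to the pulse
                # camera.step_focus() could advance the focal sweep here.
                time.sleep(max(0.0, period - (time.monotonic() - start)))
            return images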
  • The iris image capture module 1820 outputs or otherwise makes available the resulting iris images 1854 to an iris image processing and matching module 1830. The iris image processing and matching module 1830 processes the images by, e.g., removing portions of the image that depict eyelids and eyelashes and adjusting for enlarged pupils, and producing the “iris code” in, for example, the manner described above with reference to FIGS. 1-13. The iris image processing and matching module 1830 compares the processed iris images 1854 or usable portions thereof, or the iris code, to reference image data 1836, to determine whether any of the captured iris images 1854 match an image stored in the reference images 1836. The reference image data 1836 includes iris image samples and/or related data that has been obtained previously, e.g., through an enrollment procedure. If the iris images 1854 are not found to match any of the images in the reference images data 1836, the iris image processing and matching module 1830 may initiate an enrollment procedure. That is, the iris biometric recognition module 1514 can be configured to perform iris image enrollment directly at the device, if required or desired for a particular implementation. To do this, the iris image processing and matching module 1830 passes the collected iris image(s) 1862 to an iris image enrollment module 1834. To complete the enrollment process, the illustrative iris image enrollment module 1834 may execute an image quality analysis on one or more of the reference image candidates 1862. An iris image may be added to the reference images data 1836 if the image quality analysis indicates that the image is suitable for use as a reference image. In performing the image quality analysis, the iris image enrollment module 1834 may analyze a number of different image quality factors, such as: the amount of the iris that is exposed in the image (e.g., the person is not squinting or blinking), the sharpness of the image, and the number of artifacts in the image (e.g., the number of eyelashes, specularities, etc.).
  • As a result of the iris image processing and matching performed by the module 1830, the iris biometric recognition module 1514 outputs or otherwise makes available an iris match determination 1856. The iris match determination 1856 may be embodied as a simple “positive” or “negative” indication, or may include other information (such as person-identifying information connected with the matched iris image), alternatively or in addition. In the illustrative access control system 1800, an access control module 1832 executes business logic encoded as, e.g., computer program logic, to determine how or even whether the access control system 1800 should respond to the iris match determination data 1856.
  • Referring now to FIG. 17, an example of a method 1900 executable by one or more components of the iris biometric recognition module 1514 is shown. The method 1900 may be embodied as computerized programs, routines, logic and/or instructions, which may be embodied in hardware, software, mobile apps, firmware, or a combination thereof, of the iris biometric recognition module 1514 and/or one or more other systems or devices in communication with the iris biometric recognition module 1514. In block 1910, the module 1514 detects a human subject approaching the iris biometric recognition module 1514. To do this, the module 1514 may analyze signals received from a wide field of view camera (e.g., the face imager 1648) or may analyze signals received from a motion sensor monitoring a capture zone of the iris biometric recognition module 1514. In block 1912, the module 1514 locates the face and eyes of the approaching subject in relation to a ground plane and in relation to the iris biometric recognition module 1514. To do this, the module 1514 may, in block 1914, control the face illuminators 1649 to illuminate (with infrared light) the area in which the human subject, or more particularly, the subject's face, is detected.
  • Once the subject's face is located, in block 1916, the module 1514 configures the iris imager 1644 to collect images of an iris of an eye of the approaching subject. As noted above, configuring the iris imager may involve operating a motor to tilt a platform to which the iris imager is mounted. Alternatively, the configuring may be performed e.g. by software controlling the lens focus and/or field of view of the iris imager. In any event, the procedure of block 1916 aligns the iris imager with the eye (or more particularly the iris) of the approaching subject.
  • In some embodiments, in block 1918, the module 1514 activates the visual cue illuminators 1640, 1642, to try to draw the subject's attention or visual focus toward the iris biometric recognition module 1514. The visual cue illuminators 1640, 1642 are typically activated after the subject's face is detected and the iris imager is configured (e.g., mechanically positioned), in order to draw the subject's eyes in-line with the iris imager camera.
  • In embodiments where the iris biometric recognition module 1514 is incorporated into an electronic advertisement display device, the iris biometric recognition module 1514 and any associated hardware or software may be configured to identify actions by the person, such as viewing an advertisement, when the advertisement was viewed, how long the user viewed the advertisement, or any other habits, actions, characteristics, or attributes of a subject.
  • Once the subject's face and eyes are detected, the iris biometric recognition module 1514 enters a loop 1920 in which the module 1514 coordinates the operation of the iris illuminator and the iris imager in rapid succession to obtain multiple images of the iris (e.g., the frame rate of the iris imager and the short-duration pulse frequency of the iris illuminator are coordinated/synchronized). More specifically, in block 1922, the module 1514 causes the iris illuminator assemblies to issue short pulses of high intensity infrared light. As discussed above with reference to FIGS. 1-13, in some embodiments of the module 1514, a light intensity of the illumination source (e.g., illuminators 1711) is increased during strobing to maintain a predetermined signal-to-noise (S/N) ratio, while an average irradiance of the illumination source over the course of the strobing remains below a safety threshold. At substantially the same time, the module 1514 causes the iris imager to capture a series of images of the pulse-illuminated iris (using, e.g., a "focal sweep" technique). That is, the iris image captures are timed to substantially coincide with the short, high intensity pulses of illumination, resulting in a "freeze" effect on the subject if the subject is in motion. In other embodiments, alternatives to the focal sweep technique can be used, e.g., auto focusing on a target spot if the subject is standing still for a length of time, or using a fixed lens to provide a large fixed focus area.
  • In block 1926, the module 1514 determines whether any of the captured iris images are candidates to be used for enrollment purposes. If an iris image is a candidate to be used for enrollment, the module 1514 performs an iris image quality analysis on the image in block 1928, and updates the reference database of iris images if the quality analysis is successful.
  • In embodiments where the iris biometric recognition module 1514 is incorporated into an electronic advertisement display device, the iris biometric recognition module 1514 and any associated hardware or software may be configured to also update the database to include habits, patterns, interests, characteristics, and attributes, such as when a person was in a certain location, how often they are in that location for that amount of time, etc.
  • In blocks 1930, 1932, and 1934, the module 1514 performs iris image processing and matching in accordance with, for example, the techniques described above with reference to FIGS. 1-13. In block 1930, the module 1514 selects a subset of the captured iris images for matching purposes, based on image quality, size of the iris depicted in the image, and/or other factors. In block 1932, the module 1514 identifies a usable portion of the iris image(s) selected in block 1930 (using, e.g., the segmentation techniques described above). The “usable portion” of an iris image may correspond to the iris code, in some embodiments. In block 1934, the module 1514 compares the usable portion of the iris image identified in block 1932 to one or more reference images (e.g., the reference images 1836). In block 1936, the module 1514 determines whether the comparison performed in block 1934 results in an iris match.
  • While the flow diagram of FIG. 17 references an approaching subject, the steps discussed in relation to the flow diagram of FIG. 17 may be performed for a moving or a stationary subject. For example, the flow diagram in FIG. 17 may be utilized to authenticate a subject that is waiting for a bus or otherwise stationary for any reason or to authenticate a user that desires to access a mobile device, access wireless communication, make a financial transaction, etc. In any situation where the subject/user is stationary, the subject/user may remain stationary in order to draw the subject's eyes in line with the iris imager camera on the mobile device and the iris image may then be captured and matched as discussed in detail above.
  • An “iris match” as determined by the module 1514 may refer to, among other things, a numerical score that represents the probability that the captured iris image corresponds to the known iris image of a specific person. The “iris match” parameters are tunable, and can be set, for example, based on the accuracy requirements of a particular implementation of the module 1514 (e.g., how stringent is the test for acceptance of the subject as matching the identity of a known subject). As mentioned above with reference to FIGS. 1-13, the illustrative module 1514 computes a Hamming distance between an iris code representative of the captured iris image and the iris code representative of a reference iris image. In information theory, the Hamming distance between two strings of equal length is the number of positions at which the corresponding symbols are different. Put another way, the Hamming distance measures the minimum number of substitutions required to change one string into the other, or the number of errors that transformed one string into the other. So, for example, if the module 1514 uses a Hamming distance of 0.35, that corresponds to a 1:133,000 false accept rate. Similarly, if the module 1514 is configured to use a Hamming distance of 0.28, the false accept rate is 1:10E11.
  • Example Usage Scenarios
  • Numerous applications of the disclosed technology exist that would benefit if the user and/or subject who is at a location, accessing an object, entering a premises, etc. could be accurately authenticated, verified, identified, or biometrically recorded at that instant in time for a variety of reasons. Many of these instances do not require one to know who the person is at that time. To date, this has not been possible due to the cumbersome nature of creating a biometric record and/or accurately matching to an existing template for the user or subject.
  • Today, documenting/recording the presence of an individual at a location at a moment in time is typically managed by the individual being identified by another person by sight, via a set of questions, and/or by inspection of credentials presented by the individual, such as a passport, driver's license, or employee badge (which must be validated), or by recording a video or photograph of the individual at that location. None of these approaches is entirely accurate. The process of inspecting credentials only validates the credentials presented; it does not validate that the person holding those credentials is actually the person described on the credentials. In addition, videos and photos can be easily manipulated to inaccurately record or misrepresent the presence of a person at a specific location.
  • The ability to record the presence of a user or subject by using an iris biometric collection device (which may be incorporated into another type of device, such as a fixed or mobile electronic device) that uses strobe illumination above the continuous wave eye safe limit would allow documentation that the actual person was at that location, accessed an item, used a service, or obtained a benefit at the specific time. The use of strobe illumination above the continuous wave eye safe limits allows collection of the biometric image in all lighting conditions (indoor, outdoor, bright sunlight, extreme darkness) and without requiring the subject or user to be stationary. Unlike existing biometric iris readers, the disclosed devices can be equipped with wired and/or wireless connectivity to maintain the most recent data on the device. Use of the iris as the enabling biometric allows identity to be determined without touching the subject, as a fingerprint reader requires, and is less obtrusive than other biometric identification modalities. The implementations disclosed herein allow the collection of a high quality record with cooperative or uncooperative subjects, including in covert operations. Recording of the person's iris at a location at a specific time can be used as verifiable proof that the specific person was at a particular location. The relevant location information can be captured as well (e.g., by a Global Positioning System or cellular location-based system) and stored along with the iris image and/or associated information. The biometric collection device described may be used alone or in conjunction with other collection and authentication techniques (e.g., PIN, pattern, different biometric) if multiple levels of authentication are desired.
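As one illustration, a presence record of the kind described might pair the captured iris data with a timestamp and location. This is a sketch under assumed field names, not a prescribed record format:

```python
import time
from dataclasses import dataclass
from typing import Optional

@dataclass
class PresenceRecord:
    identity: Optional[str]   # matched identity, or None if only collected
    iris_code: str            # encoded iris image data
    latitude: float           # from GPS or a cellular location-based system
    longitude: float
    captured_at: float        # UNIX timestamp of the capture

def record_presence(identity: Optional[str], iris_code: str,
                    latitude: float, longitude: float) -> PresenceRecord:
    """Bundle one capture into a verifiable presence record."""
    return PresenceRecord(identity, iris_code, latitude, longitude, time.time())
```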
  • Examples of events, activities, or locations where the ability to document/record the presence of or access by a person at a specific time is valuable are as follows: safes and safety deposit boxes; amusement parks; animal tagging and tracking (domestic, wild, aquatic, etc.); appliances (refrigerator, oven, gym equipment); assisted living facilities; automated teller machines; automated gate control; background checks; blood donors/Red Cross; brokerage accounts; casinos; check cashing agencies; child day care facilities; commercial shipping facilities; cruise ships; datacenter cabinets; detox centers; document screening activity; driver vehicle enrollment; drug testing collection locations; entertainment facilities (club, theater, concert hall, skyboxes, stadiums, etc.); entitlement program activities; E-ZPass authorization; fire drills; first responders securing an event; gun access; half-way houses; health clubs/gyms/spas; hospitals; hotels/motels; insurance claim validations; large clinical studies; law enforcement activities; libraries; medical labs (Quest/LabCorp); mining operations; parole tracking; patient history; pay per usage; prisons; property storage locations; real-time monitoring of a person using a computer; refuge tracking; rehabilitation clinics; resorts; retail services; schools; shopper loyalty; ski lifts; sporting events; tax preparation and payment services; tele-medical services; tradeshows/conferences; validation of service personnel; vehicle management; voting and petitions; workforce management; and/or others.
  • FIG. 19 illustrates the iris biometric module 1514 of FIGS. 14 and 15 (or any other embodiment of the iris biometric module) implemented within a mobile device 2100 (e.g., a mobile telephone, a tablet, an electronic watch or bracelet, or any other mobile device). The mobile device 2100 may be any mobile or semi-mobile electronic device, for example, a personal computer, laptop computer, tablet computer, e-reader, smartphone, personal data assistant, set-top box, digital media player, microconsole, home automation system, or other computing device having a processor, central processing unit, microprocessor, or other suitable processor. The mobile device 2100 may include a display 2102, which may be a monitor, liquid crystal display screen, light-emitting diode (LED) or organic LED (OLED) screen, or other video display. The display 2102 may also function as an input device, such as a capacitive, resistive, or inductive touchscreen.
  • The iris biometric module 1514 may be configured as an add-on to the mobile device 2100 or may be incorporated into the mobile device 2100, for example, as part of a camera 2104 within the mobile device 2100. In some embodiments in which the iris biometric module 1514 is incorporated within the mobile device 2100, the iris imager assembly 1626 may replace or work in conjunction with the mobile device camera 2104.
  • The iris processor 100 described above with respect to FIGS. 1-13 may be implemented within the mobile device 2100 system in any suitable manner, for example, as discussed with respect to FIG. 12. In one embodiment, the iris processor 100 may be a separate processor within the mobile device 2100, or the mobile device 2100 may include a single processor that implements all mobile device functionality, including the functionality of the iris processor 100. The iris processor 100 may be used to identify and authorize a user (or users) of the mobile device 2100. By using identity information derived from iris biometrics, the iris processor 100 may determine whether the user is an authorized user of the mobile device 2100 and/or may determine what applications, settings, and/or other features of the mobile device 2100 may be accessed by the user. In at least one embodiment, a first user may have permission to use all or a first subset of applications, settings, and/or features (e.g., a parent that can access all applications, settings, and/or other features on the mobile device 2100) and a second user may have permission to use all or a second subset of applications, settings, and/or features (e.g., a child that may only be able to access child-friendly applications on the mobile device 2100), as in the sketch below. In some embodiments, the mobile device 2100 may be configured to require identification and authorization to access the device (i.e., as a login procedure) and/or may be configured to require identification and authorization to access one or more applications, settings, and/or features on the mobile device 2100.
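A minimal sketch of that per-user gating follows, assuming a hypothetical PERMISSIONS table keyed by whatever identity label the iris processor 100 reports:

```python
# Hypothetical identity-to-grants table; "all" stands for unrestricted use.
PERMISSIONS = {
    "parent": {"all"},
    "child": {"games", "education"},
}

def may_open(identity: str, app: str) -> bool:
    """Return True if the identified user may open the requested app."""
    grants = PERMISSIONS.get(identity, set())   # unknown users get nothing
    return "all" in grants or app in grants

# may_open("child", "games") -> True; may_open("child", "banking") -> False
```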
  • The iris processor 100 may optionally be implemented within a display or marquee for displaying advertisements and/or a device co-located or associated with a display or marquee.
  • Implementation Examples
  • Referring now to FIG. 18, a simplified block diagram of an iris biometric recognition-enabled system 2000 is shown. While the illustrative embodiment 2000 is shown as involving multiple components and devices, it should be understood that the system 2000 may constitute a single device, alone or in combination with other devices. The system 2000 includes an iris biometric recognition module 2010, an iris biometric-controlled mechanism 2050, one or more other devices and/or systems 2062, and a server computing device 2070. Each or any of the devices/systems 2010, 2050, 2062, 2070 may be in communication with one another via one or more electronic communication links 2048.
  • The system 2000 or portions thereof may be distributed across multiple computing devices as shown. In other embodiments, however, all components of the system 2000 may be located entirely on, for example, the iris biometric recognition module 2010 or one of the devices 2050, 2062, 2070. In some embodiments, portions of the system 2000 may be incorporated into other systems or computer applications. Such applications or systems may include, for example, commercial off-the-shelf (COTS) or custom-developed cameras, operating systems, authentication systems, or access control systems. As used herein, “application” or “computer application” may refer to, among other things, any type of computer program or group of computer programs, whether implemented in software, hardware, or a combination thereof, and includes self-contained, vertical, and/or shrink-wrapped software applications, distributed and cloud-based applications, and/or others. Portions of a computer application may be embodied as firmware, as one or more components of an operating system, a runtime library, an application programming interface (API), as a self-contained software application, or as a component of another software application, for example.
  • The illustrative iris biometric recognition module 2010 includes at least one processor 2012 (e.g., a microprocessor, microcontroller, digital signal processor, etc.), memory 2014, and an input/output (I/O) subsystem 2016. The module 2010 may be embodied as any type of electronic or electromechanical device capable of performing the functions described herein. Although not specifically shown, it should be understood that the I/O subsystem 2016 can include, among other things, an I/O controller, a memory controller, and one or more I/O ports. The processor 2012 and the I/O subsystem 2016 are communicatively coupled to the memory 2014. The memory 2014 may be embodied as any type of suitable computer memory device, including fixed and/or removable memory devices (e.g., volatile memory such as a form of random access memory, or a combination of random access memory and read-only memory, such as memory cards (e.g., SD cards), memory sticks, hard drives, and/or others).
  • The I/O subsystem 2016 is communicatively coupled to a number of hardware and/or software components, including computer program components 1818 such as those shown in FIG. 16 or portions thereof, illuminator(s) 2030 (e.g., face and iris illuminators 1816), an imaging subsystem 2032 (which may include separate face and iris imagers 2034, 2036), a motor 2038, and one or more motion and/or location sensors 2040. As used herein, an “imager” or “camera” may refer to any device that is capable of acquiring and recording two-dimensional (2D) or three-dimensional (3D) still or video images of portions of the real-world environment, and may include cameras with one or more fixed camera parameters and/or cameras having one or more variable parameters, fixed-location cameras (such as “stand-off” cameras that are installed in walls or ceilings), and/or mobile cameras (such as cameras that are integrated with consumer electronic devices, such as laptop computers, smart phones, tablet computers, wearable electronic devices, and/or others).
  • The I/O subsystem 2016 is also communicatively coupled to one or more data storage devices 2020, a communication subsystem 2028, a user interface subsystem 2042, and a power supply 2044 (e.g., a battery). The user interface subsystem 2042 may include, for example, hardware or software buttons or actuators, a keypad, a display device, visual cue illuminators, and/or others. It should be understood that each of the foregoing components and/or systems may be integrated with the module 2010 or may be a separate component or system that is in communication with the I/O subsystem 2016 (e.g., over a network or a bus connection). In some embodiments, the UI subsystem 2042 includes a push button or similar mechanism for initiating the iris image enrollment process described above. In other embodiments, the iris image enrollment process takes place off the module 2010, e.g., on another device, such as a desktop computing device. Alternatively or in addition, iris image enrollment capabilities can be provided at a “central” module or server computer and then propagated to other modules 2010, e.g., via a communications network. For instance, in access control applications, enrollment may take place at a main entrance to a facility or security command center. Privileges can be determined at the central module or server and then pushed out to or “downloaded” by the individual door lock assemblies in the facility.
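A rough sketch of that enroll-centrally-then-propagate pattern is below; the DoorModule class and its download method are hypothetical stand-ins for the networked door lock assemblies:

```python
from typing import Dict, List, Set

class DoorModule:
    """Stand-in for one networked door-lock assembly."""
    def __init__(self) -> None:
        self.templates: Dict[str, str] = {}
        self.privileges: Dict[str, Set[str]] = {}

    def download(self, identity: str, template: str,
                 privileges: Set[str]) -> None:
        # In deployment this data would arrive over the communications network.
        self.templates[identity] = template
        self.privileges[identity] = privileges

def enroll_and_propagate(identity: str, iris_template: str,
                         privileges: Set[str],
                         doors: List[DoorModule]) -> None:
    """Enroll at the central module or server, then push to every lock."""
    for door in doors:
        door.download(identity, iris_template, privileges)
```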
  • The data storage device 2020 may include one or more hard drives or other suitable data storage devices (e.g., flash memory, memory cards, memory sticks, and/or others). In some embodiments, portions of the system 2000 containing data or stored information, e.g., a database of reference images 1836, iris matching data/rules 2024 (e.g., access control logic or business logic for determining when an iris match has occurred and what to do when an iris match does or does not occur), iris imager configuration data/rules 2026 (e.g., mapping tables or functions for mapping iris imager tilt angles to motor control parameters), and/or other data, reside at least temporarily in the storage media 2020. Portions of the system 2000, e.g., the iris image database 2022, the iris matching data/rules 2024, the iris imager configuration data/rules 2026, and/or other data, may be copied to the memory 2014 during operation of the module 2010, for faster processing or other reasons.
  • The communication subsystem 2028 communicatively couples the module 2010 to one or more other devices, systems, or communication networks, e.g., a local area network, wide area network, personal cloud, enterprise cloud, public cloud, and/or the Internet. Accordingly, the communication subsystem 2028 may include a databus, a datalink, and/or one or more wired or wireless network interfaces implemented in software, firmware, or hardware, for example, as may be needed pursuant to the specifications and/or design of the particular embodiment of the module 2010.
  • The iris biometric-controlled mechanism 2050, the other device(s)/system(s) 2062, and the server computing device 2070 each may be embodied as any suitable type of computing device, electronic device, or electromechanical device capable of performing the functions described herein, such as any of the aforementioned types of devices or other electronic devices. For example, in some embodiments, the server computing device 2070 may operate a “back end” portion of the iris biometric computer program components 1818, by storing the reference images 1836, iris matching data/rules 2024, and/or iris imager configuration data/rules 2026, in a data storage device 2080 or by performing other functions of the module 2010. In general, components of the server computing device 2070 having similar names to components of the module 2010 described above (e.g., processor 2072, memory 2074, I/O subsystem 2076) may be embodied analogously. The illustrative server computing device 2070 also includes a user interface subsystem 2082, a communication subsystem 2084, and an iris image enrollment system 2078 (which may capture and evaluate iris images for enrollment purposes, similar to the iris image enrollment module 1834 described above).
  • Further, each of the mechanisms/devices/systems 2050, 2062 may include components similar to those described above in connection with the module 2010 and/or the server computing device 2070, or may be another type of electronic device (such as a portable electronic device or an embedded system, e.g., a vehicle infotainment system or smart appliance system). For example, the iris biometric-controlled mechanism 2050 includes one or more processors 2052, memory 2054, and an I/O subsystem 2056 (analogous to the processor 2012, memory 2014, and I/O subsystem 2016), an on-board power supply 2058 (e.g., a battery), and an access control module 1832 (e.g., to perform access control logic in response to an iris match determination made by the module 2010). The system 2000 may include other components, sub-components, and devices not illustrated in FIG. 18 for clarity of the description. In general, the components of the system 2000 are communicatively coupled as shown in FIG. 18 by one or more electronic communication links 2048, e.g., signal paths, which may be embodied as any type of wired or wireless signal paths capable of facilitating communication between the respective devices and components, including direct connections, public and/or private network connections (e.g., Ethernet, Internet, etc.), or a combination thereof, and including short range (e.g., Near Field Communication) and longer range (e.g., Wi-Fi or cellular) wireless communication links.
  • In some exemplary embodiments, an iris scanning system may be installed in a public location co-located with an advertisement. When an individual looks at the advertisement, iris images may be collected. These images may be compared to, for example, templates in a historical database. If no match for the iris is found, the hardware/software may collect and store data and other information regarding the iris, and there would be no change in the advertisement. However, if a match is found, the advertisement may be tailored to the individual, if possible, and additional subject data may be displayed. If it is not possible to tailor the ad to the individual, the ad will not be changed and additional subject data may be collected.
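A compact sketch of that decision logic follows; the template store, sighting log, and ad lookup are hypothetical placeholders for whatever hardware/software the deployment uses:

```python
from typing import Dict, List, Optional

def handle_capture(iris_code: str,
                   templates: Dict[str, str],       # iris code -> identity
                   unknown_log: List[str],          # unmatched captures
                   sightings: List[str],            # additional subject data
                   current_ad: str,
                   ads_by_identity: Dict[str, str]) -> str:
    """Return the advertisement to display after one iris capture."""
    identity = templates.get(iris_code)   # stand-in for real template matching
    if identity is None:
        unknown_log.append(iris_code)     # collect/store data on the new iris
        return current_ad                 # advertisement is not changed
    sightings.append(identity)            # record additional subject data
    tailored: Optional[str] = ads_by_identity.get(identity)
    return tailored if tailored is not None else current_ad
```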
  • In the disclosed invention, a user may opt to enroll their iris image(s), and the user's iris images may be enrolled and stored in a database. Upon initiation of a financial transaction by a user, a software application may request permission to scan the iris of the user. If the user refuses permission, the transaction may be terminated. However, if the user grants permission, the software application may collect the images of the iris of the user and match the images against the database. If the image is rejected, the transaction may be terminated; however, if the image is accepted, the transaction may be completed.
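The minimal sketch below captures that consent-gated flow, with the permission prompt, scanner, and matcher reduced to placeholder inputs:

```python
def authorize_transaction(user_consents: bool,
                          scanned_code: str,
                          enrolled_codes: set) -> str:
    """Return 'completed' or 'terminated' per the flow described above."""
    if not user_consents:                    # user refuses the iris scan
        return "terminated"
    if scanned_code not in enrolled_codes:   # image rejected by the matcher
        return "terminated"
    return "completed"                       # image accepted

# e.g., authorize_transaction(True, "10110100", {"10110100"}) == "completed"
```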
  • Additional Examples
  • Illustrative examples of the technologies disclosed herein are provided below. An embodiment of the technologies may include any one or more, and any combination of, the examples described below.
  • In an example 1, a biometric content display device includes a face imager device having a field of view, a camera, an illuminator device including one or more illuminators that emit light into the field of view, an electronic video display device to display personalized electronic advertising content to a person within the field of view, the electronic advertising content comprising targeted marketing for the person, an input device, memory storing program instructions, and a processor communicatively coupled to the face imager, the camera, the illuminator device, the electronic video display, the input device, and the memory, and further communicatively coupled to a database storing a reference iris image, the processor executing the program instructions to: receive an input signal from the input device detecting the presence of a human face in a capture zone defined at least in part by the field of view; responsive to detection of the human face in the capture zone: align the camera with the iris of the person; send a first control signal to the illuminator device, the first control signal activating a synchronous emission of light by the one or more illuminators at a pulse frequency above continuous wave eye safe limits; send a second control signal to the camera, the second control signal activating a capture of a plurality of digital images by the camera, the capture being synchronous with the synchronous emission of light; receive the plurality of digital images from the camera; access the database to retrieve the reference iris image, determine a demographic characteristic or preference of the person, and find personalized electronic advertising content specific to the person, the electronic advertising content being associated with the demographic characteristic or preference of the person; determine that a first digital image of the plurality of digital images matches the reference iris image; and display the personalized electronic advertising content.
  • An example 2 includes the subject matter of example 1, wherein the electronic display device comprises an electronic billboard or marquee.
  • An example 3 includes the subject matter of any of examples 1 or 2, wherein the processor executes the program instructions to determine that the first digital image of the plurality of digital images does not match the reference iris image, store, in the database, the plurality of digital images from the camera, and maintain a display of a current electronic advertisement on the electronic video display device.
  • An example 4 includes the subject matter of any of examples 1, 2, or 3, wherein the face imager device, the camera, the illuminator device, the input device, the memory, and the processor are incorporated into the electronic video display.
  • An example 5 includes the subject matter of any of examples 1, 2, 3, or 4, wherein the illuminator device comprises at least one infrared illuminator and the program instructions operate the illuminator device to illuminate the iris with a pulsing or strobe illumination operation above continuous wave eye safe limits.
  • An example 6 includes the subject matter of any of examples 1, 2, 3, 4, or 5, wherein the electronic video display device comprises a mobile device, the face imager device, the camera, the illuminator device, the input device, the memory, and the processor are incorporated into the mobile device, and the personalized electronic advertising content comprises a layout or advertisement displayed on the mobile device and specific to the person.
  • An example 7 includes the subject matter of any of examples 1, 2, 3, 4, 5, or 6, wherein the face imager device, the camera, the input device, the memory, and the processor are communicatively coupled, via wired or wireless internet connectivity, to a local, remote, or cloud database and the local, remote, or cloud database stores at least one record of: the determination that the first digital image of the plurality of digital images matches the reference iris image, a demographic characteristic or preference of the person, an action performed by the person, a timestamp for the action, an amount of time that the action was performed, and at least one habit, pattern, or interest associated with the person.
  • An example 8 includes the subject matter of any of examples 1, 2, 3, 4, 5, 6, or 7, wherein the personalized electronic advertising content specific to the person on the electronic video display device is terminated after a predetermined time period.
  • In an example 9, an iris biometric content display system comprises an iris biometric recognition module that authenticates a person to display a plurality of electronic content specific to the person, the iris biometric recognition module comprising: a face imager device, an iris imager device comprising a lens, an illuminator device, an electronic video display device to display the plurality of electronic content, memory storing program instructions, and one or more processors communicatively coupled to the face imager device, the iris imager device, the illuminator device, the electronic video display device, and the memory, the one or more processors executing the program instructions to: detect the presence of a human face in a capture zone defined at least in part by a field of view of the face imager device, responsive to detection of the human face in the capture zone, align the lens of the iris imager device with an iris of the person, operate the illuminator device to illuminate the iris, operate the iris imager device to produce a digital image of the iris, compare the digital image to a reference iris image, and responsive to a determination that the digital image matches the reference iris image, query a database for data specific to the person, compare the data with the plurality of content, and in response to a determination that the data matches the plurality of content, display the plurality of content specific to the person on the electronic video display device.
  • An example 10 includes the subject matter of example 9, wherein the electronic video display comprises an electronic advertisement display, the iris imager device, the illuminator device, and the iris biometric recognition module are incorporated into the electronic advertisement display, and the plurality of content comprises an electronic advertisement specific to the person.
  • An example 11 includes the subject matter of any of examples 9 or 10, wherein the illuminator device comprises at least one infrared illuminator and the one or more processors executing the program instructions operate the illuminator device to illuminate the iris with a pulsing or strobe illumination operation above continuous wave eye safe limits.
  • An example 12 includes the subject matter of any of examples 9, 10, or 11, wherein the electronic video display device comprises a mobile device, the iris imager device, the illuminator device and the iris biometric recognition module are incorporated into a mobile device, and the plurality of content comprises a layout or advertisement displayed on the mobile device and specific to the person.
  • An example 13 includes the subject matter of any of examples 9, 10, 11, or 12, wherein the iris biometric recognition module is communicatively coupled, via wired or wireless internet connectivity, to a local, remote or cloud database and the local, remote or cloud database stores at least one record of: the comparison of the extracted portions of the selected images and the reference image, an action performed by the person, a timestamp for the action, an amount of time that the action was performed, and at least one habit, pattern, or interest associated with the person.
  • An example 14 includes the subject matter of any of examples 9, 10, 11, 12, or 13, wherein the one or more processors execute the program instructions to terminate the display of the plurality of content specific to the person on the electronic video display device after a predetermined time period.
  • In an example 15, a method comprises authenticating a person to display a plurality of electronic content specific to the person via an iris biometric recognition module comprising program instructions stored in memory and causing one or more processors to execute the steps of: detecting the presence of a human face in a capture zone defined at least in part by a field of view of a face imager device; responsive to detection of the human face in the capture zone, aligning a lens of an iris imager device with the iris of the person; operating an illuminator device to illuminate the iris; operating the iris imager device to produce a digital image of the iris; comparing the digital image to a reference iris image; and responsive to a determination that the digital image matches the reference iris image: querying a database for data specific to the person, comparing the data with the plurality of content, and in response to a determination that the data matches the plurality of content, displaying the plurality of content specific to the person on an electronic video display device.
  • An example 16 includes the subject matter of example 15, wherein authenticating the person to display a plurality of electronic content specific to the person via the iris biometric recognition module is executed via an electronic advertisement specific to the person displayed on an electronic advertising display incorporating the iris imager device, the illuminator device and the one or more processors.
  • An example 17 includes the subject matter of any of examples 15 or 16, wherein operating the illuminator device comprises operating an infrared illuminator and operating the infrared illuminator comprises illuminating the iris with a pulsing or strobe illumination operation above continuous wave eye safe limits.
  • An example 18 includes the subject matter of any of examples 15, 16, or 17, wherein authenticating the person to display the plurality of electronic content specific to the person via the iris biometric recognition module is executed via a layout or advertisement displayed on a mobile device incorporating the iris imager device, the illuminator device, and the one or more processors.
  • An example 19 includes the subject matter of any of examples 15, 16, 17, or 18, and further comprises accessing via wired or wireless internet connectivity, from a local, remote, or cloud database: the comparison of the extracted portions of the selected images and the reference image, an action performed by the person, a timestamp for the action, an amount of time that the action was performed, and at least one habit, pattern, or interest associated with the person.
  • An example 20 includes the subject matter of any of examples 15, 16, 17, 18, or 19, and further comprises terminating the display of the plurality of content specific to the person on the electronic video display device after a predetermined time period.
  • GENERAL CONSIDERATIONS
  • In the foregoing description, numerous specific details, examples, and scenarios are set forth in order to provide a more thorough understanding of the present disclosure. It will be appreciated, however, that embodiments of the disclosure may be practiced without such specific details. Further, such examples and scenarios are provided for illustration, and are not intended to limit the disclosure in any way. Those of ordinary skill in the art, with the included descriptions, should be able to implement appropriate functionality without undue experimentation.
  • References in the specification to “an embodiment,” etc., indicate that the embodiment described may include a particular feature, structure, or characteristic, but every embodiment may not necessarily include the particular feature, structure, or characteristic. Such phrases are not necessarily referring to the same embodiment. Further, when a particular feature, structure, or characteristic is described in connection with an embodiment, it is believed to be within the knowledge of one skilled in the art to effect such feature, structure, or characteristic in connection with other embodiments whether or not explicitly indicated.
  • Embodiments in accordance with the disclosure may be implemented in hardware, firmware, software, or any combination thereof. Embodiments may also be implemented as instructions stored using one or more machine-readable media, which may be read and executed by one or more processors. A machine-readable medium may include any mechanism for storing or transmitting information in a form readable by a machine (e.g., a computing device or a “virtual machine” running on one or more computing devices). For example, a machine-readable medium may include any suitable form of volatile or non-volatile memory.
  • Modules, data structures, blocks, and the like are referred to as such for ease of discussion, and are not intended to imply that any specific implementation details are required. For example, any of the described modules and/or data structures may be combined or divided into sub-modules, sub-processes or other units of computer code or data as may be required by a particular design or implementation. In the drawings, specific arrangements or orderings of schematic elements may be shown for ease of description. However, the specific ordering or arrangement of such elements is not meant to imply that a particular order or sequence of processing, or separation of processes, is required in all embodiments. In general, schematic elements used to represent instruction blocks or modules may be implemented using any suitable form of machine-readable instruction, and each such instruction may be implemented using any suitable programming language, library, application-programming interface (API), and/or other software development tools or frameworks. Similarly, schematic elements used to represent data or information may be implemented using any suitable electronic arrangement or data structure. Further, some connections, relationships or associations between elements may be simplified or not shown in the drawings so as not to obscure the disclosure. This disclosure is to be considered as exemplary and not restrictive in character, and all changes and modifications that come within the spirit of the disclosure are desired to be protected.

Claims (20)

The claimed invention is:
1. A biometric content display device comprising:
a face imager device having a field of view;
a camera;
an illuminator device comprising one or more illuminators that emit light into the field of view;
an electronic video display device to display personalized electronic advertising content to a person within the field of view, the electronic advertising content comprising targeted marketing for the person;
an input device;
memory storing program instructions; and
a processor communicatively coupled to the face imager, the camera, the illuminator device, the electronic video display, the input device, and the memory, and further communicatively coupled to a database storing a reference iris image, the processor executing the program instructions to:
receive an input signal from the input device detecting the presence of a human face in a capture zone defined at least in part by the field of view;
responsive to detection of the human face in the capture zone:
align the camera with the iris of the person;
send a first control signal to the illuminator device, the first control signal activating a synchronous emission of light by the one or more illuminators at a pulse frequency above continuous wave eye safe limits;
send a second control signal to the camera, the second control signal activating a capture of a plurality of digital images by the camera, the capture being synchronous with the synchronous emission of light;
receive the plurality of digital images from the camera;
access the database to:
retrieve the reference iris image;
determine a demographic characteristic or preference of the person; and
find personalized electronic advertising content specific to the person, the electronic advertising content being associated with the demographic characteristic or preference of the person;
determine that a first digital image of the plurality of digital images matches the reference iris image; and
display the personalized electronic advertising content.
2. The biometric content display device of claim 1, wherein the electronic display device comprises an electronic billboard or marquee.
3. The biometric content display device of claim 1, wherein the processor executes the program instructions to:
determine that the first digital image of the plurality of digital images does not match the reference iris image;
store, in the database, the plurality of digital images from the camera; and
maintain a display of a current electronic advertisement on the electronic video display device.
4. The biometric content display device of claim 1, wherein the face imager device, the camera, the illuminator device, the input device, the memory, and the processor are incorporated into the electronic video display.
5. The biometric content display device of claim 1, wherein:
the illuminator device comprises at least one infrared illuminator; and
the program instructions operate the illuminator device to illuminate the iris with a pulsing or strobe illumination operation above continuous wave eye safe limits.
6. The biometric content display device of claim 1, wherein:
the electronic video display device comprises a mobile device;
the face imager device, the camera, the illuminator device, the input device, the memory, and the processor are incorporated into the mobile device; and
the personalized electronic advertising content comprises a layout or advertisement displayed on the mobile device and specific to the person.
7. The biometric content display device of claim 1, wherein:
the face imager device, the camera, the input device, the memory, and the processor are communicatively coupled, via wired or wireless internet connectivity, to a local, remote or cloud database; and
the local, remote or cloud database stores at least one record of:
the determination that the first digital image of the plurality of digital images matches the reference iris image;
a demographic characteristic or preference of the person;
an action performed by the person;
a timestamp for the action;
an amount of time that the action was performed; and
at least one habit, pattern or interest associated with the person.
8. The biometric content display device of claim 1, wherein the personalized electronic advertising content specific to the person on the electronic video display device is terminated after a predetermined time period.
9. An iris biometric content display system comprising:
an iris biometric recognition module that authenticates a person to display a plurality of electronic content specific to the person, the iris biometric recognition module comprising:
a face imager device;
an iris imager device comprising a lens;
an illuminator device;
an electronic video display device to display the plurality of electronic content; memory storing program instructions; and
one or more processors communicatively coupled to the face imager device, the iris imager device, the illuminator device, the electronic video display device, and the memory, the one or more processors executing the program instructions to:
detect the presence of a human face in a capture zone defined at least in part by a field of view of the face imager device;
responsive to detection of the human face in the capture zone, align the lens of the iris imager device with an iris of the person;
operate the illuminator device to illuminate the iris;
operate the iris imager device to produce a digital image of the iris;
compare the digital image to a reference iris image; and
responsive to a determination that the digital image matches the reference iris image:
query a database for data specific to the person;
compare the data with the plurality of content; and
in response to a determination that the data matches the plurality of content, display the plurality of content specific to the person on the electronic video display device.
10. The iris biometric content display system of claim 9, wherein:
the electronic video display comprises an electronic advertisement display;
the iris imager device, the illuminator device, and the iris biometric recognition module are incorporated into the electronic advertisement display; and
the plurality of content comprises an electronic advertisement specific to the person.
11. The iris biometric content display system of claim 9, wherein:
the illuminator device comprises at least one infrared illuminator; and
the one or more processors executing the program instructions operate the illuminator device to illuminate the iris with a pulsing or strobe illumination operation above continuous wave eye safe limits.
12. The iris biometric content display system of claim 9, wherein:
the electronic video display device comprises a mobile device;
the iris imager device, the illuminator device and the iris biometric recognition module are incorporated into a mobile device; and
the plurality of content comprises a layout or advertisement displayed on the mobile device and specific to the person.
13. The iris biometric content display system of claim 9, wherein:
the iris biometric recognition module is communicatively coupled, via wired or wireless internet connectivity, to a local, remote or cloud database; and
the local, remote or cloud database stores at least one record of:
the comparison of the extracted portions of the selected images and the reference image;
an action performed by the person;
a timestamp for the action;
an amount of time that the action was performed; and
at least one habit, pattern or interest associated with the person.
14. The iris biometric content display system of claim 9, wherein the one or more processors execute the program instructions to terminate the display of the plurality of content specific to the person on the electronic video display device after a predetermined time period.
15. A method comprising authenticating a person to display a plurality of electronic content specific to the person via an iris biometric recognition module comprising program instructions stored in memory and causing one or more processors to execute the steps of:
detecting the presence of a human face in a capture zone defined at least in part by a field of view of a face imager device;
responsive to detection of the human face in the capture zone, aligning a lens of an iris imager device with the iris of the person;
operating an illuminator device to illuminate the iris;
operating the iris imager device to produce a digital image of the iris;
comparing the digital image to a reference iris image; and
responsive to a determination that the digital image matches the reference iris image:
querying a database for data specific to the person;
comparing the data with the plurality of content; and
in response to a determination that the data matches the plurality of content, displaying the plurality of content specific to the person on an electronic video display device.
16. The method of claim 15, wherein authenticating the person to display a plurality of electronic content specific to the person via the iris biometric recognition module is executed via an electronic advertisement specific to the person displayed on an electronic advertising display incorporating the iris imager device, the illuminator device and the one or more processors.
17. The method of claim 15, wherein:
operating the illuminator device comprises operating an infrared illuminator; and
operating the infrared illuminator comprises illuminating the iris with a pulsing or strobe illumination operation above continuous wave eye safe limits.
18. The method of claim 15, wherein authenticating the person to display the plurality of electronic content specific to the person via the iris biometric recognition module is executed via a layout or advertisement displayed on a mobile device incorporating the iris imager device, the illuminator device and the one or more processors.
19. The method of claim 15, further comprising accessing via wired or wireless internet connectivity, from a local, remote or cloud database:
the comparison of the extracted portions of the selected images and the reference image;
an action performed by the person;
a timestamp for the action;
an amount of time that the action was performed; and
at least one habit, pattern or interest associated with the person.
20. The method of claim 15, further comprising terminating the display of the plurality of content specific to the person on the electronic video display device after a predetermined time period.
US16/036,023 2013-10-08 2018-07-16 Collecting and targeting marketing data and information based upon iris identification Abandoned US20180322343A1 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
US16/036,023 US20180322343A1 (en) 2013-10-08 2018-07-16 Collecting and targeting marketing data and information based upon iris identification

Applications Claiming Priority (6)

Application Number Priority Date Filing Date Title
US201361888130P 2013-10-08 2013-10-08
US201462054413P 2014-09-24 2014-09-24
US14/509,356 US9836647B2 (en) 2013-10-08 2014-10-08 Iris biometric recognition module and access control assembly
US14/509,366 US9836648B2 (en) 2013-10-08 2014-10-08 Iris biometric recognition module and access control assembly
US14/863,960 US10025982B2 (en) 2013-10-08 2015-09-24 Collecting and targeting marketing data and information based upon iris identification
US16/036,023 US20180322343A1 (en) 2013-10-08 2018-07-16 Collecting and targeting marketing data and information based upon iris identification

Related Parent Applications (1)

Application Number Title Priority Date Filing Date
US14/863,960 Continuation US10025982B2 (en) 2013-10-08 2015-09-24 Collecting and targeting marketing data and information based upon iris identification

Publications (1)

Publication Number Publication Date
US20180322343A1 true US20180322343A1 (en) 2018-11-08

Family

ID=55067813

Family Applications (2)

Application Number Title Priority Date Filing Date
US14/863,960 Active US10025982B2 (en) 2013-10-08 2015-09-24 Collecting and targeting marketing data and information based upon iris identification
US16/036,023 Abandoned US20180322343A1 (en) 2013-10-08 2018-07-16 Collecting and targeting marketing data and information based upon iris identification

Family Applications Before (1)

Application Number Title Priority Date Filing Date
US14/863,960 Active US10025982B2 (en) 2013-10-08 2015-09-24 Collecting and targeting marketing data and information based upon iris identification

Country Status (1)

Country Link
US (2) US10025982B2 (en)

Families Citing this family (31)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
GB0117418D0 (en) * 2001-07-17 2001-09-12 Storm Mason R Litecam
US10025982B2 (en) 2013-10-08 2018-07-17 Princeton Identity, Inc. Collecting and targeting marketing data and information based upon iris identification
JP6557222B2 (en) 2013-10-08 2019-08-07 プリンストン アイデンティティー インク Iris biometric recognition module and access control assembly
US10038691B2 (en) 2013-10-08 2018-07-31 Princeton Identity, Inc. Authorization of a financial transaction
US10042994B2 (en) 2013-10-08 2018-08-07 Princeton Identity, Inc. Validation of the right to access an object
CN105093492B (en) * 2014-05-22 2018-06-26 宁波舜宇光电信息有限公司 A kind of camera optical microscope group and iris camera module
US10425814B2 (en) 2014-09-24 2019-09-24 Princeton Identity, Inc. Control of wireless communication device capability in a mobile device with a biometric key
JP2018506872A (en) 2014-12-03 2018-03-08 プリンストン・アイデンティティー・インコーポレーテッド System and method for mobile device biometric add-on
JP6380360B2 (en) * 2015-12-10 2018-08-29 コニカミノルタ株式会社 Image processing system, image output device, terminal device, image output method, and computer program
KR20180102637A (en) 2016-01-12 2018-09-17 프린스톤 아이덴티티, 인크. Systems and methods of biometric analysis
US10366296B2 (en) 2016-03-31 2019-07-30 Princeton Identity, Inc. Biometric enrollment systems and methods
WO2017172695A1 (en) 2016-03-31 2017-10-05 Princeton Identity, Inc. Systems and methods of biometric anaysis with adaptive trigger
KR102648770B1 (en) 2016-07-14 2024-03-15 매직 립, 인코포레이티드 Deep neural network for iris identification
CN106339698A (en) * 2016-09-30 2017-01-18 乐视控股(北京)有限公司 Iris recognition-based ticket purchase method and device
KR102610030B1 (en) 2016-11-15 2023-12-04 매직 립, 인코포레이티드 Deep learning system for cuboid detection
TWI672608B (en) * 2017-02-15 2019-09-21 瑞昱半導體股份有限公司 Iris image recognition device and method thereof
US10607096B2 (en) 2017-04-04 2020-03-31 Princeton Identity, Inc. Z-dimension user feedback biometric system
KR102573482B1 (en) 2017-07-26 2023-08-31 프린스톤 아이덴티티, 인크. Biometric security system and method
EP3685313A4 (en) 2017-09-20 2021-06-09 Magic Leap, Inc. Personalized neural network for eye tracking
WO2019084189A1 (en) 2017-10-26 2019-05-02 Magic Leap, Inc. Gradient normalization systems and methods for adaptive loss balancing in deep multitask networks
FR3076016B1 (en) * 2017-12-26 2021-10-22 Thales Sa ELECTRONIC INTERFACE DEVICE BETWEEN AT LEAST ONE AVIONICS SYSTEM AND A SET OF SENSORS, AVIONICS INSTALLATION, COMMUNICATION PROCESS AND ASSOCIATED COMPUTER PROGRAM
ES2662912B2 (en) * 2018-01-24 2019-09-13 Univ Madrid Complutense Method and apparatus for corneal biometric recognition
CN108520243B (en) * 2018-04-11 2019-02-26 上海凯斯特民防设备有限公司 Shield door foreign bodies detection alarm system
CN108416341B (en) * 2018-05-25 2023-11-21 重庆青腾致汇科技有限公司 Novel biological recognition system
US11216541B2 (en) * 2018-09-07 2022-01-04 Qualcomm Incorporated User adaptation for biometric authentication
US11227155B2 (en) 2019-01-23 2022-01-18 Alclear, Llc Remote biometric identification and lighting
US10986090B1 (en) 2019-05-20 2021-04-20 Rapid7, Inc. Security orchestration and automation using biometric data
WO2020261424A1 (en) * 2019-06-26 2020-12-30 日本電気株式会社 Iris recognition device, iris recognition method, computer program, and recording medium
US11494809B2 (en) * 2019-10-25 2022-11-08 Biobrand Llc System for target online advertising using biometric information
JPWO2021130871A1 (en) * 2019-12-24 2021-07-01
CN113033321A (en) * 2021-03-02 2021-06-25 深圳市安软科技股份有限公司 Training method of target pedestrian attribute identification model and pedestrian attribute identification method

Family Cites Families (55)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5291560A (en) 1991-07-15 1994-03-01 Iri Scan Incorporated Biometric personal identification system based on iris analysis
US6850252B1 (en) * 1999-10-05 2005-02-01 Steven M. Hoffberg Intelligent electronic appliance system and method
US6714665B1 (en) 1994-09-02 2004-03-30 Sarnoff Corporation Fully automated iris recognition system utilizing wide and narrow fields of view
US5987459A (en) * 1996-03-15 1999-11-16 Regents Of The University Of Minnesota Image and document management system for content-based retrieval
US6055322A (en) 1997-12-01 2000-04-25 Sensor, Inc. Method and apparatus for illuminating and imaging eyes through eyeglasses using multiple sources of illumination
US6421462B1 (en) 1998-02-06 2002-07-16 Compaq Computer Corporation Technique for differencing an image
US6973203B1 (en) 1999-07-22 2005-12-06 Swisscom Mobile Ag Transaction method and suitable device therefor
JP3586431B2 (en) * 2001-02-28 2004-11-10 松下電器産業株式会社 Personal authentication method and device
JP2002330318A (en) 2001-04-27 2002-11-15 Matsushita Electric Ind Co Ltd Mobile terminal
US7167987B2 (en) 2001-08-29 2007-01-23 Hewlett-Packard Development Company, L.P. Use of biometrics to provide physical and logic access to computer devices
US7715595B2 (en) 2002-01-16 2010-05-11 Iritech, Inc. System and method for iris identification using stereoscopic face recognition
US7280678B2 (en) 2003-02-28 2007-10-09 Avago Technologies General Ip Pte Ltd Apparatus and method for detecting pupils
US7380938B2 (en) 2003-03-25 2008-06-03 Sarnoff Corporation Apparatus to detect and measure saccade and pupilary changes
US7599524B2 (en) 2003-04-04 2009-10-06 Sarnoff Corporation Method and apparatus for providing a robust object finder
US7652685B2 (en) 2004-09-13 2010-01-26 Omnivision Cdm Optics, Inc. Iris image capture devices and associated systems
JP2005334402A (en) * 2004-05-28 2005-12-08 Sanyo Electric Co Ltd Method and device for authentication
US7466308B2 (en) 2004-06-28 2008-12-16 Microsoft Corporation Disposing identifying codes on a user's hand to provide input to an interactive display application
US20060184243A1 (en) 2004-10-22 2006-08-17 Omer Yilmaz System and method for aligning an optic with an axis of an eye
JP4702598B2 (en) 2005-03-15 2011-06-15 オムロン株式会社 Monitoring system, monitoring apparatus and method, recording medium, and program
US7542628B2 (en) 2005-04-11 2009-06-02 Sarnoff Corporation Method and apparatus for providing strobed image capture
US7634114B2 (en) 2006-09-01 2009-12-15 Sarnoff Corporation Method and apparatus for iris biometric systems for use in an entryway
US20060274918A1 (en) 2005-06-03 2006-12-07 Sarnoff Corporation Method and apparatus for designing iris biometric systems for use in minimally constrained settings
WO2007025258A2 (en) 2005-08-25 2007-03-01 Sarnoff Corporation Methods and systems for biometric identification
KR101308368B1 (en) * 2006-03-03 2013-09-16 허니웰 인터내셔널 인코포레이티드 An iris recognition system having image quality metrics
WO2008032329A2 (en) * 2006-09-13 2008-03-20 Alon Atsmon Providing content responsive to multimedia signals
US8121356B2 (en) 2006-09-15 2012-02-21 Identix Incorporated Long distance multimodal biometric system and method
US7574021B2 (en) 2006-09-18 2009-08-11 Sarnoff Corporation Iris recognition for a secure facility
US9002073B2 (en) 2007-09-01 2015-04-07 Eyelock, Inc. Mobile identity platform
EP2215579A4 (en) 2007-11-29 2013-01-30 Wavefront Biometric Technologies Pty Ltd Biometric authentication using the eye
CA2711143C (en) * 2007-12-31 2015-12-08 Ray Ganong Method, system, and computer program for identification and sharing of digital images with face signatures
US8930238B2 (en) * 2008-02-21 2015-01-06 International Business Machines Corporation Pervasive symbiotic advertising system and methods therefor
US9131141B2 (en) * 2008-05-12 2015-09-08 Sri International Image sensor with integrated region of interest calculation for iris capture, autofocus, and gain control
US20100082398A1 (en) * 2008-09-29 2010-04-01 Yahoo! Inc. System for providing contextually relevant data
US20100278394A1 (en) 2008-10-29 2010-11-04 Raguin Daniel H Apparatus for Iris Capture
US8615596B1 (en) 2009-01-14 2013-12-24 Sprint Communications Company L.P. Communication method and system for providing content to a communication device according to a user preference
US8387858B2 (en) * 2009-06-01 2013-03-05 Synderesis Technologies, Inc. Consumer rewards systems and methods
US10178290B2 (en) 2010-02-17 2019-01-08 Sri International Method and apparatus for automatically acquiring facial, ocular, and iris images from moving subjects at long-range
KR101046459B1 (en) 2010-05-13 2011-07-04 아이리텍 잉크 An iris recognition apparatus and a method using multiple iris templates
US8955001B2 (en) * 2011-07-06 2015-02-10 Symphony Advanced Media Mobile remote media control platform apparatuses and methods
US8639058B2 (en) 2011-04-28 2014-01-28 Sri International Method of generating a normalized digital image of an iris of an eye
US8854446B2 (en) 2011-04-28 2014-10-07 Iristrac, Llc Method of capturing image data for iris code based identification of vertebrates
US8682073B2 (en) 2011-04-28 2014-03-25 Sri International Method of pupil segmentation
US8755607B2 (en) 2011-04-28 2014-06-17 Sri International Method of normalizing a digital image of an iris of an eye
US8473748B2 (en) 2011-09-27 2013-06-25 George P. Sampas Mobile device-based authentication
US9241200B2 (en) * 2011-10-11 2016-01-19 Verizon Patent And Licensing Inc. Targeted advertising
GB2497553B (en) 2011-12-13 2018-05-16 Irisguard Inc Improvements relating to iris cameras
US8811630B2 (en) 2011-12-21 2014-08-19 Sonos, Inc. Systems, methods, and apparatus to filter audio
US9373023B2 (en) 2012-02-22 2016-06-21 Sri International Method and apparatus for robustly collecting facial, ocular, and iris images using a single sensor
US9100825B2 (en) * 2012-02-28 2015-08-04 Verizon Patent And Licensing Inc. Method and system for multi-factor biometric authentication based on different device capture modalities
US8977560B2 (en) * 2012-08-08 2015-03-10 Ebay Inc. Cross-browser, cross-machine recoverable user identifiers
EP2929487A4 (en) * 2012-12-10 2016-08-10 SRI International Iris biometric matching system
US10038691B2 (en) 2013-10-08 2018-07-31 Princeton Identity, Inc. Authorization of a financial transaction
US10042994B2 (en) 2013-10-08 2018-08-07 Princeton Identity, Inc. Validation of the right to access an object
US10025982B2 (en) 2013-10-08 2018-07-17 Princeton Identity, Inc. Collecting and targeting marketing data and information based upon iris identification
JP6557222B2 (en) 2013-10-08 2019-08-07 Princeton Identity, Inc. Iris biometric recognition module and access control assembly

Patent Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20040151347A1 (en) * 2002-07-19 2004-08-05 Helena Wisniewski Face recognition system and method therefor
US20060101508A1 (en) * 2004-06-09 2006-05-11 Taylor John M Identity verification system
US20110138444A1 (en) * 2009-12-04 2011-06-09 Lg Electronics Inc. Augmented remote controller and method for operating the same
US20110213665A1 (en) * 2010-02-26 2011-09-01 Bank Of America Corporation Bank Based Advertising System
US20120054028A1 (en) * 2010-08-31 2012-03-01 General Motors Llc Method of advertising to a targeted vehicle
US20130089240A1 (en) * 2011-10-07 2013-04-11 Aoptix Technologies, Inc. Handheld iris imager
US20150026708A1 (en) * 2012-12-14 2015-01-22 Biscotti Inc. Physical Presence and Advertising

Cited By (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110675518A (en) * 2019-08-20 2020-01-10 Wu Yiguang Intelligent attendance management and interaction method and system
CN110648433A (en) * 2019-09-11 2020-01-03 CRSC Communication & Information Group Co., Ltd. Real-name system verification gate and method of use thereof
WO2021107448A1 (en) * 2019-11-25 2021-06-03 Data Marketing Korea Co., Ltd. Method and apparatus for providing knowledge graph-based marketing information analysis service to support efficient document classification processing
US20220044302A1 (en) * 2020-08-07 2022-02-10 International Business Machines Corporation Smart contact lenses based shopping
US11468496B2 (en) * 2020-08-07 2022-10-11 International Business Machines Corporation Smart contact lenses based shopping

Also Published As

Publication number Publication date
US10025982B2 (en) 2018-07-17
US20160012292A1 (en) 2016-01-14

Similar Documents

Publication Publication Date Title
US10025982B2 (en) Collecting and targeting marketing data and information based upon iris identification
US10038691B2 (en) Authorization of a financial transaction
US10042994B2 (en) Validation of the right to access an object
US10425814B2 (en) Control of wireless communication device capability in a mobile device with a biometric key
US9836648B2 (en) Iris biometric recognition module and access control assembly
JP2017502366A5 (en)
US9665772B2 (en) Iris biometric matching system
US20220165087A1 (en) Systems and methods for performing fingerprint based user authentication using imagery captured using mobile devices
WO2012111664A1 (en) Authentication device, authentication program, and authentication method
US20170316419A1 (en) Image analysis for live human detection
JP6792986B2 (en) Biometric device
JP5685272B2 (en) Authentication apparatus, authentication program, and authentication method

Legal Events

Date Code Title Description
AS Assignment

Owner name: SRI INTERNATIONAL, CALIFORNIA

Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNORS:PERNA, STEVEN N.;CLIFTON, MARK A.;KIM, JONGJIN;AND OTHERS;SIGNING DATES FROM 20150922 TO 20150924;REEL/FRAME:046359/0889

Owner name: PRINCETON IDENTITY, INC., NEW JERSEY

Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNOR:SRI INTERNATIONAL;REEL/FRAME:046360/0094

Effective date: 20160729

Owner name: PRINCETON IDENTITY, INC., NEW JERSEY

Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNOR:GREEN, JOHN TIMOTHY;REEL/FRAME:046555/0651

Effective date: 20171027

STPP Information on status: patent application and granting procedure in general

Free format text: DOCKETED NEW CASE - READY FOR EXAMINATION

STPP Information on status: patent application and granting procedure in general

Free format text: NON FINAL ACTION MAILED

STCB Information on status: application discontinuation

Free format text: ABANDONED -- FAILURE TO RESPOND TO AN OFFICE ACTION