WO2022066817A1 - Automatic biometric selection based on the quality of an acquired image - Google Patents

Automatic biometric selection based on the quality of an acquired image

Info

Publication number
WO2022066817A1
Authority
WO
WIPO (PCT)
Prior art keywords
eye
user
biometric
images
camera
Prior art date
Application number
PCT/US2021/051615
Other languages
English (en)
Original Assignee
Sterling Labs Llc
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Sterling Labs Llc filed Critical Sterling Labs Llc
Priority to CN202180078422.9A (published as CN116472564A)
Priority to US18/027,916 (published as US20230379564A1)
Priority to EP21794287.9A (published as EP4217920A1)
Publication of WO2022066817A1

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 - Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10 - Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16 - Human faces, e.g. facial parts, sketches or expressions
    • G06V40/161 - Detection; Localisation; Normalisation
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 - Arrangements for image or video recognition or understanding
    • G06V10/98 - Detection or correction of errors, e.g. by rescanning the pattern or by human intervention; Evaluation of the quality of the acquired patterns
    • G06V10/993 - Evaluation of the quality of the acquired pattern
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 - Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10 - Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/18 - Eye characteristics, e.g. of the iris
    • G06V40/19 - Sensors therefor
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 - Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/40 - Spoof detection, e.g. liveness detection
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 - Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/70 - Multimodal biometrics, e.g. combining information from different biometric modalities

Definitions

  • An eye or gaze tracker is a device for estimating eye positions and eye movement. Eye tracking systems have been used in research on the visual system, in psychology, psycholinguistics, marketing, and as input devices for human-computer interaction. In the latter application, typically the intersection of a person’s point of gaze with a desktop monitor is considered.
  • Biometric authentication technology uses one or more features of a person to identify that person, for example for secure, authenticated access to devices, systems, or rooms.
  • During enrollment, one or more images are captured of the features being tracked (e.g., images of a person’s iris(es)), and the images are processed to generate a set or vector of metrics that are unique to, and thus uniquely identify, that person.
  • To authenticate the person, images of the person’s features are again captured and processed using a similar algorithm to the one used during registration.
  • The extracted metrics are compared to the baseline metrics and, if the match is sufficiently good, the person is allowed access.
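The enrollment/verification comparison described above can be illustrated with a minimal sketch. The metric vector, the cosine-similarity measure, and the threshold value below are illustrative assumptions, not the specific matching algorithm of the disclosure:

```python
import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Similarity of two biometric metric vectors (1.0 means identical direction)."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def verify(candidate_metrics: np.ndarray,
           enrolled_metrics: np.ndarray,
           threshold: float = 0.95) -> bool:
    """Allow access only if the newly extracted metrics match the enrolled baseline
    closely enough (a 'sufficiently good' match)."""
    return cosine_similarity(candidate_metrics, enrolled_metrics) >= threshold
```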
  • Embodiments of imaging systems that implement flexible illumination methods are described. Embodiments may provide methods that improve the performance and robustness of an imaging system, and that make the imaging system adaptable to specific users, conditions, and setup for biometric authentication using the eyes and periorbital region, gaze tracking, and antispoofing. Embodiments of methods and apparatus for biometric authentication are described in which two or more biometric features or aspects are captured and analyzed individually or in combination to identify and authenticate a person.
  • an imaging system is used to capture images of a person’s iris, eye, periorbital region, and/or other regions of the person’s face, and two or more features from the captured images are analyzed individually or in combination to identify and authenticate the person (or to detect attempts to spoof the biometric authentication).
  • Embodiments may improve the performance of biometric authentication systems, and may help to reduce false positives and false negatives by the biometric authentication algorithms, when compared to conventional systems that rely on only one feature for biometric authentication.
  • Embodiments may be especially advantageous in imaging systems that have challenging hardware constraints (point of view, distortions, etc.) for individual biometric aspects or features (e.g., the iris), as additional biometric features (e.g., veins in the eye, portions or features of the periorbital region, or features of other parts of the face) may be used for biometric authentication if good images of one or more of the biometric features cannot be captured at a particular pose or under current conditions.
  • the biometric aspects that are used may include one or more of facial, periocular, or eye aspects.
  • one or more different features may be used to describe or characterize the aspect; the different features may, for example, include geometric features, qualitative features, and low-level, intermediate, or high-level 3D representations.
  • the biometric aspects and features may include, but are not limited to, one or more of the eye surface, eye veins, eyelids, eyebrows, skin features, and nose features, as well as features of the iris such as color(s), pattern(s), and 3D musculature.
  • feature sizes and geometric relations to other features may be included as biometric aspects.
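One way to represent these aspects and their describing features in software is sketched below; the class layout and field names are hypothetical, chosen only to mirror the geometric, qualitative, and 3D-representation features listed above:

```python
from __future__ import annotations
from dataclasses import dataclass, field
import numpy as np

@dataclass
class BiometricAspect:
    """One biometric aspect (e.g., 'iris', 'eye_veins', 'eyebrow') and the
    features used to describe or characterize it."""
    name: str
    geometric: np.ndarray                               # e.g., landmark positions or feature sizes
    qualitative: dict = field(default_factory=dict)     # e.g., {"iris_color": "green"}
    representation_3d: np.ndarray | None = None         # low/intermediate/high-level 3D encoding

@dataclass
class BiometricTemplate:
    """A user template: a set of aspects plus geometric relations between them."""
    aspects: dict[str, BiometricAspect]
    relations: dict[tuple[str, str], float] = field(default_factory=dict)  # e.g., iris-to-eyebrow distance
```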
  • a similar method may be applied in a gaze tracking process in which two or more features of the eye are imaged and processed to obtain better information for gaze tracking at different poses and in different conditions.
  • FIGS. 1A through 1D illustrate example eye camera systems, according to some embodiments.
  • FIG. 2 graphically illustrates tradeoffs between complexities in a biometric authentication system, according to some embodiments.
  • FIG. 3 is a block diagram of an imaging system that implements a flexible illumination method, according to some embodiments.
  • FIG. 4 is a flowchart of a method for providing flexible illumination in an imaging system, according to some embodiments.
  • FIGS. 5A and 5B illustrate a biometric authentication system that combines different biometric aspects, according to some embodiments.
  • FIG. 6 is a flowchart of a method for performing biometric authentication using multiple biometric aspects, according to some embodiments.
  • FIG. 7 illustrates a biometric authentication system that uses multiple cameras, according to some embodiments.
  • FIG. 8A is a flowchart of a method for biometric authentication using multiple cameras, according to some embodiments.
  • FIG. 8B is a flowchart of another method for biometric authentication using multiple cameras, according to some embodiments.
  • FIG. 9A illustrates a system that includes at least one additional optical element on the light path between the user’s eye and the eye camera, according to some embodiments.
  • FIG. 9B illustrates a system that includes a diffractive optical element on the light path between the user’s eye and the eye camera to improve the viewing angle of the camera, according to some embodiments.
  • FIG. 10 is a flowchart of a method for processing images in a system that includes at least one additional optical element on the light path between the user’s eye and the eye camera, according to some embodiments.
  • FIG. 11 is a flowchart of a method for capturing and processing images in a system that includes a diffractive optical element on the light path between the user’s eye and the eye camera to improve the viewing angle of the camera, according to some embodiments.
  • FIGS. 12A through 12C illustrate a system that includes light sources that emit light at multiple wavelengths to sequentially capture images at the multiple wavelengths, according to some embodiments.
  • FIGS. 13A and 13B illustrate a system that includes a camera with a photosensor that concurrently captures multiple images at different wavelengths, according to some embodiments.
  • FIG. 14 is a flowchart of a method for sequentially capturing and processing images at multiple wavelengths, according to some embodiments.
  • FIG. 15 is a flowchart of a method for concurrently capturing and processing images at multiple wavelengths, according to some embodiments.
  • FIG. 16 illustrates a system that provides feedback to the user and/or control signals to the imaging system to manually or mechanically adjust the viewing angle of the camera with respect to the user’s eye or periocular region, according to some embodiments.
  • FIG. 17 is a flowchart of a method for providing feedback to the user to manually adjust the viewing angle of the camera with respect to the user’s eye or periocular region, according to some embodiments.
  • FIG. 18 is a flowchart of a method for providing control signals to the imaging system to mechanically adjust the viewing angle of the camera with respect to the user’s eye or periocular region, according to some embodiments.
  • FIGS. 19A and 19B are block diagrams illustrating a device that may include components and implement methods as illustrated in FIGS. 1 through 18, according to some embodiments.
  • FIG. 20 illustrates an example head-mounted device (HMD) that may include components and implement methods as illustrated in FIGS. 1 through 18, according to some embodiments.
  • FIG. 21 is a block diagram illustrating an example system that may include components and implement methods as illustrated in FIGS. 1 through 18, according to some embodiments.
  • Reciting that a unit/circuit/component is “configured to” perform one or more tasks is expressly intended not to invoke 35 U.S.C. § 112, paragraph (f), for that unit/circuit/component.
  • “Configured to” can include generic structure (e.g., generic circuitry) that is manipulated by software or firmware (e.g., an FPGA or a general-purpose processor executing software) to operate in a manner that is capable of performing the task(s) at issue.
  • “Configured to” may also include adapting a manufacturing process (e.g., a semiconductor fabrication facility) to fabricate devices (e.g., integrated circuits) that are adapted to implement or perform one or more tasks.
  • a determination may be solely based on those factors or based, at least in part, on those factors.
  • An imaging system as described herein may include two or more illumination sources (e.g., point light sources such as light-emitting diodes (LEDs)) that illuminate an object to be imaged (e.g., a person’s eye or eye region), and at least one camera configured to capture images of light from the illumination sources reflected by the object when illuminated.
  • Embodiments of the imaging system may, for example, be used for biometric authentication, for example using features of the user’s eyes such as the iris, the eye region (referred to as the periocular region), or other parts of the user’s face such as the eyebrows.
  • a biometric authentication system uses one or more of the features to identify a person, for example for secure, authenticated access to devices, systems, or rooms.
  • During enrollment, one or more images are captured of the features being tracked (e.g., images of a person’s iris(es), periocular region, etc.), and the images are processed to generate a set or vector of metrics that are unique to, and thus uniquely identify, that person.
  • To authenticate the person, images of the person’s features are again captured and processed using a similar algorithm to the one used during registration.
  • The extracted metrics are compared to the baseline metrics and, if the match is sufficiently good, the person may be allowed access.
  • a gaze tracking system may, for example, be used to compute gaze direction and a visual axis using glints and eye features based on a three-dimensional (3D) geometric model of the eye.
  • Embodiments of the imaging system described herein may, for example, be used in a biometric authentic process, a gaze tracking process, or both.
  • Another example is in anti-spoofing, which is related to biometric authentication in that “spoofing” refers to attempts to trick a biometric authentication system by, for example, presenting a picture or model of a valid user’s eye, eye region, or face.
  • embodiments of the imaging system may be implemented in any application or system in which images of an object illuminated by a light source are captured by one or more cameras for processing.
  • A non-limiting example application of the methods and apparatus for flexible illumination in imaging systems is in systems that include at least one eye camera (e.g., an infrared (IR) camera) positioned at each side of a user’s face, and an illumination source (e.g., point light sources such as an array or ring of IR light-emitting diodes (LEDs)) that emits light towards the user’s eyes.
  • the imaging system may, for example, be a component of a head-mounted device (HMD), for example a HMD of an extended reality (XR) system such as a mixed or augmented reality (MR) system or virtual reality (VR) system.
  • The HMD may, for example, be implemented as a pair of glasses, goggles, or a helmet.
  • Other devices that may include the imaging system include mobile devices such as smartphones, pad or tablet devices, desktop computers, and notebook computers, as well as stand-alone biometric authentication systems mounted on walls or otherwise located in rooms or on buildings.
  • the imaging system may be used for biometric authentication, gaze tracking, or both.
  • FIGS. 1A through 1D illustrate example imaging systems, according to some embodiments.
  • the imaging system may include, but is not limited to, one or more cameras 140, an illumination source 130, and a controller 160.
  • FIG. 1A shows an imaging system in which the eye camera 140 images the eye 192 directly. However, in some embodiments the eye camera 140 may instead image a reflection of the eye 192 off of a hot mirror 150 as shown in FIG. 1B. In addition, in some embodiments, the eye camera 140 may image the eye through a lens 120 of an imaging system, for example as shown in FIG. 1C.
  • A device (e.g., a head-mounted device (HMD)) may include an imaging system with at least one eye camera 140 (e.g., an infrared (IR) camera) positioned on one side or at each side of the user’s face, and an illumination source 130 (e.g., point light sources such as an array or ring of IR light-emitting diodes (LEDs)) that emits light towards the user’s eye(s) 192 or periorbital region.
  • FIG. ID shows an example illumination source 130 that includes multiple LEDs 132.
  • the LEDs 132 may be configured to emit light in the IR (including SWIR or NIR) range, for example at 740, 750, 840, 850, 940, or 950 nanometers.
  • the eye camera 140 may be pointed towards the eye 192 to receive light from the illumination source 130 reflected from the eye 192, as shown in FIG. 1A.
  • the eye camera 140 may instead image a reflection of the eye 192 off of a hot mirror 150 as shown in FIG. 1B.
  • the eye camera 140 may image the eye 192 through a lens 120 or other optical element of the device, for example as shown in FIG. 1C.
  • the device that includes the imaging system may include a controller 160 comprising one or more processors and memory.
  • Controller 160 may include one or more of various types of processors, image signal processors (ISPs), graphics processing units (GPUs), coder/decoders (codecs), and/or other components for processing and rendering video and/or images.
  • the controller 160 may be integrated in the device.
  • at least some of the functionality of the controller 160 may be implemented by an external device coupled to the device by a wired or wireless connection. While not shown in FIGS. 1A through 1C, in some embodiments controller 160 may be coupled to an external memory for storing and reading data and/or software.
  • the controller 160 may send control signals to the illumination source 130 and camera 140 to control the illumination of the eye 192 and capture of images of the eye 192.
  • the controller 160 may use input 142 (e.g., captured images of the eyes 192) from the eye cameras 140 for various purposes, for example for biometric authentication or gaze tracking.
  • the controller 160 may implement algorithms that estimate the user’s gaze direction based on the input 142.
  • the controller 160 may implement algorithms that process images captured by the cameras 140 to identify features of the eye 192 (e.g., the pupil, iris, and sclera) or periorbital region to be used in biometric authentication algorithms.
  • the controller 160 may implement gaze tracking algorithms that process images captured by the cameras 140 to identify glints (reflections of the LEDs 130) obtained from the eye cameras 140.
  • the information obtained from the input 142 may, for example, be used to determine the direction in which the user is currently looking (the gaze direction), and may be used to construct or adjust a 3D model of the eye 192.
  • Components of the device (e.g., lenses) may result in unwanted reflections and stray light in the final image captured by camera 140.
  • The more complex the imaging system becomes, for example with optical surfaces (e.g., lenses 120 and/or mirrors 150) involved in the trajectory between the point light sources 130 and the camera 140, the higher the likelihood of unwanted reflections and stray light in the final image captured by camera 140, for example caused by reflections in lenses, imperfections in lenses or optical surfaces, or dust on optical surfaces.
  • The position of the device and imaging system with respect to the user’s head may shift during use.
  • Other aspects of the device and imaging system may change. For example, a surface of a lens in the device may become smudged, or the user may add or change something such as clip-on lenses to the device.
  • quality of the images captured with the imaging system may vary depending on the current lighting conditions, position of the device and imaging system with respect to the user’s head, and other factors such as smudges or other changes to the device.
  • the quality of the captured images may affect the efficiency and accuracy of algorithms used in various applications including but not limited to biometric authentication, anti-spoofing, and gaze tracking.
  • Embodiments of the methods and apparatus for flexible illumination in imaging systems as described herein may improve the performance and robustness of an imaging system, and may help to adapt the imaging system to specific users, conditions, and setup for applications including but not limited to biometric authentication, anti-spoofing, and gaze tracking.
  • FIG. 2 graphically illustrates tradeoffs between complexities in a biometric authentication system, according to some embodiments.
  • Embodiments of an imaging system used for biometric authentication as described herein may trade off system complexity 210 for complexity in the enrollment 200 process.
  • a more complex system 210 may reduce the complexity of the enrollment process for the user, for example by automating processes such as shifting the camera to get a better view of the eye rather than having the user move the device manually.
  • the enrollment 200 process could be made more complex to reduce system complexity 210.
  • biometric authentication may be improved by increasing the number of aspects 220 of the user’s eyes and periorbital region that are used in the identification process at the expense of system complexity 210 and possibly enrollment complexity 200. Similar tradeoffs may apply in other applications such as gaze tracking.
  • Embodiments of imaging systems that implement a flexible illumination method are described. Embodiments may provide methods that improve the performance and robustness of an imaging system, and that make the imaging system adaptable to specific users, conditions, and setup for biometric authentication using the eyes and periorbital region, gaze tracking, and antispoofing. While conventional eye tracking systems focus on specular reflections or glints for gaze tracking, embodiments may focus on other aspects such as providing uniform, good contrast on the iris or other regions of interest, reducing or eliminating shadows on regions of interest, and other improvements for biometric authentication applications. In embodiments, two or more different lighting configurations for the imaging system in a device are pre-generated.
  • Each lighting configuration may specify one or more aspects of lighting including, but not limited to, which LEDs or group of LEDs to enable or disable, intensity/brightness, wavelength, shapes and sizes of the lights, direction, sequences of lights, etc.
  • One or more lighting configurations may be generated for each of two or more poses, where a pose is a 3D geometrical relationship between the eye camera and the user’s current eye position and gaze direction.
  • a lookup table may be generated via which each pose is associated with its respective lighting configuration(s).
  • The lookup table and lighting configurations may, for example, be stored to memory of the device and/or to memory accessible to the device via a wired or wireless connection.
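A minimal sketch of such a lookup table follows. The pose representation (quantized gaze yaw/pitch buckets) and the configuration names are assumptions made only for illustration:

```python
# Quantize a pose (here reduced to gaze yaw/pitch in degrees) into a lookup key.
def pose_key(gaze_yaw_deg: float, gaze_pitch_deg: float, bucket_deg: float = 5.0) -> tuple:
    return (round(gaze_yaw_deg / bucket_deg), round(gaze_pitch_deg / bucket_deg))

# Pre-generated table mapping each pose bucket to its lighting configuration(s), best first.
lighting_lut = {
    pose_key(0, 0):   ["cfg_frontal_uniform", "cfg_frontal_low_power"],
    pose_key(20, 0):  ["cfg_nasal_leds_only"],
    pose_key(0, -15): ["cfg_lower_ring_bright"],
}

def configs_for_pose(gaze_yaw_deg: float, gaze_pitch_deg: float,
                     default: str = "cfg_frontal_uniform") -> list:
    """Return the lighting configurations associated with the user's current pose."""
    return lighting_lut.get(pose_key(gaze_yaw_deg, gaze_pitch_deg), [default])
```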
  • the lighting configurations may be pre-generated synthetically for a device and imaging system, for example using a 3D geometric model or representation of the device and imaging system to generate lighting configurations for a set of estimated poses.
  • the lighting configurations may be pre-generated using a data set of images of real-world user faces to obtain pose information.
  • the lighting configurations may be generated during an initialization process for a particular user. For example, in some embodiments, the user puts on or holds the device and moves their gaze around, and the system/controller runs through a process during which images are captured and processed with different light settings to determine optimal lighting configurations for this user when capturing images of the desired features at two or more different poses.
  • the user may put on, hold, or otherwise use the device.
  • a biometric authentication process may be initiated in which different lighting configurations may be selected by the controller to capture optimal images of the desired features of the user’s eye (e.g., iris, periorbital region, etc.) at different poses and in different conditions for use by the biometric authentication algorithms executed by the controller.
  • the device may initiate a biometric authentication process when the user accesses the device.
  • the device’s controller may begin the biometric authentication process with a default initial lighting configuration.
  • One or more images may be captured by the imaging system using the respective setting for the illumination source, and the captured image(s) may be checked for quality. If the images are satisfactory for the algorithms that process the images to perform biometric authentication using one or more features of the user’s eye, periorbital region, and/or other facial features, then the flexible illumination process may be done. Otherwise, the controller may select another lighting configuration, direct the illumination source to illuminate the subject according to the new lighting configuration, and direct the camera to capture one or more images that are checked for quality.
  • the user’s current pose may be determined by the imaging system and controller, for example using a gaze tracking algorithm, and the user’s current pose may be used to select an initial lighting configuration and, if necessary, one or more subsequent lighting configurations for the biometric authentication process.
  • a similar method may be applied in a gaze tracking process in which different lighting configurations are selected by the controller to obtain better images of the desired features of the user’s eyes (e.g., glints) at different poses and in different conditions.
  • Embodiments of the flexible illumination method may improve the performance and robustness of an imaging system, and may help to adapt the imaging system to specific users, conditions, and setup for applications including but not limited to biometric authentication, anti-spoofing, and gaze tracking.
  • Embodiments may capture and process images of the eye or periorbital region using one or more different lighting configurations until a lighting configuration is found that provides optimal (or at least good enough) images to perform a particular function (e.g., biometric authentication, gaze tracking, etc.), thus improving the performance and robustness of the device, system, and/or algorithm that uses image(s) of the eye or periorbital region in performing the function (e.g., biometric authentication, gaze tracking, etc.).
  • embodiments of the flexible illumination method may help to make an imaging system adaptable to one or more of, but not limited to:
  • Embodiments of the flexible illumination method may, for example, be implemented in any of the illumination systems as illustrated in FIGS. 1A through 1D.
  • FIGS. 19A through 21 illustrate example devices and systems that may include imaging systems that implement embodiments of the flexible illumination method.
  • An imaging system that implements the flexible illumination method may include, but is not limited to:
  • At least one eye camera (e.g., an infrared (IR) or near-infrared (NIR) camera, an RGB or RGB-D camera, etc.).
  • An illumination source that includes multiple light-emitting elements which can be controlled individually or in groups (e.g., IR or NIR LEDs, or LEDs at other wavelengths).
  • a controller of the device that includes the imaging system may control one or more of, but not limited to, the following based on a current lighting configuration:
  • the light-emitting elements, or groups of the light-emitting elements may differ in one or more of, but not limited to, the following:
  • individual light-emitting elements or groups of light-emitting elements may include additional optical elements, for example lenses, grids, etc., that affect light emitted by the elements or groups of light-emitting elements.
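As a concrete (hypothetical) data structure, a lighting configuration might record which LEDs or groups to enable, their drive level, wavelength, and an optional sequence, roughly as sketched below; the field names and example values are assumptions for illustration:

```python
from dataclasses import dataclass, field

@dataclass
class LightingConfiguration:
    """One pre-generated lighting configuration (names and fields are illustrative)."""
    name: str
    enabled_leds: list                       # indices of LEDs (or LED groups) to turn on
    intensity: float = 1.0                   # normalized drive level / brightness
    wavelength_nm: int = 850                 # e.g., 740/750/840/850/940/950
    sequence: list = field(default_factory=list)  # optional ordered sub-patterns (lists of LED indices)

cfg_frontal_uniform = LightingConfiguration(
    name="cfg_frontal_uniform",
    enabled_leds=list(range(8)),             # whole ring
    intensity=0.8,
)
cfg_nasal_leds_only = LightingConfiguration(
    name="cfg_nasal_leds_only",
    enabled_leds=[0, 1, 2],                  # only the LEDs nearest the nose
    intensity=1.0,
    wavelength_nm=940,
)
```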
  • One or more images of a user’s eye or periorbital region may be captured using a first lighting configuration. Additional images may be captured using at least one additional lighting configuration.
  • One or more objective criteria (e.g., contrast, shadows, edges, undesirable streaks, etc.) may be used to evaluate the captured images.
  • one of the lighting configurations that corresponds to one or more image(s) that best satisfies the objective criteria for this user may be selected.
  • a change in the conditions under which the lighting configuration was selected is detected (e.g., some change in the user’s position or appearance, a change in ambient lighting, a change to the device that includes the imaging system, etc.), then the method for selecting a lighting configuration may be repeated.
  • the objective criteria used in selecting lighting configurations may differ based on the particular application. For example, in a biometric authentication process that uses the iris to authenticate users, the algorithm may need images of the iris with uniform, good contrast, no shadows, etc. In a gaze tracking process, the algorithm may need images that include specular reflections or glints in certain locations and/or of certain sizes and number.
  • The objective criteria used in selecting lighting configurations may also differ based on the environment (e.g., internal vs. external ambient conditions). In some embodiments, the objective criteria used in selecting lighting configurations may differ based on varying gaze poses or adjustments to a user’s face, for example eye relief (depth) and interpupillary distance (IPD).
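A rough sketch of such objective criteria is shown below; the metrics (contrast, gradient-based sharpness, shadow and clipping fractions) and the thresholds are illustrative stand-ins for whatever measures a particular biometric or gaze-tracking algorithm actually requires:

```python
import numpy as np

def image_quality(img: np.ndarray, roi: tuple = None) -> dict:
    """Simple objective quality measures over a grayscale image (pixel values 0..255).
    Thresholds and weights would be tuned per application (e.g., iris imaging vs. glint tracking)."""
    patch = (img if roi is None else img[roi]).astype(np.float32)
    contrast = float(patch.std())                     # spread of intensities
    gy, gx = np.gradient(patch)                       # cheap edge/sharpness proxy
    sharpness = float(np.mean(np.hypot(gx, gy)))
    shadow_frac = float(np.mean(patch < 16))          # fraction of near-black pixels (shadows)
    clipped_frac = float(np.mean(patch > 240))        # fraction of saturated pixels (glare/streaks)
    return {"contrast": contrast, "sharpness": sharpness,
            "shadow_frac": shadow_frac, "clipped_frac": clipped_frac}

def satisfies_iris_criteria(q: dict) -> bool:
    """Example acceptance test for an iris-authentication pipeline (threshold values illustrative)."""
    return (q["contrast"] > 25 and q["sharpness"] > 4
            and q["shadow_frac"] < 0.05 and q["clipped_frac"] < 0.02)
```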
  • FIG. 3 is a block diagram of an imaging system that implements a flexible illumination method, according to some embodiments.
  • Two or more lighting configurations 372 may be generated in a configuration generation 310 process.
  • the lighting configurations may be pre-generated synthetically for a device and imaging system, for example using a 3D geometric model or representation of the device and imaging system to generate lighting configurations for a set of estimated poses.
  • the lighting configurations may be pre-generated using a data set of images of real-world user faces to obtain pose information.
  • the lighting configurations may be generated during an initialization process for a particular user.
  • the user puts on or holds the device and moves their gaze around, and the system/controller runs through a process during which images are captured and processed with different light settings to determine optimal lighting configurations for this user when capturing images of the desired features at two or more different poses.
  • the pre-generated lighting configurations 372 may be stored 320 to memory 370 accessible to controller 360.
  • a lookup table 374 may be generated and stored to memory 370 that, for example, maps particular poses to particular lighting configurations.
  • a user may put on, hold, or otherwise use a device that includes the controller 360, illumination source 330, and eye camera(s) 340.
  • a biometric authentication process may be initiated in which different lighting configurations 372 may be selected by the controller 360 to capture optimal images of the desired features of the user’s eye (e.g., iris, periorbital region, etc.) at different poses and in different conditions for use by the biometric authentication algorithms executed by the controller 360.
  • the device may initiate a biometric authentication process when the user accesses the device.
  • the device’s controller 360 may begin a biometric authentication process by directing 344 the illumination source 330 to use a default initial lighting configuration 372.
  • One or more images may be captured 342 by the eye camera(s) 340 using the respective lighting provided by the illumination source 330, and the captured image(s) may be checked for quality according to one or more objective criteria or measures as previously described. If the images are satisfactory for the biometric authentication algorithms that rely on one or more features of the user’s eye, periorbital region, and/or other facial features captured in the images, then the flexible illumination process may be done.
  • the controller 360 may select another lighting configuration 372, direct the illumination source 330 to illuminate the subject according to the new lighting configuration 372, and direct the camera to capture 342 one or more images with the new lighting configuration 372 that are checked for quality according to one or more objective criteria. This process may be repeated until a successful authentication has been achieved, or for a specified number of attempts until the authentication attempt is considered failed.
  • the user’s current pose may be determined by the imaging system and controller 360, for example using a gaze tracking algorithm, and the user’s current pose may be used to select an initial lighting configuration 372 and, if necessary, one or more subsequent lighting configurations 372 for the biometric authentication process.
  • a similar method may be applied in a gaze tracking process in which different lighting configurations 372 are selected by the controller 360 to obtain better images of the desired features of the user’s eyes (e.g., glints) at different poses and in different conditions using one or more objective criteria.
  • FIG. 4 is a flowchart of a method for providing flexible illumination in an imaging system, according to some embodiments.
  • two or more lighting configurations may be generated and stored to a memory.
  • a lookup table that maps poses to lighting configurations may also be generated and stored.
  • an initial lighting configuration may be selected.
  • one or more images may be captured with the current lighting configuration and analyzed according to one or more objective criteria.
  • another lighting configuration may be selected as indicated at 440, and the method returns to element 420 to capture and check additional images.
  • the images may be processed by the algorithm as indicated at 450.
  • the method returns to element 420. Otherwise, the method is done.
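Putting the pieces together, the flowchart of FIG. 4 might reduce to a loop like the following sketch. The `camera`, `illumination`, and `run_biometric_match` interfaces are assumed, and `image_quality`/`satisfies_iris_criteria` refer to the hypothetical quality helpers sketched earlier:

```python
def authenticate_with_flexible_illumination(camera, illumination, configs, max_attempts=4):
    """Capture-and-check loop corresponding to FIG. 4 (element numbers in comments)."""
    for cfg in configs[:max_attempts]:
        illumination.apply(cfg)                          # use the current lighting configuration
        img = camera.capture()                           # 420: capture image(s) with this lighting
        if satisfies_iris_criteria(image_quality(img)):  # 420: analyze against objective criteria
            return run_biometric_match(img)              # 450: hand satisfactory image(s) to the algorithm
        # otherwise fall through: 440, select another lighting configuration and retry
    return False                                         # attempts exhausted; authentication attempt fails
```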
  • Embodiments of methods and apparatus for biometric authentication are described in which two or more biometric features or aspects are captured and analyzed individually or in combination to identify and authenticate a person.
  • Conventionally, biometric authentication has been performed using a single biometric feature. For example, an image of a person’s iris is captured and compared to a baseline image of the user’s iris to identify and authenticate the person.
  • In embodiments, an imaging system, for example as illustrated in FIGS. 1A through 1D, is used to capture images of a person’s iris, eye, periorbital region, and/or other regions of the person’s face, and two or more features from the captured images are analyzed individually or in combination to identify and authenticate the person.
  • Embodiments may improve the performance of biometric authentication systems, and may help to reduce false positives and false negatives by the biometric authentication algorithms, when compared to conventional systems that rely on only one feature for biometric authentication.
  • Embodiments may be especially advantageous in imaging systems that have challenging hardware constraints (point of view, distortions, etc.) for individual biometric aspects or features (e.g., the iris), as additional biometric features (e.g., veins in the eye, portions or features of the periorbital region, or features of other parts of the face) may be used for biometric authentication if good images of one or more of the biometric features cannot be captured at a particular pose or under current conditions.
  • the biometric aspects that are used may include one or more of facial, periocular, or eye aspects.
  • one or more different features may be used to describe or characterize the aspect; the different features may, for example, include geometric features, qualitative features, and low-level, intermediate, or high-level 3D representations.
  • the biometric aspects and features may include, but are not limited to, one or more of the eye surface, eye veins, eyelids, eyebrows, skin features, and nose features, as well as features of the iris such as color(s), pattern(s), and 3D musculature.
  • feature sizes and geometric relationships to other features may be included as biometric aspects.
  • FIGS. 5A and 5B illustrate a biometric authentication system that combines different biometric aspects, according to some embodiments.
  • FIG. 5A illustrates an example imaging system that combines different biometric aspects, according to some embodiments.
  • the imaging system may include, but is not limited to, one or more cameras 540, an illumination source 530, and a controller 560.
  • the eye camera 540 is pointed towards the eye 592, periorbital region 580, and portions of the face 582 to receive reflected light from the illumination source 530.
  • the eye camera 540 may image a reflection off a hot mirror as shown in FIG. 1B.
  • the eye camera 540 may image the user’s facial region including the eye 592 through one or more intermediate optical elements as shown in FIG. 1C.
  • the eye camera(s) 540 may capture 542 individual images of, or images that include, two or more biometric aspects of the eye 592, periorbital region 580, and portions of the face 582.
  • the captured image(s) may be processed by controller 560 to analyze the quality of two or more of the biometric aspects captured in the image(s).
  • the controller 560 may select a best biometric aspect or feature from the images to be used for biometric authentication, or may select two or more of the biometric aspects or features to be used in combination for biometric authentication.
  • FIG. 5B is an illustration of the iris 594 and pupil 596 of the eye.
  • features of the iris 594 such as color(s), pattern(s), and a 3D reconstruction of muscle patterns in the iris 594 based on two or more images may be used as biometric aspects or features.
  • An iris 594 feature may be used alone, in combination with one or more iris 594 features, or in combination with one or more other features of the eye 592, periorbital region 580, or face 582 to perform biometric authorization.
  • FIG. 6 is a flowchart of a method for performing biometric authentication using multiple biometric aspects, according to some embodiments.
  • One or more images of the user’s eye region (e.g., iris 594, eye 592, periorbital region 580, and/or face 582) may be captured.
  • the images may be processed to extract two or more biometric aspects of the user’s iris 594, eye 592, periorbital region 580, and/or face 582.
  • one or more of the biometric aspects may be selected for authentication.
  • biometric authentication may then be performed using the selected biometric aspect(s).
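A hedged sketch of the aspect-selection and authentication steps might look as follows; the quality gating, cosine similarity, and weighted score fusion are illustrative choices rather than the specific algorithm of the disclosure:

```python
import numpy as np

def authenticate_multi_aspect(extracted: dict, enrolled: dict, quality: dict,
                              q_min: float = 0.5, score_threshold: float = 0.9) -> bool:
    """extracted/enrolled map aspect name -> feature vector; quality maps aspect name -> 0..1.
    Aspects captured with usable quality are compared to the enrolled template and their
    similarity scores are fused with quality-proportional weights (all thresholds illustrative)."""
    scores, weights = [], []
    for name, feat in extracted.items():
        if name in enrolled and quality.get(name, 0.0) >= q_min:
            a = np.asarray(feat, dtype=float)
            b = np.asarray(enrolled[name], dtype=float)
            scores.append(float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b))))
            weights.append(quality[name])
    if not scores:
        return False                      # no biometric aspect was captured well enough
    return float(np.average(scores, weights=weights)) >= score_threshold
```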
  • Embodiments of methods and apparatus for biometric authentication are described in which two or more cameras are used to capture images of biometric features or aspects for analysis to identify and authenticate a person.
  • Conventionally, biometric authentication has been performed using a single camera to capture images of biometric features. For example, an image of a person’s iris is captured by a single eye camera and compared to a baseline image of the user’s iris to identify and authenticate the person.
  • In embodiments, an imaging system, for example as illustrated in FIGS. 1A through 1D, includes at least two cameras that are used to capture images of a person’s iris, eye, periorbital region, and/or other regions of the person’s face, and one or more features from the captured images are analyzed to identify and authenticate the person (or to detect attempts to spoof the biometric authentication).
  • Embodiments may, for example, be used to capture images of the user’s iris using two or more eye cameras for biometric authentication.
  • Two or more cameras may be used to capture biometric aspects or features of the eye, periorbital region, or user’s face including but not limited to the eye surface, eye veins, eyelids, eyebrows, skin, or nose, and the biometrics may be used alone or in combination to perform biometric authentication.
  • feature sizes and geometric relations to other features may be included as biometric aspects.
  • Embodiments of biometric systems or algorithms may use images from at least one of the two or more cameras (two or more per eye, in some systems) that capture images from different viewpoints of the user’s eye, periorbital region, or face to perform biometric authentication.
  • a single camera is pointed directly at the eye region.
  • the optical path to the target region may be more complex, with other elements such as lenses or hot mirrors on or near the optical path, and thus the visibility of target aspects or features may be impaired, and the quality of the captured images may be less than optimal for the biometric authentication algorithms.
  • Adding at least one additional camera per eye may, for example, allow the imaging system to capture images of the eye region from different angles, and allow for switching to a more favorable point of view (pose as location and orientation), and in some embodiments may allow for two or more images captured by two or more cameras to be combined for use in biometric authentication.
  • An algorithm executing on a controller coupled to the two or more cameras may dynamically determine which image(s) captured by the two or more cameras are to be used for biometric authentication, for example using one or more objective criteria to evaluate the quality of the captured images.
  • the objective criteria may include one or more of, but are not limited to, exposure, contrast, shadows, edges, undesirable streaks, occluding objects, sharpness, uniformity of illumination, absence of undesired reflections, etc.
  • properties of the region being captured by a camera may be evaluated to determine quality, for example an overlap of a part of the eye by an eyelid may obscure at least part of a feature in an image captured by one camera while the feature is more visible in an image captured by a second camera.
  • An algorithm executing on a controller coupled to the two or more cameras may combine information from two or more images of an eye, the periorbital region, or portions of the face captured by at least two cameras to process aspects and features extracted from the combined images.
  • the combination of information from two or more images may be performed at different stages of processing. For example, in some embodiments, two or more images may be combined early in processing to enhance the image quality of the resulting combined image from which aspects or features are extracted and evaluated. As another example, two or more images may be processed to extract aspects, features or other information in an intermediate stage; the extracted information may then be processed in combination to determine a biometric authentication score. As yet another example, the information extracted from two or more images may be processed separately, and then combined in the computation of a final similarity/matching score.
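The early (pixel-level) and late (score-level) ends of that spectrum can be sketched as follows; the plain averaging used here is only a placeholder for whatever registration, blending, and score fusion a real pipeline would perform:

```python
import numpy as np

def early_fusion(images: list) -> np.ndarray:
    """Pixel-level combination of (already registered) images from two or more cameras;
    a simple average stands in for a real alignment-and-blend step."""
    stack = np.stack([np.asarray(img, dtype=np.float32) for img in images])
    return stack.mean(axis=0)

def late_fusion(per_camera_scores: list, weights: list = None) -> float:
    """Score-level combination: each camera's image is matched separately and the resulting
    similarity/matching scores are merged into a single authentication score."""
    return float(np.average(per_camera_scores, weights=weights))
```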
  • FIG. 7 illustrates a biometric authentication system that uses multiple cameras, according to some embodiments.
  • An imaging system may include, but is not limited to, two or more cameras 740, an illumination source 730, and a controller 760.
  • the eye cameras 740 are each pointed towards the eye 792, periorbital region 780, and/or portions of the face 782 to receive reflected light from the illumination source 730.
  • Each camera 740 has a different perspective or viewing angle. Also note that, while not shown, each camera 740 may center on or capture a different feature, aspect, or region of the user’s face or eye 792.
  • at least one eye camera 740 may image a reflection off a hot mirror as shown in FIG. 1B.
  • At least one eye camera 740 may image the user’s facial region including the eye 792 through one or more intermediate optical elements as shown in FIG. 1C.
  • Each eye camera 740 may capture 742 images of, or images that include, one or more biometric aspects of the eye 792, periorbital region 780, and portions of the face 782.
  • the images captured by the two or more cameras 740 may be processed by controller 760 to analyze the quality of the image(s).
  • the controller 760 may select one or more of the images to be used for biometric authentication, or may select two or more of the biometric aspects or features from one or more of the images to be used in combination for biometric authentication.
  • FIG. 8A is a flowchart of a method for biometric authentication using multiple cameras, according to some embodiments.
  • two or more images of the user’s eye, periorbital region, or portions of the user’s face are captured by two or more cameras.
  • the captured images are analyzed using one or more objective criteria to determine a best image to use for biometric authentication.
  • biometric authentication is performed using the selected image.
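A minimal sketch of the best-image selection in FIG. 8A follows, reusing the hypothetical `image_quality` scorer from earlier; the weighting of the criteria is illustrative:

```python
def best_camera_image(images):
    """Select the single captured image (one per camera) that best satisfies the objective criteria."""
    def score(img):
        q = image_quality(img)
        return q["contrast"] + q["sharpness"] - 100.0 * q["shadow_frac"]  # weights illustrative
    return max(images, key=score)
```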
  • FIG. 8B is a flowchart of another method for biometric authentication using multiple cameras, according to some embodiments.
  • two or more images of the user’s eye, periorbital region, or portions of the user’s face are captured by two or more cameras.
  • information from two or more of the images is merged or combined.
  • biometric authentication is performed using the merged image information.
  • the merging of information from two or more images may be performed at different stages of processing. For example, in some embodiments, two or more images may be combined early in processing to enhance the image quality of the resulting combined image from which aspects or features are to be extracted and evaluated. As another example, two or more images may be processed to extract aspects, features or other information in an intermediate stage; the extracted information may then be processed in combination to determine a biometric authentication score. As yet another example, the information extracted from two or more images may be processed separately, and then combined in the computation of a biometric authentication score.
  • Biometric imaging systems including additional optical elements
  • Embodiments of methods and apparatus for biometric authentication are described in which one or more additional optical elements are on the optical path from the illumination system, to the eye or eye region, and then to the eye camera.
  • one or more optical elements such as a lens 120 as shown in FIG. 1C may be on the optical path between the eye 192 and the camera 140.
  • the optical element may have optical properties; in some embodiments the optical properties may be particular to a user, such as diopter.
  • a user may add an extra optical element, such as a prescription clip-on lens, to the device’s optical system.
  • the intervening optical element(s) necessarily affect light that passes through the element(s) to the camera.
  • information about the optical properties of the intervening optical element(s) may be obtained and stored, and the controller may adjust images captured by the camera(s) according to the information to improve image quality for use in biometric authentication.
  • one or more optical elements such as lenses, prisms or waveguides may be located on the optical path of the eye camera, for example in front of the camera and between the camera and the eye/eye region.
  • an eye camera may view the eye or eye region from a non-optimal angle due to the physical configuration and limitations of the device the imaging system is implemented in.
  • An image plane formed at the camera at the non-optimal angle may affect the quality of the captured images, for example by reducing pixel density.
  • An optical element such as a lens, prism or waveguide on the optical path between the eye/eye region and the eye camera may, for example, be used to “bend” the light rays coming from the eye/eye region, and thus tilt the image plane, to obtain better pixel density at the eye camera.
  • the intervening optical element may compensate for perspective distortion caused by the camera’s position.
  • the intervening optical element may thus increase or improve the image space properties of the imaging system.
  • FIG. 9A illustrates a system that includes at least one additional optical element on the light path between the user’s eye and the eye camera, according to some embodiments.
  • An imaging system may include, but is not limited to, one or more cameras 940, an illumination source 930, and a controller 960.
  • the eye camera 940 is pointed towards the eye 992; note, however, that an eye camera 940 may also or instead capture images of the periorbital region or portions of the face to receive reflected light from the illumination source 930.
  • the eye camera 940 may image a reflection off a hot mirror as shown in FIG. 1B.
  • the eye camera 940 may image the user’s facial region including the eye 992 through one or more intermediate optical elements 920A and 920B.
  • Element 920A represents a lens that is a component of an optical system implemented in the device, and may, but does not necessarily, have optical properties particular to a user.
  • Element 920B represents an optional optical element, such as a clip-on lens, that has been added to an optical system implemented in the device, and may, but does not necessarily, have optical properties particular to a user.
  • the eye camera(s) 940 may capture 942 individual images of, or images that include, two or more biometric aspects of the eye 992, periorbital region 980, and portions of the face 982.
  • the optical path from the eye region to the eye camera(s) 940 passes through the intervening optical element 920A and/or optical element 920B.
  • the intervening optical elements 920A and/or 920B necessarily affect light that passes through the element(s) to the camera 940.
  • information about the optical properties of the intervening optical element(s) may be obtained and stored to memory 970, and the controller 960 may adjust images captured by the camera(s) 940 according to the information to improve image quality for use in biometric authentication.
  • the captured image(s) may be further processed by controller 960 to analyze the quality of one or more of the biometric aspects captured in the image(s).
  • the image(s) or biometric aspect(s) or features(s) extracted from the image(s) may then be used in a biometric authentication process.
  • FIG. 9B illustrates a system that includes a diffractive optical element on the light path between the user’s eye and the eye camera to improve the viewing angle of the camera, according to some embodiments.
  • An imaging system may include, but is not limited to, one or more cameras 940, an illumination source 930, and a controller 960.
  • the eye camera 940 is pointed towards the eye 992; note, however, that an eye camera 940 may also or instead capture images of the periorbital region or portions of the face to receive reflected light from the illumination source 930.
  • the eye camera 940 may image a reflection off a hot mirror as shown in FIG. 1B.
  • the eye camera 940 may, but does not necessarily, image the user’s facial region including the eye 992 through one or more intermediate optical elements 920.
  • the eye camera(s) 940 may capture 942 individual images of, or images that include, two or more biometric aspects of the eye 992, periorbital region 980, and portions of the face 982.
  • One or more optical elements 924 such as lenses, prisms or waveguides may be located on the optical path of the eye camera 940, for example in front of the camera 940 and between the camera 940 and the eye 992.
  • an eye camera 940 may view the eye 992 or eye region from a non-optimal angle due to the physical configuration and limitations of the device the imaging system is implemented in.
  • An image plane formed at the camera 940 at the non-optimal angle may affect the quality of the captured images, for example by reducing pixel density.
  • An optical element 924 such as a lens, prism or waveguide on the optical path between the eye 992 and the eye camera 940 may, for example, be used to “bend” the light rays coming from the eye 992, and thus tilt the image plane, to obtain better pixel density at the eye camera 940.
  • the intervening optical element 924 may compensate for perspective distortion caused by the camera 940’s position.
  • the intervening optical element 924 may thus increase or improve the image space properties of the imaging system.
  • FIG. 10 is a flowchart of a method for processing images in a system that includes at least one additional optical element on the light path between the user’s eye and the eye camera, according to some embodiments. As indicated at 1000, properties of one or more additional optical elements on the optical path between the eye camera and the eye or eye region may be obtained and stored as optical element descriptions to memory. As indicated at 1010, one or more images of the eye or eye region may be captured with the eye camera.
  • the captured images may be processed by the controller; the optical element description(s) may be applied to the images to adjust the image processing according to the optical properties of the additional optical element(s).
  • the method ends. Otherwise the method returns to element 1010.
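As one hedged example of applying a stored optical element description, the description could hold intrinsics and distortion coefficients that characterize the intervening lens (and any clip-on), and the controller could undistort each captured frame before feature extraction; the record layout and the use of OpenCV's `cv2.undistort` below are assumptions for illustration:

```python
import numpy as np
import cv2  # assumed available; any equivalent undistortion routine would do

def apply_optical_element_description(img: np.ndarray, desc: dict) -> np.ndarray:
    """Correct a captured image using a stored description of the intervening optics.
    `desc` is a hypothetical record holding a pinhole camera matrix and lens distortion
    coefficients measured or derived for the optical element(s) in the path."""
    K = np.asarray(desc["camera_matrix"], dtype=np.float64)          # 3x3 intrinsics
    dist = np.asarray(desc["distortion_coeffs"], dtype=np.float64)   # k1, k2, p1, p2[, k3]
    return cv2.undistort(img, K, dist)
```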
  • FIG. 11 is a flowchart of a method for capturing and processing images in a system that includes a diffractive optical element on the light path between the user’s eye and the eye camera to improve the viewing angle of the camera, according to some embodiments.
  • Light sources (e.g., LEDs) emit light towards the subject’s eye or eye region.
  • a portion of the light reflected off the subject's face is diffracted towards the camera by an optical element on the optical path between the subject’s eye and the camera.
  • one or more images are captured by the camera.
  • the images are processed, for example by a biometric authentication algorithm on a controller of the device that includes the imaging system.
  • the method ends. Otherwise the method returns to element 1100.
  • Embodiments of methods and apparatus for biometric authentication and anti-spoofing are described in which two or more different wavelengths are used in the illumination system.
  • The illumination source (e.g., a ring of LEDs) may be configured to emit light at two or more different wavelengths, either continuously or selectively.
  • For example, a wavelength in the mid-800 nm range may be used for biometric authentication using the iris, while a wavelength in the mid-900 nm range may be used for anti-spoofing.
  • Antispoofing is related to biometric authentication in that “spoofing” refers to attempts to trick a biometric authentication system by, for example, presenting a picture or model of a valid user’s eye, eye region, or face as an attempt to “spoof” the biometric authentication system.
  • a method may be implemented in which a first wavelength is emitted by the illumination source for capturing an image or images for a first portion of algorithmic processing for biometric authentication, and a second wavelength is emitted by the illumination source for capturing another image or images for a second portion of algorithmic processing for biometric authentication.
  • FIGS. 12A through 12C illustrate a system that includes light sources that emit light at multiple wavelengths to sequentially capture images at the multiple wavelengths, according to some embodiments.
  • FIG. 12A shows an example illumination source 1230 that includes multiple LEDs 1232.
  • In some embodiments, light-emitting elements other than LEDs may be used.
  • some of the LEDs 1232A, represented by the shaded circles, may be configured to emit light at a first wavelength in the IR (including SWIR or NIR) range, for example at 740, 750, 840, 850, 940, or 950 nanometers.
  • the other LEDs 1232B may be configured to emit light at a different wavelength in the IR (including SWIR or NIR) range. Note that, in some embodiments, more than two wavelengths may be used. Further, in some embodiments, individual lighting elements may be configured to selectively emit light at two or more different wavelengths.
  • FIGS. 12B and 12C illustrate an example imaging system that includes light sources (e.g., LEDs) that emit light at multiple wavelengths, according to some embodiments.
  • the imaging system may include, but is not limited to, one or more cameras 1240, an illumination source 1230, and a controller 1260.
  • the eye camera 1240 is pointed towards the eye 1292 to receive reflected light from the illumination source 1230.
  • the eye camera 1240 may instead or also capture images of the periorbital region and portions of the face.
  • the eye camera 1240 may image a reflection off a hot mirror as shown in FIG. 1B.
  • the eye camera 1240 may image the eye 1292 through one or more intermediate optical elements as shown in FIG. 1C.
  • the eye camera(s) 1240 may capture 1242A individual images of the eye 1292 with LEDs 1232A illuminating the eye at a first wavelength under control 1244A of the controller 1260.
  • the eye camera(s) 1240 may capture 1242B individual images of the eye 1292 with LEDs 1232B illuminating the eye at a second wavelength under control 1244B of the controller 1260.
  • the captured images may be processed by controller 1260 to analyze the quality of one or more of the biometric aspects captured in the images.
  • the controller 1260 may select a best biometric aspect or feature from the images to be used for biometric authentication, or may select two or more biometric aspects or features to be used in combination for biometric authentication.
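To make the selection step concrete, the sketch below scores candidate biometric aspects with simple objective criteria (sharpness and contrast) and keeps the best one, or the top two for combined use. The specific metrics, the weighting, and the idea of working on per-aspect image crops are assumptions for this sketch; the patent describes objective criteria in general terms rather than prescribing formulas.

```python
# Illustrative sketch (assumed metrics): score crops of candidate biometric
# aspects (e.g., iris, periocular skin, eyelid shape) and select the best
# aspect(s) to pass to the biometric authentication algorithm.
import numpy as np

def sharpness(img: np.ndarray) -> float:
    # Variance of a Laplacian-like response as a simple focus/sharpness proxy.
    lap = (np.roll(img, 1, 0) + np.roll(img, -1, 0) +
           np.roll(img, 1, 1) + np.roll(img, -1, 1) - 4.0 * img)
    return float(lap.var())

def contrast(img: np.ndarray) -> float:
    return float(img.std())

def score_aspect(crop: np.ndarray) -> float:
    img = crop.astype(np.float32)
    return 0.7 * sharpness(img) + 0.3 * contrast(img)   # assumed weighting

def select_aspects(crops, use_top_two=False):
    """crops: dict mapping aspect name -> image crop; returns aspect name(s) to use."""
    ranked = sorted(crops, key=lambda name: score_aspect(crops[name]), reverse=True)
    return ranked[:2] if use_top_two else ranked[:1]
```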
  • the first wavelength may be emitted by the illumination source 1230 for capturing an image or images for a first portion of algorithmic processing for biometric authentication
  • the second wavelength may be emitted by the illumination source 1230 for capturing another image or images for a second portion of algorithmic processing for biometric authentication.
  • the first wavelength may be used to capture images (e.g., of the iris) for use in an anti-spoofing process
  • the second wavelength may be used to capture images (e.g., of the iris) for use in biometric authentication.
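One possible way for controller 1260 to split the two capture streams between the two portions of processing described above is sketched here. The wavelength bands follow the mid-800 nm / mid-900 nm example given earlier, and the two pipeline functions are placeholders standing in for whatever anti-spoofing and iris-matching algorithms a device actually uses.

```python
# Hypothetical routing sketch (FIGS. 12B/12C): frames captured under one
# wavelength feed anti-spoofing, frames under the other feed iris matching.
# The pipeline functions below are placeholders, not real implementations.
def run_anti_spoofing(image) -> bool:
    # Placeholder: a real system would check liveness cues at this wavelength.
    return image is not None

def run_iris_match(image) -> bool:
    # Placeholder: a real system would extract and match an iris template.
    return image is not None

def authenticate(frames):
    """frames: iterable of (wavelength_nm, image) pairs from the eye camera."""
    spoof_ok, identity_ok = False, False
    for wavelength_nm, image in frames:
        if abs(wavelength_nm - 940) <= 30:        # mid-900 nm band: anti-spoofing
            spoof_ok = run_anti_spoofing(image)
        elif abs(wavelength_nm - 850) <= 30:      # mid-800 nm band: authentication
            identity_ok = run_iris_match(image)
    return spoof_ok and identity_ok
```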
  • FIGS. 13 A and 13B illustrate a system that includes a camera with a photosensor that concurrently captures multiple images at different wavelengths, according to some embodiments.
  • a camera sensor 1350 may be provided that is configured to concurrently capture two (or more) images at different wavelengths.
  • every other pixel is configured to capture light at a particular wavelength.
  • the white pixels 1352A may be configured to capture light in the mid-800 nm range, and the shaded pixels 1352B may be configured to capture light in the mid-900 nm range.
  • individual filters may be applied to each pixel 1352, with a first filter applied to pixels 1352A and a second filter applied to pixels 1352B.
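A minimal sketch of how a controller might separate a checkerboard-filtered readout like the one in FIG. 13A into two per-wavelength images is shown below. The assumed layout (pixels alternating by the parity of row plus column) and the neighbor-averaging fill are illustrative choices, not details taken from the patent.

```python
# Sketch (assumed checkerboard layout): split one dual-wavelength sensor frame
# (FIG. 13A) into two per-wavelength images, filling the "missing" pixels of
# each band by averaging their four neighbors.
import numpy as np

def split_checkerboard(raw: np.ndarray):
    """raw: 2-D frame where (row + col) even pixels sense wavelength A and
    (row + col) odd pixels sense wavelength B (layout assumed for this sketch)."""
    h, w = raw.shape
    rows, cols = np.mgrid[0:h, 0:w]
    mask_a = (rows + cols) % 2 == 0

    padded = np.pad(raw.astype(np.float32), 1, mode="edge")
    neighbor_mean = (padded[:-2, 1:-1] + padded[2:, 1:-1] +
                     padded[1:-1, :-2] + padded[1:-1, 2:]) / 4.0

    img_a = np.where(mask_a, raw, neighbor_mean)   # wavelength-A image
    img_b = np.where(mask_a, neighbor_mean, raw)   # wavelength-B image
    return img_a, img_b
```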
  • FIG. 13B illustrates an example imaging system that includes light sources (e.g., LEDs) that emit light at multiple wavelengths, and in which the camera includes a camera sensor 1350 that is configured to concurrently capture two (or more) images at different wavelengths, according to some embodiments.
  • the imaging system may include, but is not limited to, one or more cameras 1340, an illumination source 1330, and a controller 1360.
  • the eye camera 1340 is pointed towards the eye 1392 to receive reflected light from the illumination source 1330.
  • the eye camera 1340 may instead or also capture images of the periorbital region and portions of the face. Note that in some embodiments, the eye camera 1340 may image a reflection off a hot mirror as shown in FIG. 1B.
  • the eye camera 1340 may image the eye 1392 through one or more intermediate optical elements as shown in FIG. 1C.
  • the illumination source 1330 may be configured to emit light at multiple wavelengths, for example as illustrated in FIG. 12A.
  • the eye camera(s) 1340 may concurrently capture at least two images 1342A and 1342B of the eye 1392 at the multiple wavelengths using a sensor 1350 as illustrated in FIG. 13A, with LEDs 1332A and 1332B concurrently illuminating the eye 1392 at both wavelengths under control 1344 of the controller 1360.
  • FIG. 14 is a flowchart of a method for sequentially capturing and processing images at multiple wavelengths, according to some embodiments.
  • light sources emit light at a first wavelength towards the user's eyes.
  • the camera captures images at the first wavelength.
  • the light sources emit light at a second wavelength towards the user's eyes.
  • the camera captures images at the second wavelength.
  • the images are processed.
  • If the method is not done, then the method returns to element 1410. Otherwise, the method ends.
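The flowchart above maps onto a small control loop; one possible structure is sketched below. The illumination and camera objects and their method names are hypothetical hardware wrappers, and the wavelength pair is only the example used earlier in this description.

```python
# Sketch of the sequential multi-wavelength capture loop of FIG. 14. The
# illumination/camera interfaces and their method names are assumed wrappers.
def sequential_capture_loop(illumination, camera, process,
                            wavelengths=(850, 940), done=lambda: False):
    """Capture one frame per wavelength each pass and hand the set to process()."""
    while not done():
        frames = []
        for nm in wavelengths:
            illumination.emit(wavelength_nm=nm)    # emit at this wavelength
            frames.append((nm, camera.capture()))  # capture at this wavelength
        illumination.off()
        process(frames)                            # e.g., authenticate(frames) above
```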
  • FIG. 15 is a flowchart of a method for concurrently capturing and processing images at multiple wavelengths, according to some embodiments.
  • light sources emit light at multiple wavelengths towards the user's eyes.
  • the camera concurrently captures images for each wavelength, for example using a photosensor 1350 as illustrated in FIG. 13A.
  • the images are processed.
  • If the method is not done, then the method returns to element 1510. Otherwise, the method ends.
  • Embodiments of methods and apparatus for biometric authentication are described in which a current eye pose is determined and evaluated to determine if the current pose is satisfactory, and in which the eye pose may be improved by the user manually adjusting the device or their pose/gaze direction in response to a signal from the controller, and/or in which the imaging system is mechanically adjusted at the direction of the controller to improve the current view of the eye.
  • A method executed on the controller may identify the user’s current eye location and/or orientation (pose), for example by capturing and evaluating one or more images of the eye(s). The controller may then evaluate how beneficial the current pose is for biometric authentication.
  • the controller may provide feedback to the user to prompt the user to adjust their pose (e.g., by changing the direction of their gaze) or to manually adjust the device (e.g., by manually moving the device’s position in relation to their eyes).
  • the controller may direct the imaging system hardware to mechanically adjust the imaging system, for example by slightly moving or tilting the camera, or by zooming in or out.
  • Adjusting the pose of the user with respect to the imaging system manually or mechanically may ensure a desired level of biometric authentication performance, as better images of the eye or eye region may be captured.
  • Feedback to the user may be a haptic, audio, or visual signal, or a combination of two or more haptic, audio, or visual signals.
  • the automatic adjustment of the imaging system directed by the controller may move a component or a combination of components, for example a module that includes at least the camera.
  • the manual or automatic adjustments may be a single step in the biometric authentication process, or alternatively may be performed in a control loop until certain qualities or objective criteria are achieved in the captured images.
  • FIG. 16 illustrates a system that provides feedback to the user and/or control signals to the imaging system to manually or mechanically adjust the viewing angle of the camera with respect to the user’s eye or periocular region, according to some embodiments.
  • the imaging system may include, but is not limited to, one or more cameras 1640, an illumination source 1630, and a controller 1660.
  • the eye camera 1640 is pointed towards the eye 1692 to receive reflected light from the illumination source 1630.
  • the eye camera 1640 may instead or also capture images of the periorbital region and/or portions of the face. Note that in some embodiments, the eye camera 1640 may image a reflection off a hot mirror as shown in FIG. 1B.
  • the eye camera 1640 may image the user’s eye 1692 through one or more intermediate optical elements as shown in FIG. 1C.
  • the eye camera(s) 1640 may capture 1642 one or more images of the user’s eye 1692.
  • the captured image(s) may be processed by controller 1660 to determine a current eye pose and to determine if the current eye pose is satisfactory for the biometric authentication process. If the eye pose is not satisfactory, then the controller 1660 may provide feedback 1662 to the user to prompt the user to change their eye pose and/or to manually adjust the device.
  • the controller 1660 may signal 1646 the imaging system to mechanically adjust the imaging system, for example by moving or tilting the camera 1640.
  • FIG. 17 is a flowchart of a method for providing feedback to the user to manually adjust the viewing angle of the camera with respect to the user’s eye or periocular region, according to some embodiments.
  • the method may, for example, be performed in a biometric authentication process.
  • the camera captures image(s) of the user's eye region.
  • the controller determines from the image(s) if the alignment of the camera with the desired feature(s) is good.
  • If the alignment is not good, the controller may prompt the user to adjust their gaze and/or to manually adjust the device to obtain a better viewing angle, and the method returns to element 1700.
  • If the alignment is good, then one or more image(s) may be processed as indicated at 1740.
  • If the method is not done, it returns to 1700. Otherwise, the method is done.
  • FIG. 18 is a flowchart of a method for providing control signals to the imaging system to mechanically adjust the viewing angle of the camera with respect to the user’s eye or periocular region, according to some embodiments.
  • the method may, for example, be performed in a biometric authentication process.
  • the camera captures image(s) of the user's eye region.
  • the controller determines from the image(s) if the alignment of the camera with the desired feature(s) is good.
  • If the alignment is not good, the controller may signal the imaging system to mechanically adjust the device/camera to obtain a better viewing angle, and the method returns to element 1800.
  • If the alignment is good, then one or more image(s) may be processed as indicated at 1840.
  • If the method is not done, it returns to 1800. Otherwise, the method is done.
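FIGS. 17 and 18 share the same capture, check, adjust, retry structure; a combined sketch is given below. The alignment test, the feedback channel, and the camera actuator interface are hypothetical stand-ins for the controller's actual logic and hardware.

```python
# Combined sketch of FIGS. 17 and 18: capture an eye-region image, check
# alignment, and either prompt the user or mechanically adjust the camera
# before retrying. All callables passed in are hypothetical stand-ins.
def acquire_with_adjustment(camera, alignment_ok, process,
                            prompt_user=None, adjust_camera=None,
                            max_attempts=5):
    for _ in range(max_attempts):
        image = camera.capture()
        if alignment_ok(image):
            return process(image)       # alignment good: process the image(s)
        if prompt_user is not None:     # FIG. 17 path: ask the user to adjust
            prompt_user("Please look toward the camera or reposition the device.")
        if adjust_camera is not None:   # FIG. 18 path: move/tilt/zoom the camera
            adjust_camera(image)
    return None                         # no acceptable view after several tries
```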
  • FIGS. 19A and 19B are block diagrams illustrating a device that may include components and implement methods as illustrated in FIGS. 1 through 18, according to some embodiments.
  • An example application of the methods for improving the performance of imaging systems used in biometric authentication processes as described herein is in a handheld device 3000 such as a smartphone, pad, or tablet.
  • FIG. 19A shows a side view of an example device 3000
  • FIG. 19B shows an example top view of the example device 3000.
  • Device 3000 may include, but is not limited to, a display screen (not shown), a controller 3060 comprising one or more processors, memory 3070, pose, motion, and orientation sensors (not shown), and one or more cameras or sensing devices such as visible light cameras and depth sensors (not shown).
  • a camera 3080 and illumination source 3050 as described herein may be attached to or integrated in the device 3000, and the device 3000 may be held and positioned by the user so that the camera 3080 can capture image(s) of the user’s eye or eye region while illuminated by the illumination source 3050.
  • the captured images may, for example, be processed by controller 3060 to authenticate the person, for example via an iris authentication process.
  • device 3000 as illustrated in FIGS. 19A and 19B is given by way of example, and is not intended to be limiting.
  • shape, size, and other features of a device 3000 may differ, and the locations, numbers, types, and other features of the components of a device 3000 may vary.
  • FIG. 20 illustrates an example head-mounted device (HMD) that may include components and implement methods as illustrated in FIGS. 1 through 18, according to some embodiments.
  • the HMD 4000 may, for example be a component in a mixed or augmented reality (MR) system.
  • HMD 4000 as illustrated in FIG. 20 is given by way of example, and is not intended to be limiting.
  • the shape, size, and other features of an HMD 4000 may differ, and the locations, numbers, types, and other features of the components of an HMD 4000 may vary.
  • HMD 4000 may include, but is not limited to, a display and two optical lenses (eyepieces) (not shown), mounted in a wearable housing or frame.
  • As shown in FIG. 20, HMD 4000 may be positioned on the user’s head 4090 such that the display and eyepieces are disposed in front of the user’s eyes 4092. The user looks through the eyepieces 4020 onto the display. HMD 4000 may also include sensors that collect information about the user’s environment (video, depth information, lighting information, etc.) and about the user (e.g., eye tracking sensors).
  • the sensors may include, but are not limited to one or more eye cameras 4040 (e.g., infrared (IR) cameras) that capture views of the user’s eyes 4092, one or more scene (visible light) cameras (e.g., RGB video cameras) that capture images of the real world environment in a field of view in front of the user (not shown), and one or more ambient light sensors that capture lighting information for the environment (not shown).
  • a controller 4060 for the MR system may be implemented in the HMD 4000, or alternatively may be implemented at least in part by an external device (e.g., a computing system) that is communicatively coupled to HMD 4000 via a wired or wireless interface.
  • Controller 4060 may include one or more of various types of processors, image signal processors (ISPs), graphics processing units (GPUs), coder/decoders (codecs), and/or other components for processing and rendering video and/or images.
  • Controller 4060 may render frames (each frame including a left and right image) that include virtual content based at least in part on inputs obtained from the sensors, and may provide the frames to the display.
  • FIG. 20 further illustrates components of an HMD and MR system, according to some embodiments.
  • an imaging system for the MR system may include, but is not limited to, one or more eye cameras 4040 and an IR light source 4030.
  • An IR light source 4030 (e.g., IR LEDs) may be positioned in the HMD 4000 (e.g., around the eyepieces 4020, or elsewhere in the HMD 4000) to illuminate the user’s eyes 4092 with IR light.
  • At least one eye camera 4040 (e.g., an IR camera, for example a 400x400 pixel count camera or a 600x600 pixel count camera, that operates at 850 nm or 940 nm, or at some other IR wavelength or combination of wavelengths, and that captures frames, for example at a rate of 60-120 frames per second (FPS)) is located at each side of the user 4090’s face.
  • the eye cameras 4040 may be positioned in the HMD 4000 on each side of the user 4090’s face to provide a direct view of the eyes 4092, a view of the eyes 4092 through the eyepieces 4020, or a view of the eyes 4092 via reflection off hot mirrors or other reflective components.
  • While FIG. 20 shows a single eye camera 4040 located on each side of the user 4090’s face, in some embodiments there may be two or more eye cameras 4040 on each side of the user 4090’s face.
  • a portion of IR light emitted by light source(s) 4030 reflects off the user 4090’s eyes and is captured by the eye cameras 4040 to image the user’s eyes 4092. Images captured by the eye cameras 4040 may be analyzed by controller 4060 to detect features (e.g., pupil), position, and movement of the user’s eyes 4092, and/or to detect other information about the eyes 4092 such as pupil dilation.
  • the point of gaze on the display may be estimated from the eye tracking; the estimated point of gaze may be used to cause the scene camera(s) of the HMD 4000 to expose images of a scene based on a region of interest (ROI) corresponding to the point of gaze
  • the estimated point of gaze may enable gaze-based interaction with content shown on the display.
  • brightness of the displayed images may be modulated based on the user’s pupil dilation as determined by the imaging system.
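As a toy illustration of that use, the sketch below estimates pupil size from an IR eye image (treating the darkest blob as the pupil) and maps the dilation to a display brightness level. The threshold, the radius range, and the brightness mapping are assumed values; a real system would use a proper pupil detector and a perceptual model.

```python
# Toy sketch (assumed threshold and mapping): estimate pupil dilation from an
# IR eye image and derive a display brightness level from it.
import numpy as np

def pupil_radius_px(eye_img: np.ndarray, dark_threshold: int = 40) -> float:
    """Rough pupil radius in pixels: area of the dark region -> equivalent circle."""
    area = int((eye_img < dark_threshold).sum())
    return float(np.sqrt(area / np.pi)) if area else 0.0

def display_brightness(eye_img: np.ndarray,
                       min_r: float = 8.0, max_r: float = 30.0) -> float:
    """Map pupil radius to a 0..1 brightness level: larger pupil -> dimmer display."""
    r = float(np.clip(pupil_radius_px(eye_img), min_r, max_r))
    dilation = (r - min_r) / (max_r - min_r)
    return 1.0 - 0.6 * dilation     # assumed: never dim below 40% of full brightness
```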
  • the HMD 4000 may implement one or more of the methods for improving the performance of the imaging systems used in biometric authentication or gaze tracking processes as illustrated in FIGS. 1 through 18 to capture and process images of the user’s eyes 4092.
  • Embodiments of an HMD 4000 as illustrated in FIG. 20 may, for example, be used in augmented reality (AR) or mixed reality (MR) applications to provide augmented or mixed reality views to the user 4090.
  • HMD 4000 may include one or more sensors, for example located on external surfaces of the HMD 4000, which collect information about the user 4090’s external environment (video, depth information, lighting information, etc.); the sensors may provide the collected information to controller 4060 of the MR system.
  • the sensors may include one or more visible light cameras (e.g., RGB video cameras) that capture video of the user’s environment that may be used to provide the user 4090 with a virtual view of their real environment.
  • video streams of the real environment captured by the visible light cameras may be processed by the controller 4060 of the HMD 4000 to render augmented or mixed reality frames that include virtual content overlaid on the view of the real environment, and the rendered frames may be provided to the HMD 4000’s display system.
  • FIG. 21 is a block diagram illustrating an example MR system that may include components and implement methods as illustrated in FIGS. 1 through 18, according to some embodiments.
  • an MR system may include an HMD 5000 such as a headset, helmet, goggles, or glasses.
  • HMD 5000 may implement any of various types of display technologies.
  • the HMD 5000 may include a display system that displays frames including left and right images on screens or displays (not shown) that are viewed by a user through eyepieces (not shown).
  • the display system may, for example, be a DLP (digital light processing), LCD (liquid crystal display), or LCoS (liquid crystal on silicon) technology display system.
  • objects at different depths or distances in the two images may be shifted left or right as a function of the triangulation of distance, with nearer objects shifted more than more distant objects.
  • HMD 5000 may include a controller 5060 configured to implement functionality of the MR system and to generate frames (each frame including a left and right image) that are provided to the HMD’s displays.
  • HMD 5000 may also include a memory 5062 configured to store software (code 5064) of the MR system that is executable by the controller 5060, as well as data 5068 that may be used by the MR system when executing on the controller 5060.
  • HMD 5000 may also include one or more interfaces (e.g., a Bluetooth technology interface, USB interface, etc.) configured to communicate with an external device via a wired or wireless connection.
  • the external device may be or may include any type of computing system or computing device, such as a desktop computer, notebook or laptop computer, pad or tablet device, smartphone, handheld computing device, game controller, game system, and so on.
  • controller 5060 may be a uniprocessor system including one processor, or a multiprocessor system including several processors (e.g., two, four, eight, or another suitable number). Controller 5060 may include central processing units (CPUs) configured to implement any suitable instruction set architecture, and may be configured to execute instructions defined in that instruction set architecture. For example, in various embodiments controller 5060 may include general-purpose or embedded processors implementing any of a variety of instruction set architectures (ISAs), such as the x86, PowerPC, SPARC, RISC, or MIPS ISAs, or any other suitable ISA. In multiprocessor systems, each of the processors may commonly, but not necessarily, implement the same ISA.
  • Controller 5060 may employ any microarchitecture, including scalar, superscalar, pipelined, superpipelined, out of order, in order, speculative, non- speculative, etc., or combinations thereof. Controller 5060 may include circuitry to implement microcoding techniques. Controller 5060 may include one or more processing cores each configured to execute instructions. Controller 5060 may include one or more levels of caches, which may employ any size and any configuration (set associative, direct mapped, etc.). In some embodiments, controller 5060 may include at least one graphics processing unit (GPU), which may include any suitable graphics processing circuitry. Generally, a GPU may be configured to render objects to be displayed into a frame buffer (e.g., one that includes pixel data for an entire frame).
  • a GPU may include one or more graphics processors that may execute graphics software to perform a part or all of the graphics operation, or hardware acceleration of certain graphics operations.
  • controller 5060 may include one or more other components for processing and rendering video and/or images, for example image signal processors (ISPs), coder/decoders (codecs), etc.
  • Memory 5062 may include any type of memory, such as dynamic random access memory (DRAM), synchronous DRAM (SDRAM), double data rate (DDR, DDR2, DDR3, etc.) SDRAM (including mobile versions of the SDRAMs such as mDDR3, etc., or low power versions of the SDRAMs such as LPDDR2, etc.), RAMBUS DRAM (RDRAM), static RAM (SRAM), etc.
  • one or more memory devices may be coupled onto a circuit board to form memory modules such as single inline memory modules (SIMMs), dual inline memory modules (DIMMs), etc.
  • the HMD 5000 may include one or more sensors that collect information about the user’s environment (video, depth information, lighting information, etc.).
  • the sensors may provide the information to the controller 5060 of the MR system.
  • the sensors may include, but are not limited to, visible light cameras (e.g., video cameras) and ambient light sensors.
  • HMD 5000 may be positioned on the user’s head such that the displays and eyepieces are disposed in front of the user’s eyes 5092A and 5092B.
  • IR light sources 5030A and 5030B (e.g., IR LEDs) may be positioned in the HMD 5000 (e.g., around the eyepieces, or elsewhere in the HMD 5000) to illuminate the user’s eyes 5092A and 5092B with IR light.
  • Eye cameras 5040A and 5040B may be located at each side of the user’s face.
  • the eye cameras 5040 may be positioned in the HMD 5000 to provide a direct view of the eyes 5092, a view of the eyes 5092 through the eyepieces 5020, or a view of the eyes 5092 via reflection off hot mirrors or other reflective components.
  • The eye cameras 5040A and 5040B shown are given by way of example, and are not intended to be limiting. In some embodiments, there may be a single eye camera 5040 located on each side of the user’s face. In some embodiments there may be two or more eye cameras 5040 on each side of the user’s face. For example, in some embodiments, a wide-angle camera 5040 and a narrower-angle camera 5040 may be used on each side of the user’s face.
  • a portion of IR light emitted by light sources 5030A and 5030B reflects off the user’s eyes 5092A and 5092B, is received at respective eye cameras 5040A and 5040B, and is captured by the eye cameras 5040A and 5040B to image the user’s eyes 5092A and 5092B.
  • Eye information captured by the cameras 5040A and 5040B may be provided to the controller 5060.
  • the controller 5060 may analyze the eye information (e.g., images of the user’s eyes 5092A and 5092B) to determine eye position and movement and/or other features of the eyes 5092A and 5092B.
  • the controller 5060 may perform a 3D reconstruction using images captured by the eye cameras 5040A and 5040B to generate 3D models of the user’s eyes 5092A and 5092B.
  • the 3D models of the eyes 5092A and 5092B indicate the 3D position of the eyes 5092A and 5092B with respect to the eye cameras 5040A and 5040B, which allows eye tracking algorithms executed by the controller to accurately track eye movement.
  • the HMD 5000 may implement one or more of the methods for improving the performance of the imaging systems used in biometric authentication or gaze tracking processes as illustrated in FIGS. 1 through 18 to capture and process images of the user’s eyes 5092A and 5092B.
  • the eye information obtained and analyzed by the controller 5060 may be used by the controller in performing various VR or AR system functions.
  • the point of gaze on the displays may be estimated from images captured by the eye cameras 5040A and 5040B; the estimated point of gaze may be used to cause the scene camera(s) of the HMD 5000 to expose images of a scene based on a region of interest (ROI) corresponding to the point of gaze.
  • the estimated point of gaze may enable gaze-based interaction with virtual content shown on the displays.
  • brightness of the displayed images may be modulated based on the user’s pupil dilation as determined by the imaging system.
  • the HMD 5000 may be configured to render and display frames to provide an augmented or mixed reality (MR) view for the user based at least in part according to sensor inputs.
  • the MR view may include renderings of the user’s environment, including renderings of real objects in the user’s environment, based on video captured by one or more video cameras that capture high-quality, high-resolution video of the user’s environment for display.
  • the MR view may also include virtual content (e.g., virtual objects, virtual tags for real objects, avatars of the user, etc.) generated by the MR system and composited with the displayed view of the user’s real environment.
  • Embodiments of the HMD 5000 as illustrated in FIG. 21 may also be used in virtual reality (VR) applications to provide VR views to the user.
  • the controller 5060 of the HMD 5000 may render or obtain virtual reality (VR) frames that include virtual content, and the rendered frames may be displayed to provide a virtual reality (as opposed to mixed reality) experience to the user.
  • rendering of the VR frames may be affected based on the point of gaze determined from the imaging system.
  • a person can interact with and/or sense a physical environment or physical world without the aid of an electronic device.
  • a physical environment can include physical features, such as a physical object or surface.
  • An example of a physical environment is a physical forest that includes physical plants and animals.
  • a person can directly sense and/or interact with a physical environment through various means, such as hearing, sight, taste, touch, and smell.
  • a person can use an electronic device to interact with and/or sense an extended reality (XR) environment that is wholly or partially simulated.
  • the XR environment can include mixed reality (MR) content, augmented reality (AR) content, virtual reality (VR) content, and/or the like.
  • With an XR system, some of a person’s physical motions, or representations thereof, can be tracked and, in response, characteristics of virtual objects simulated in the XR environment can be adjusted in a manner that complies with at least one law of physics.
  • the XR system can detect the movement of a user’s head and adjust graphical content and auditory content presented to the user similar to how such views and sounds would change in a physical environment.
  • the XR system can detect movement of an electronic device that presents the XR environment (e.g., a mobile phone, tablet, laptop, or the like) and adjust graphical content and auditory content presented to the user similar to how such views and sounds would change in a physical environment.
  • the XR system can adjust characteristic(s) of graphical content in response to other inputs, such as a representation of a physical motion (e.g., a vocal command).
  • Many different types of electronic systems can enable a person to sense and/or interact with an XR environment. Examples include heads-up displays (HUDs), head mountable systems, projection-based systems, windows or vehicle windshields having integrated display capability, displays formed as lenses to be placed on users’ eyes (e.g., contact lenses), headphones/earphones, input systems with or without haptic feedback (e.g., wearable or handheld controllers), speaker arrays, smartphones, tablets, and desktop/laptop computers.
  • a head mountable system can have one or more speaker(s) and an opaque display.
  • Other head mountable systems can be configured to accept an opaque external display (e.g., a smartphone).
  • the head mountable system can include one or more image sensors to capture images/video of the physical environment and/or one or more microphones to capture audio of the physical environment.
  • a head mountable system may have a transparent or translucent display, rather than an opaque display.
  • the transparent or translucent display can have a medium through which light is directed to a user’s eyes.
  • the display may utilize various display technologies, such as uLEDs, OLEDs, LEDs, liquid crystal on silicon, laser scanning light source, digital light projection, or combinations thereof.
  • An optical waveguide, an optical reflector, a hologram medium, an optical combiner, combinations thereof, or other similar technologies can be used for the medium.
  • the transparent or translucent display can be selectively controlled to become opaque.
  • Projection-based systems can utilize retinal projection technology that projects images onto users’ retinas. Projection systems can also project virtual objects into the physical environment (e.g., as a hologram or onto a physical surface).
  • Clause 1. A system comprising: a camera configured to capture images of an eye region of a user; a controller comprising one or more processors configured to: access an optical element description that describes optical properties of an optical element located on an optical path between the eye region of the user and the camera, wherein the optical element affects light on the optical path between the eye region of the user and the camera; receive one or more images of the eye region of the user from the camera; adjust the one or more images according to the optical properties of the optical element to account for effects of the optical element on quality of the one or more images; and perform biometric authentication for the user based at least in part on the adjusted one or more images.
  • Clause 3 The system as recited in clause 1, further comprising a memory that stores one or more different optical element descriptions for different optical elements, wherein the controller is configured to access the optical element description for the optical element from the memory upon detecting presence of the optical element on the optical path.
  • Clause 4. The system as recited in clause 1, wherein the optical element is a lens of an optical system in a device that includes the camera and the controller.
  • Clause 5. The system as recited in clause 1, wherein the optical element is a lens added to an optical system in a device that includes the camera and the controller.
  • Clause 7 The system as recited in clause 1, wherein, to perform biometric authentication for the user based at least in part on the adjusted one or more images, the controller is configured to: analyze quality of one or more biometric aspects captured in the one or more images according to one or more objective criteria; select at least one biometric aspect according to the analysis; and perform the biometric authentication process based at least in part on the selected at least one biometric aspect.
  • Clause 8 The system as recited in clause 7, wherein the objective criteria include one or more of exposure, contrast, shadows, edges, undesirable streaks, occluding objects, sharpness, uniformity of illumination, and absence of undesired reflections.
  • Clause 9. The system as recited in clause 7, wherein the one or more biometric aspects include one or more of an eye surface, eye veins, eyelids, eyebrows, skin features, nose features, and iris features, wherein the iris features include one or more of colors, patterns, and musculature.
  • Clause 10 The system as recited in clause 1, further comprising an illumination source comprising a plurality of light-emitting elements configured to emit light towards the eye region to be imaged by the camera.
  • a method comprising: performing, by a controller comprising one or more processors: accessing an optical element description that describes optical properties of an optical element located on an optical path between an eye region of a user and a camera, wherein the optical element affects light on the optical path between the eye region of the user and the camera; receiving one or more images of the eye region of the user from the camera; adjusting the one or more images according to the optical properties of the optical element to account for effects of the optical element on quality of the one or more images; and performing biometric authentication for the user based at least in part on the adjusted one or more images.
  • Clause 16 The method as recited in clause 14, further comprising accessing the optical element description for the optical element from memory upon detecting presence of the optical element on the optical path, wherein the memory stores one or more different optical element descriptions for different optical elements.
  • Clause 17 The method as recited in clause 14, wherein the optical element is a lens of an optical system in a device that includes the camera and the controller.
  • Clause 18 The method as recited in clause 14, wherein the optical element is a lens added to an optical system in a device that includes the camera and the controller.
  • Clause 19 The method as recited in clause 14, wherein the optical properties of the optical element are related to an optical prescription for the user.
  • Clause 20. The method as recited in clause 14, wherein performing biometric authentication for the user based at least in part on the adjusted one or more images comprises: analyzing quality of one or more biometric aspects captured in the one or more images according to one or more objective criteria; selecting at least one biometric aspect according to the analysis; and performing the biometric authentication process based at least in part on the selected at least one biometric aspect.
  • Clause 21 The method as recited in clause 20, wherein the objective criteria include one or more of exposure, contrast, shadows, edges, undesirable streaks, occluding objects, sharpness, uniformity of illumination, and absence of undesired reflections.
  • Clause 22. The method as recited in clause 20, wherein the one or more biometric aspects include one or more of an eye surface, eye veins, eyelids, eyebrows, skin features, nose features, and iris features, wherein the iris features include one or more of colors, patterns, and musculature.
  • Clause 23 The method as recited in clause 13, further comprising a plurality of light-emitting elements emitting light towards the eye region that is imaged by the camera.
  • Clause 24 The method as recited in clause 23, wherein the light-emitting elements are light-emitting diodes (LEDs).
  • Clause 25 The method as recited in clause 23, wherein the light-emitting elements are infrared (IR) light sources, and wherein the camera is an infrared camera.
  • Clause 27. A system comprising: a camera configured to capture images of an eye region of a user; an illumination source configured to emit light towards the eye region of the user to be imaged by the camera; an optical element located on an optical path between the eye region of the user and the camera, wherein the optical element is configured to diffract the light reflected off of the eye region of the user towards the camera, wherein diffracting the light improves viewing angle of the camera with respect to the eye region; and a controller comprising one or more processors configured to perform biometric authentication for the user based on one or more images of the eye region of the user captured by the camera.
  • Clause 29 The system as recited in clause 27, wherein the optical element is one of a prism, a lens, a waveguide, and a diffraction grating.
  • Clause 30 The system as recited in clause 27, wherein, to perform biometric authentication for the user based on one or more images of the eye region of the user captured by the camera, the controller is configured to: process the one or more images of the eye region captured by the camera to select one or more biometric aspects of the eye region; and perform the biometric authentication for the user based at least in part on the selected one or more biometric aspects.
  • Clause 31 The system as recited in clause 27, wherein the illumination source comprises a plurality of light-emitting elements configured to emit light towards the eye region to be imaged by the camera.
  • Clause 35. A method comprising: emitting, by an illumination source, light towards an eye region of a user to be imaged by a camera; diffracting, by an optical element located on an optical path between the eye region of the user and the camera, a portion of the light reflected off of the eye region of the user towards the camera, wherein diffracting the light improves viewing angle of the camera with respect to the eye region; and performing, by a controller comprising one or more processors, biometric authentication for the user based on one or more images of the eye region of the user captured by the camera.
  • Clause 37 The method as recited in clause 35, wherein the optical element is one of a prism, a lens, a waveguide, and a diffraction grating.
  • Clause 38 The method as recited in clause 35, wherein performing biometric authentication for the user based on one or more images of the eye region of the user captured by the camera comprises: processing the one or more images of the eye region captured by the camera to select one or more biometric aspects of the eye region; and performing the biometric authentication for the user based at least in part on the selected one or more biometric aspects.
  • Clause 39 The method as recited in clause 35, wherein the illumination source comprises a plurality of light-emitting elements that emit the light towards the eye region to be imaged by the camera.

Abstract

Methods and apparatus for biometric authentication in which two or more biometric features or aspects are captured and analyzed, individually or in combination, to identify and authenticate a person. An imaging system captures images of a person's iris, eye, periorbital region, and/or other regions of the person's face, and two or more features from the captured images are analyzed, individually or in combination, to identify and authenticate the person and/or to detect attempts to spoof the biometric authentication. Embodiments may improve the performance of biometric authentication systems, and may help reduce false positives and false negatives from the biometric authentication algorithms.
PCT/US2021/051615 2020-09-25 2021-09-22 Sélection automatique de biométrie sur la base de la qualité d'une image acquise WO2022066817A1 (fr)

Priority Applications (3)

Application Number Priority Date Filing Date Title
CN202180078422.9A CN116472564A (zh) 2020-09-25 2021-09-22 基于所获取图像的质量自动选择生物识别
US18/027,916 US20230379564A1 (en) 2020-09-25 2021-09-22 Biometric authentication system
EP21794287.9A EP4217920A1 (fr) 2020-09-25 2021-09-22 Sélection automatique de biométrie sur la base de la qualité d'une image acquise

Applications Claiming Priority (4)

Application Number Priority Date Filing Date Title
US202063083775P 2020-09-25 2020-09-25
US202063083767P 2020-09-25 2020-09-25
US63/083,775 2020-09-25
US63/083,767 2020-09-25

Publications (1)

Publication Number Publication Date
WO2022066817A1 true WO2022066817A1 (fr) 2022-03-31

Family

ID=78232387

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/US2021/051615 WO2022066817A1 (fr) 2020-09-25 2021-09-22 Sélection automatique de biométrie sur la base de la qualité d'une image acquise

Country Status (2)

Country Link
EP (1) EP4217920A1 (fr)
WO (1) WO2022066817A1 (fr)


Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20100316263A1 (en) * 2009-06-15 2010-12-16 Honeywell International Inc. Iris and ocular recognition system using trace transforms
GB2471192A (en) * 2009-06-15 2010-12-22 Honeywell Int Inc Iris and Ocular Recognition using Trace Transforms
WO2017052807A1 (fr) * 2015-09-24 2017-03-30 Microsoft Technology Licensing, Llc Authentification d'utilisateur au moyen de multiples techniques de capture
US20170205875A1 (en) * 2016-01-19 2017-07-20 Magic Leap, Inc. Eye image collection

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
SEONG-WHAN LEE; STAN Z. LI: "Proceedings of ICB: International Conference on Advances in Biometrics, Seoul, Korea, 27 to 29 August 2007", vol. 4642, 27 August 2007, SPRINGER VERLAG, Heidelberg, Germany, ISBN: 978-3-540-74548-8, article EMINE KRICHEN ET AL.: "Color-Based Iris Verification", pages: 997 - 1005, XP047467703 *

Also Published As

Publication number Publication date
EP4217920A1 (fr) 2023-08-02

Similar Documents

Publication Publication Date Title
US11360557B2 (en) Eye tracking system
US11442537B2 (en) Glint-assisted gaze tracker
US10877556B2 (en) Eye tracking system
KR20180057668A (ko) 눈 추적 가능한 웨어러블 디바이스들
US20240053823A1 (en) Eye Tracking System
US9934583B2 (en) Expectation maximization to determine position of ambient glints
US20230367857A1 (en) Pose optimization in biometric authentication systems
US20230334909A1 (en) Multi-wavelength biometric imaging system
US20230377302A1 (en) Flexible illumination for imaging systems
US20230315201A1 (en) Stray light mitigation in optical systems
US20230379564A1 (en) Biometric authentication system
US20230377370A1 (en) Multi-camera biometric imaging system
CN112584127B (zh) 基于注视的曝光
WO2022066817A1 (fr) Sélection automatique de biométrie sur la base de la qualité d'une image acquise
US20230281846A1 (en) Specular surface mapping
US20240104958A1 (en) User Eye Model Match Detection
US20240105046A1 (en) Lens Distance Test for Head-Mounted Display Devices
WO2024064376A1 (fr) Détection de correspondance de modèle d'œil d'utilisateur

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 21794287

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

ENP Entry into the national phase

Ref document number: 2021794287

Country of ref document: EP

Effective date: 20230425

WWE Wipo information: entry into national phase

Ref document number: 202180078422.9

Country of ref document: CN