CN108780229B - Eye image acquisition, selection and combination - Google Patents

Eye image acquisition, selection and combination

Info

Publication number
CN108780229B
CN108780229B
Authority
CN
China
Prior art keywords
eye
image
eye image
pose
display system
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201780018474.0A
Other languages
Chinese (zh)
Other versions
CN108780229A
Inventor
A. Kaehler
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Magic Leap Inc
Original Assignee
Magic Leap Inc
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Magic Leap Inc filed Critical Magic Leap Inc
Priority to CN202111528136.6A (published as CN114205574A)
Publication of CN108780229A
Application granted
Publication of CN108780229B
Legal status: Active
Anticipated expiration

Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N13/00Stereoscopic video systems; Multi-view video systems; Details thereof
    • H04N13/20Image signal generators
    • H04N13/204Image signal generators using stereoscopic image cameras
    • H04N13/239Image signal generators using stereoscopic image cameras using two 2D image sensors having a relative position equal to or related to the interocular distance
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/011Arrangements for interaction with the human body, e.g. for user immersion in virtual reality
    • G06F3/013Eye tracking input arrangements
    • AHUMAN NECESSITIES
    • A61MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61BDIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B3/00Apparatus for testing the eyes; Instruments for examining the eyes
    • A61B3/10Objective types, i.e. instruments for examining the eyes independent of the patients' perceptions or reactions
    • A61B3/11Objective types, i.e. instruments for examining the eyes independent of the patients' perceptions or reactions for measuring interpupillary distance or diameter of pupils
    • A61B3/112Objective types, i.e. instruments for examining the eyes independent of the patients' perceptions or reactions for measuring interpupillary distance or diameter of pupils for measuring diameter of pupils
    • AHUMAN NECESSITIES
    • A61MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61BDIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B3/00Apparatus for testing the eyes; Instruments for examining the eyes
    • A61B3/10Objective types, i.e. instruments for examining the eyes independent of the patients' perceptions or reactions
    • A61B3/113Objective types, i.e. instruments for examining the eyes independent of the patients' perceptions or reactions for determining or recording eye movement
    • AHUMAN NECESSITIES
    • A61MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61BDIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B3/00Apparatus for testing the eyes; Instruments for examining the eyes
    • A61B3/10Objective types, i.e. instruments for examining the eyes independent of the patients' perceptions or reactions
    • A61B3/14Arrangements specially adapted for eye photography
    • GPHYSICS
    • G02OPTICS
    • G02BOPTICAL ELEMENTS, SYSTEMS OR APPARATUS
    • G02B27/00Optical systems or apparatus not provided for by any of the groups G02B1/00 - G02B26/00, G02B30/00
    • G02B27/01Head-up displays
    • G02B27/017Head mounted
    • G02B27/0172Head mounted characterised by optical features
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F1/00Details not covered by groups G06F3/00 - G06F13/00 and G06F21/00
    • G06F1/16Constructional details or arrangements
    • G06F1/1613Constructional details or arrangements for portable computers
    • G06F1/163Wearable computers, e.g. on a belt
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F13/00Interconnection of, or transfer of information or other signals between, memories, input/output devices or central processing units
    • G06F13/38Information transfer, e.g. on bus
    • G06F13/40Bus structure
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/011Arrangements for interaction with the human body, e.g. for user immersion in virtual reality
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T1/00General purpose image data processing
    • G06T1/20Processor architectures; Processor configuration, e.g. pipelining
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T11/002D [Two Dimensional] image generation
    • G06T11/60Editing figures and text; Combining figures or text
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T13/00Animation
    • G06T13/203D [Three Dimensional] animation
    • G06T13/403D [Three Dimensional] animation of characters, e.g. humans, animals or virtual beings
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T19/00Manipulating 3D models or images for computer graphics
    • G06T19/006Mixed reality
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T3/00Geometric image transformation in the plane of the image
    • G06T3/40Scaling the whole image or part thereof
    • G06T3/4053Super resolution, i.e. output image resolution higher than sensor resolution
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/70Determining position or orientation of objects or cameras
    • G06T7/73Determining position or orientation of objects or cameras using feature-based methods
    • G06T7/74Determining position or orientation of objects or cameras using feature-based methods involving reference images or patches
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/20Image preprocessing
    • G06V10/22Image preprocessing by selection of a specific region containing or referencing a pattern; Locating or processing of specific regions to guide the detection or recognition
    • G06V10/235Image preprocessing by selection of a specific region containing or referencing a pattern; Locating or processing of specific regions to guide the detection or recognition based on user input or interaction
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/40Extraction of image or video features
    • G06V10/42Global feature extraction by analysis of the whole pattern, e.g. using frequency domain transformations or autocorrelation
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/98Detection or correction of errors, e.g. by rescanning the pattern or by human intervention; Evaluation of the quality of the acquired patterns
    • G06V10/993Evaluation of the quality of the acquired pattern
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00Scenes; Scene-specific elements
    • G06V20/20Scenes; Scene-specific elements in augmented reality scenes
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/18Eye characteristics, e.g. of the iris
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/18Eye characteristics, e.g. of the iris
    • G06V40/19Sensors therefor
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/18Eye characteristics, e.g. of the iris
    • G06V40/193Preprocessing; Feature extraction
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/18Eye characteristics, e.g. of the iris
    • G06V40/197Matching; Classification
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/20Movements or behaviour, e.g. gesture recognition
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/50Maintenance of biometric data or enrolment thereof
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N13/00Stereoscopic video systems; Multi-view video systems; Details thereof
    • H04N13/30Image reproducers
    • H04N13/332Displays for viewing with the aid of special glasses or head-mounted displays [HMD]
    • H04N13/344Displays for viewing with the aid of special glasses or head-mounted displays [HMD] with head-mounted left-right displays
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N13/00Stereoscopic video systems; Multi-view video systems; Details thereof
    • H04N13/30Image reproducers
    • H04N13/349Multi-view displays for displaying three or more geometrical viewpoints without viewer tracking
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20212Image combination
    • G06T2207/20221Image fusion; Image merging

Abstract

Systems and methods for eye image set selection, eye image acquisition, and eye image combination are described. Embodiments of systems and methods for eye image acquisition may include displaying graphics along a path connecting a plurality of eye pose regions. Eye images at a plurality of locations along the path may be obtained, and an iris code may be generated based at least in part on at least some of the obtained eye images.

Description

Eye image acquisition, selection and combination
Cross Reference to Related Applications
This application claims priority under 35 U.S.C. § 119(e) to U.S. Provisional Application No. 62/280,456, entitled "EYE IMAGE COLLECTION," filed January 19, 2016; U.S. Provisional Application No. 62/280,515, entitled "EYE IMAGE COMBINATION," filed January 19, 2016; and U.S. Provisional Application No. 62/280,437, entitled "EYE IMAGE SET SELECTION," filed January 19, 2016; the contents of each of which are hereby incorporated by reference herein in their entirety.
Technical Field
The present disclosure relates to virtual reality and augmented reality imaging and visualization systems, and in particular to systems and methods for acquiring and processing eye images.
Background
Modern computing and display technologies have facilitated the development of systems for so-called "virtual reality" or "augmented reality" experiences, in which digitally reproduced images, or portions thereof, are presented to a user in a manner wherein they seem to be, or may be perceived as, real. A virtual reality, or "VR," scenario typically involves the presentation of digital or virtual image information without transparency to other actual real-world visual input; an augmented reality, or "AR," scenario typically involves the presentation of digital or virtual image information as an augmentation to visualization of the actual world around the user; and a mixed reality, or "MR," scenario typically involves merging the real and virtual worlds to produce new environments in which physical and virtual objects coexist and interact in real time. As it turns out, the human visual perception system is very complex, and producing VR, AR, or MR technology that facilitates a comfortable, natural-feeling, rich presentation of virtual image elements among other virtual or real-world image elements is challenging. The systems and methods disclosed herein address various challenges related to VR, AR, and MR technology.
Disclosure of Invention
Examples of wearable display devices that can process eye images (such as selecting an eye image, capturing an eye image, and combining eye images) are described.
In one aspect, a method for eye image set selection is disclosed. The method is performed under control of a hardware computer processor. The method comprises: obtaining a plurality of eye images; for each eye image of the plurality of eye images, determining an image quality metric associated with that eye image and comparing the determined image quality metric to an image quality threshold to identify eye images that pass the image quality threshold, wherein the image quality threshold corresponds to an image quality level for generating an iris code; selecting, from the plurality of eye images, a set of eye images in which each eye image passes the image quality threshold; and generating an iris code using the set of eye images. A head-mounted display system may include a processor that performs the method for eye image set selection.
In another aspect, a method for eye image acquisition is described. The method is performed under control of a hardware computer processor. The method includes displaying a graphic along a path connecting a plurality of eye pose regions; obtaining eye images at a plurality of locations along the path; and generating an iris code based at least in part on at least some of the obtained eye images. The head mounted display system may include a processor that performs the method for eye image acquisition.
In another aspect, a method for eye image combining is described. The method is performed under control of a hardware computer processor. The method comprises the following steps: accessing a plurality of eye images; and performing (1) an image fusion operation on the plurality of eye images, (2) an iris code fusion operation on the plurality of eye images, or both (1) and (2). The image fusion operation includes fusing at least some of the plurality of eye images to provide a blended image and generating a blended iris code from the blended image. The iris code fusion operation includes generating iris codes for at least some of the plurality of eye images and combining the generated iris codes to provide a hybrid iris code. The head mounted display system may include a processor that performs the method for eye image combining.
The details of one or more implementations of the subject matter described in this specification are set forth in the accompanying drawings and the description below. Other features, aspects, and advantages will become apparent from the description, the drawings, and the claims. Neither this summary nor the following detailed description is intended to define or limit the scope of the inventive subject matter.
Drawings
FIG. 1 depicts an illustration of an augmented reality scene with certain virtual reality objects and certain actual reality objects viewed by a person.
Fig. 2 schematically shows an example of a wearable display system.
FIG. 3 schematically illustrates aspects of a method of simulating a three-dimensional image using multiple depth planes.
Fig. 4 schematically shows an example of a waveguide stack for outputting image information to a user.
Fig. 5 shows an example exit beam that may be output by a waveguide.
FIG. 6 is a schematic diagram showing a display system for generating a multi-focal stereoscopic display, image or light field including a waveguide device, an optical coupler subsystem optically coupling light to or from the waveguide device, and a control subsystem.
FIG. 7 shows a flow diagram of an exemplary eye image set selection routine.
FIG. 8 schematically illustrates an example scene on a display of a head-mounted display system for eye image set acquisition.
Fig. 9 shows a flow diagram of an exemplary eye image acquisition routine.
Fig. 10 shows a flowchart of an exemplary eye image combination routine.
Throughout the drawings, reference numbers may be reused to indicate correspondence between reference elements. The drawings are provided to illustrate example embodiments described herein and are not intended to limit the scope of the present disclosure.
Detailed Description
Overview
Eye image set selection example
Certain eye images obtained from one or more imaging sources, such as a camera, may be selected and used for various biometric applications. For example, after obtaining eye images, an image quality metric may be determined for some or all of the obtained eye images. The image quality metric may be determined based on, for example, an amount of blur, a number or percentage of pixels that are not occluded, color saturation, image resolution (such as the resolution of the region of interest), or any combination thereof. Different eye images may be associated with different types of image quality metrics. The determined image quality metric for each eye image may be compared to a respective image quality threshold.
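As a rough, non-authoritative sketch of how such a metric might be computed, the Python snippet below scores an eye image using the variance of its Laplacian as a blur proxy together with the fraction of unoccluded pixels; the function name, the combination rule, and the use of OpenCV are illustrative assumptions rather than anything specified by this disclosure.

```python
import cv2
import numpy as np

def image_quality_metric(eye_image: np.ndarray, occlusion_mask: np.ndarray) -> float:
    """Score an eye image; higher values indicate better quality.

    eye_image: grayscale eye image (H x W, uint8).
    occlusion_mask: boolean array of the same shape, True where the iris
    is not occluded (e.g., by eyelids, eyelashes, or specular reflections).
    """
    # Sharpness proxy: variance of the Laplacian (low values suggest blur).
    sharpness = cv2.Laplacian(eye_image, cv2.CV_64F).var()
    # Fraction of pixels that are unoccluded.
    unoccluded_fraction = float(occlusion_mask.mean())
    # Combine the cues; this weighting is purely illustrative.
    return sharpness * unoccluded_fraction
```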
A set of eye images may be selected in which each eye image has an image quality metric that satisfies the corresponding image quality threshold. Additionally or alternatively, the selected set may include a fixed number of eye images (such as the eye images with the highest image quality metrics). The selected set of eye images may be used for various biometric applications, such as eye pose determination (e.g., the orientation of one or both eyes of the wearer) or iris code generation. For example, the selected eye images may be used to generate one or more iris codes.
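Continuing the same hypothetical sketch, selecting the set of eye images that pass the threshold (optionally keeping only a fixed number with the top scores) might look like the following; the names and the per-image (image, mask) pairing are assumptions for illustration.

```python
def select_eye_image_set(eye_images, quality_threshold, max_images=None):
    """Return the eye images whose quality metric passes the threshold.

    eye_images: sequence of (image, occlusion_mask) pairs, scored with the
    image_quality_metric() sketch above.
    """
    scored = [(image_quality_metric(img, mask), img) for img, mask in eye_images]
    passing = [(score, img) for score, img in scored if score >= quality_threshold]
    # Optionally keep only a fixed number of images with the top scores.
    passing.sort(key=lambda pair: pair[0], reverse=True)
    if max_images is not None:
        passing = passing[:max_images]
    return [img for _, img in passing]
```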
Eye image acquisition example
Eye images for a plurality of eye pose regions may be obtained for various biometric applications. For example, a display (e.g., of a head-mounted display system) may be associated with a plurality of eye pose regions (e.g., 2, 3, 4, 5, 6, 9, 12, 18, 24, 36, 49, 64, 128, 256, 1000 or more), and one or more eye images may be obtained for some or all of the eye pose regions. The eye pose regions may be of the same or different size or shape (such as rectangular, square, circular, triangular, oval, diamond). An eye pose region can be considered as a connected subset of a two-dimensional real coordinate space ℝ² or a two-dimensional positive integer coordinate space (ℤ⁺)², which specifies the eye pose region in terms of the angular space of the wearer's eye poses. For example, an eye pose region may lie between a particular θ_min and a particular θ_max in azimuthal deflection (measured from a reference azimuth) and between a particular φ_min and a particular φ_max in zenithal deflection (also referred to as polar deflection).
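As a minimal sketch of how eye pose regions could be represented as angular subsets, the snippet below assumes a simple rectangular tiling of the display's angular extent; the class, the radian convention, and the grid layout are illustrative assumptions, not part of this disclosure.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class EyePoseRegion:
    """Angular bounds of one eye pose region (angles in radians)."""
    theta_min: float  # minimum azimuthal deflection from the reference azimuth
    theta_max: float  # maximum azimuthal deflection
    phi_min: float    # minimum zenithal (polar) deflection
    phi_max: float    # maximum zenithal (polar) deflection

    def contains(self, theta: float, phi: float) -> bool:
        """True if an eye pose (theta, phi) falls inside this region."""
        return (self.theta_min <= theta < self.theta_max
                and self.phi_min <= phi < self.phi_max)

def tile_display(n_cols: int, n_rows: int, theta_span, phi_span):
    """Tile the display's angular extent into n_cols x n_rows eye pose regions."""
    t0, t1 = theta_span
    p0, p1 = phi_span
    dt = (t1 - t0) / n_cols
    dp = (p1 - p0) / n_rows
    return [EyePoseRegion(t0 + c * dt, t0 + (c + 1) * dt,
                          p0 + r * dp, p0 + (r + 1) * dp)
            for r in range(n_rows) for c in range(n_cols)]
```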
Graphics (such as butterflies, bumblebees, or avatars) or animations of graphics may be displayed in or across two or more eye pose areas such that one or both eyes of a user of the display are directed or attracted to the eye pose areas. The graphics may be displayed in a random mode, a flight mode, a blinking mode, a wave mode, or a story mode in the eye-pose area or across two or more eye-pose areas. The speed of the moving pattern may be substantially constant or may be variable. For example, the graphic may slow down or stop in certain eye pose areas (e.g., where one or more eye images are taken), or the graphic may speed up or skip other eye pose areas (e.g., where eye images are not needed or desired). The path of the graphic may be continuous or discontinuous (e.g., the graphic 805 may skip over or around certain eye pose regions).
When a graphic is displayed in an eye pose region, an eye image of the user associated with that eye pose region may be obtained. After determining that an image quality metric of that eye image (e.g., an amount of blur, or a number or percentage of unoccluded pixels) passes or satisfies a corresponding image quality threshold, the graphic or an animation of the graphic may be displayed in another eye pose region. The graphics displayed in the two eye pose regions may be the same or different. When the graphic is displayed in the other eye pose region, another eye image of the user associated with that eye pose region may be obtained, and its image quality metric may likewise be determined to pass or satisfy a corresponding image quality threshold. The image quality metric (or the corresponding image quality threshold) may be the same or different for eye images obtained for different eye pose regions. This process may be repeated for other eye pose regions of the display. For example, the graphic may move along a path from one eye pose region to another.
If an eye image associated with a certain eye pose region does not pass or meet the corresponding image quality threshold, the graphic may be displayed in that particular region until an eye image of sufficient quality is obtained. Alternatively or additionally, if an eye image cannot be obtained for a certain eye pose region after a threshold number of attempts (e.g., three), acquisition may skip that eye pose region, or pause acquisition on it for a period of time, while eye images are obtained from one or more other pose regions. An eye image may not be obtained for an eye pose region at all if one cannot be obtained after the threshold number of attempts. After eye images have been obtained for a sufficient number of eye pose regions, or for the eye pose regions of interest, one or more of the eye images may be used for various biometric applications (e.g., an iris code may be generated based on one or more of the obtained eye images).
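The per-region acquisition loop described above might be sketched as follows; display_graphic, capture_eye_image, and the retry policy are hypothetical stand-ins for the acquisition routine (routine 900 of FIG. 9) rather than a definitive implementation.

```python
def acquire_eye_images(regions, display_graphic, capture_eye_image,
                       quality_metric, quality_threshold, max_attempts=3):
    """Collect at most one acceptable eye image per eye pose region.

    display_graphic(region): draws or animates the graphic inside a region.
    capture_eye_image(): grabs a frame from the inward-facing eye camera.
    quality_metric(image): scored against quality_threshold as discussed above.
    Regions that never yield an acceptable image are skipped.
    """
    collected = {}
    for region in regions:  # e.g., regions ordered along the graphic's path
        for _ in range(max_attempts):
            display_graphic(region)       # direct the wearer's gaze to this region
            image = capture_eye_image()   # image the eye while it fixates the graphic
            if quality_metric(image) >= quality_threshold:
                collected[region] = image
                break
        # If no acceptable image was obtained, the region is simply skipped.
    return collected
```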
Eye image combination example
The eye images obtained from one or more imaging sources may be combined or fused into one or more hybrid eye images (also referred to as combined or fused eye images), which in turn may be used for biometric applications. For example, after the eye images are obtained, an eye pose may be identified for each eye image. The eye pose may be associated with a particular display classification, such as an eye pose region assignment for the display. One or both of image fusion or iris code fusion may be applied to the obtained eye images. For image fusion, some or all of the obtained eye images may be fused into a hybrid eye image using, for example, super resolution, spatial domain fusion, or transform domain fusion, and an iris code may be extracted, generated, or determined from the hybrid eye image. For iris code fusion, an iris code may be generated for each of some or all of the obtained eye images, and the resulting iris codes may then be combined into a hybrid iris code using, for example, a median filter or a Bayesian filter. Each iris code associated with a particular eye pose region may contribute to the overall hybrid iris code. A confidence score may be generated or determined for the iris code or the hybrid iris code; the confidence score may be based on the fraction of eye pose regions that were sampled. One or both of the iris code generated by image fusion and the hybrid iris code generated by iris code fusion may then be used in one or more biometric applications.
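As one hedged illustration of the iris code fusion branch, the sketch below combines binary iris codes with a per-bit majority vote (a simple stand-in for the median or Bayesian filtering mentioned above) and reports a confidence score as the fraction of eye pose regions that contributed a code; the representation of an iris code as a fixed-length 0/1 array is an assumption.

```python
import numpy as np

def fuse_iris_codes(iris_codes_by_region, n_regions_total):
    """Combine per-region binary iris codes into a hybrid iris code.

    iris_codes_by_region: dict mapping an eye pose region to a binary iris
    code (np.ndarray of 0/1 values, all with the same shape).
    Returns (hybrid_code, confidence), where confidence is the fraction of
    eye pose regions that contributed a code.
    """
    codes = np.stack(list(iris_codes_by_region.values()))  # (n_codes, code_len)
    # Per-bit majority vote across the contributing iris codes.
    hybrid_code = (codes.mean(axis=0) >= 0.5).astype(np.uint8)
    confidence = len(iris_codes_by_region) / n_regions_total
    return hybrid_code, confidence
```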
Augmented reality scene examples
FIG. 1 depicts an illustration of an augmented reality scene with certain virtual reality objects and certain actual reality objects viewed by a person. FIG. 1 depicts an augmented reality scene 100 in which a user of AR technology sees a real-world park-like setting 110 featuring people, trees, buildings in the background, and a concrete platform 120. In addition to these items, the user of the AR technology also perceives that he "sees" a robot statue 130 standing on the real-world platform 120, and a flying cartoon-like avatar character 140 that appears to be a personification of a bumblebee, even though these elements do not exist in the real world.
In order for a three-dimensional (3D) display to produce a true sensation of depth, and more specifically a simulated sensation of surface depth, it is desirable for each point in the display's field of view to generate an accommodative response corresponding to its virtual depth. If the accommodative response to a display point does not correspond to the virtual depth of that point, as determined by the binocular depth cues of convergence and stereopsis, the human eye may experience an accommodation conflict, resulting in unstable imaging, harmful eye strain, headaches and, in the absence of accommodation information, an almost complete lack of surface depth.
The VR, AR, and MR experiences may be provided by a display system having a display in which images corresponding to multiple depth planes are provided to a viewer. The images may be different for each depth plane (e.g., providing a slightly different rendering of a scene or object) and may be focused individually by the eyes of the viewer, thereby facilitating providing depth cues to the user based on eye accommodation required to focus different image features for scenes located on different depth planes and/or based on observing different image feature defocuses on different depth planes. As discussed elsewhere herein, such depth cues provide reliable depth perception. To generate or enhance VR, AR, and MR experiences, the display system may use biometric information to enhance these experiences.
Extracting biometric information from the eye generally includes a procedure for segmenting the iris within an eye image. Iris segmentation can involve operations including locating the iris boundaries, which includes finding the pupillary and limbic boundaries of the iris, localizing the upper or lower eyelids if they occlude the iris, and detecting and excluding occlusions from eyelashes, shadows, or reflections, and so forth. For example, the eye image can be included in an image of the face or can be an image of the periocular region. To perform iris segmentation, both the boundary of the pupil (the interior boundary of the iris) and the limbus (the exterior boundary of the iris) can be identified as separate segments of image data.
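A toy sketch of the boundary-finding step is shown below, using OpenCV's Hough circle transform to approximate the pupillary and limbic boundaries as circles; the parameters, and the assumption of roughly circular, largely unoccluded boundaries, are illustrative simplifications and not the segmentation method of this disclosure.

```python
import cv2
import numpy as np

def find_iris_boundaries(eye_image_gray: np.ndarray):
    """Approximate the pupillary and limbic boundaries as circles (x, y, r).

    Returns (pupil_circle, limbus_circle) or None if either detection fails.
    A production segmenter would also model eyelid occlusion, eyelashes,
    shadows, and reflections, which are omitted here.
    """
    blurred = cv2.medianBlur(eye_image_gray, 5)
    h = eye_image_gray.shape[0]
    # Pupil: a small, high-contrast dark circle near the image center.
    pupil = cv2.HoughCircles(blurred, cv2.HOUGH_GRADIENT, dp=1, minDist=h,
                             param1=100, param2=30,
                             minRadius=h // 20, maxRadius=h // 6)
    # Limbus: a larger, lower-contrast circle bounding the iris.
    limbus = cv2.HoughCircles(blurred, cv2.HOUGH_GRADIENT, dp=1, minDist=h,
                              param1=100, param2=30,
                              minRadius=h // 6, maxRadius=h // 2)
    if pupil is None or limbus is None:
        return None
    return tuple(pupil[0][0]), tuple(limbus[0][0])
```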
Further, to obtain biometric information (e.g., eye pose), algorithms exist for tracking the eye movements of a user of a computer. For example, a camera coupled to a monitor of the computer can provide images for identifying eye movements. However, the cameras used for eye tracking are some distance from the user's eyes. For example, the camera may be placed at the top of the user's monitor coupled to the computer. As a result, the images of the eyes produced by the camera often have poor resolution. Accordingly, determining the eye pose of the user may present challenges.
With the techniques disclosed herein, eye image processing may be used to substantially identify the pointing direction of the eye, and additionally or alternatively enhance the resolution of the eye image used for iris code generation. Embodiments of eye image processing described herein may be advantageously used to combine various eye pose images into a single eye image representing a portion of each eye pose image. Additionally, in some embodiments, eye image processing may utilize graphics to obtain eye images for various eye poses. Such obtained eye images may be analyzed to determine whether an image quality metric of the obtained eye images passes an image quality threshold. The image quality threshold may correspond to a value associated with generation of an iris code for an eye image. Thus, the obtained set of eye images may be selected for eye image processing, such as combining the eye pose images into a single eye pose image.
In the context of a wearable head mounted display (HMD), a camera may be closer to the user's eyes than a camera coupled to a user's monitor. For example, the camera may be mounted on the wearable HMD, which itself is worn on the user's head. The proximity of the eyes to such a camera can result in higher resolution eye images. Accordingly, computer vision techniques can extract visual features from the user's eyes, particularly features at or around the iris (e.g., iris features) or in the sclera surrounding the iris (e.g., scleral features). For example, when viewed by a camera near the eye, the iris of an eye will show detailed structures. Such iris features are particularly pronounced when observed under infrared illumination and can be used for biometric identification. These iris features are unique from user to user and, in the manner of a fingerprint, can be used to identify the user uniquely. Eye features can include blood vessels in the sclera of the eye (outside the iris), which may also appear particularly pronounced when viewed under red or infrared light. Such distinct iris features, viewed at higher resolution, may lead to more unique or accurate iris codes being generated for the various eye pose images.
Wearable display system example
Fig. 2 shows an example of a wearable display system 200 that may be used to present a VR, AR, or MR experience to a display system wearer or viewer 204. Wearable display system 200 may be programmed to perform eye image processing to provide any of the applications or embodiments described herein. The display system 200 includes a display 208, and various mechanical and electronic modules and systems that support the functionality of the display 208. The display 208 may be coupled to a frame 212, the frame 212 being wearable by a display system user, wearer, or viewer 204 and configured to position the display 208 in front of the eyes of the wearer 204. The display 208 may be a light field display. In some embodiments, a speaker 216 is coupled to the frame 212 and positioned near the ear canal of the user. In some embodiments, another speaker, not shown, is positioned near another ear canal of the user to provide stereo/shapeable sound control. The display 208 is operatively coupled 220, such as by a wired lead or wireless connection, to a local data processing module 224, which local data processing module 224 may be mounted in various configurations, such as fixedly attached to the frame 212, fixedly attached to a helmet or hat worn by the user, embedded in headphones, or otherwise removably attached to the user 204 (e.g., in a backpack configuration, in a belt-coupled configuration).
The frame 212 may have one or more cameras attached or mounted to the frame 212 to obtain images of one or both eyes of the wearer. In one embodiment, the camera may be mounted to the frame 212 in front of the wearer's eyes so that the eyes may be directly imaged. In other embodiments, the camera may be mounted along a rod of the frame 212 (e.g., near the ear of the wearer). In such embodiments, the display 208 may be coated with a material that reflects light from the wearer's eye back to the camera. The light may be infrared light because iris features are prominent in the infrared image.
The local processing and data module 224 may include a hardware processor and a non-transitory digital memory such as a non-volatile memory (e.g., flash memory), both of which may be used to facilitate processing, caching, and storage of data. The data may include the following: a) data captured from sensors (which may, for example, be operatively coupled to the frame 212 or otherwise attached to the user 204), such as image capture devices (such as cameras), microphones, inertial measurement units, accelerometers, compasses, GPS units, radios, and/or gyroscopes; and/or b) data acquired and/or processed using the remote processing module 228 and/or the remote data repository 232, perhaps communicated to the display 208 after such processing or retrieval. The local processing and data module 224 may be operatively coupled to the remote processing module 228 and the remote data repository 232 by communication links 236 and/or 240 (such as via wired or wireless communication links) such that these remote modules 228, 232 are available as resources to the local processing and data module 224. The image capture device may be used to capture an eye image for use in an eye image processing procedure. Additionally, the remote processing module 228 and the remote data repository 232 may be operatively coupled to each other.
In some embodiments, the remote processing module 228 may include one or more processors configured to analyze and process data and/or image information, such as video information captured by an image capture device. The video data may be stored locally in the local processing and data module 224 and/or in the remote data repository 232. In some embodiments, the remote data repository 232 may include a digital data storage facility, which may be available in a "cloud" resource configuration through the internet or other network configuration. In some embodiments, all data is stored in the local processing and data module 224 and all calculations are performed, allowing for fully autonomous use from the remote module.
In some implementations, the local processing and data module 224 and/or the remote processing module 228 are programmed to perform embodiments of obtaining an eye image or processing an eye image as described herein. For example, the local processing and data module 224 and/or the remote processing module 228 may be programmed to execute embodiments of the routines 700, 900, or 1000 described with reference to fig. 7, 9, and 10, respectively. The local processing and data module 224 and/or the remote processing module 228 may be programmed to use the eye image processing techniques disclosed herein in biometric extraction, for example, to identify or authenticate the identity of the wearer 204, or in pose estimation, for example, to determine the direction in which each eye is looking. The image capture device may capture video for a particular application (e.g., video of the wearer's eyes for an eye tracking application, or video of a hand or finger of the wearer for a gesture recognition application). The video may be analyzed using the eye image processing techniques by one or both of the processing modules 224, 228. From this analysis, the processing modules 224, 228 may perform eye image selection, eye image acquisition, eye image combination, and/or biometric extraction, among other things. As an example, the local processing and data module 224 and/or the remote processing module 228 may be programmed to acquire eye images from a camera attached to the frame 212 (e.g., routine 900). In addition, the local processing and data module 224 and/or the remote processing module 228 may be programmed to process the eye images using the eye image set selection techniques (e.g., routine 700) or the eye image combination techniques (e.g., routine 1000) described herein to facilitate generating an iris code or identifying an eye pose of a wearer of the wearable display system 200. In some cases, offloading at least part of the iris code generation to a remote processing module (e.g., in the "cloud") may improve the efficiency or speed of the computation. As another example, some portions of the techniques, such as the merging of eye images, may be offloaded to a remote processing module.
The results of the video analysis (e.g., an estimated eye pose) may be used by one or both of the processing modules 224, 228 for additional operations or processing. For example, in various applications, the wearable display system 200 may use biometric identification, eye tracking, or recognition or classification of objects, gestures, and so forth. For example, video of the wearer's eyes may be used to obtain eye images, which in turn may be used by the processing modules 224, 228 to generate an iris code for an eye of the wearer 204 of the display 208. The processing modules 224, 228 of the wearable display system 200 may be programmed with one or more embodiments of eye image processing to perform any of the video or image processing applications described herein.
The human visual system is complicated, and providing a realistic perception of depth is challenging. Without being limited by theory, it is believed that viewers of an object may perceive the object as being three-dimensional due to a combination of vergence and accommodation. Vergence movements (i.e., rolling movements of the pupils toward or away from each other to converge the lines of sight of the eyes to fixate upon an object) of the two eyes relative to each other are closely associated with the focusing (or "accommodation") of the lenses of the eyes. Under normal conditions, changing the focus of the lenses of the eyes, or accommodating the eyes, to change focus from one object to another object at a different distance will automatically cause a matching change in vergence to the same distance, under a relationship known as the "accommodation-vergence reflex." Likewise, under normal conditions, a change in vergence will trigger a matching change in accommodation. Display systems that provide a better match between accommodation and vergence may form more realistic or comfortable simulations of three-dimensional imagery.
FIG. 3 illustrates aspects of a method of simulating a three-dimensional image using multiple depth planes. Referring to fig. 3, objects at different distances from the eyes 302 and 304 on the z-axis are accommodated by the eyes 302 and 304 such that those objects are in focus. Eyes 302 and 304 exhibit a particular state of accommodation such that objects at different distances along the z-axis come into focus. Thus, it can be said that a particular accommodation state is associated with a particular one of the depth planes 306, which has an associated focal length, such that an object or part of an object in the particular depth plane is focused when the eye is in the accommodation state of that depth plane. In some embodiments, the three-dimensional image may be simulated by providing a different presentation of the image for each of the eyes 302 and 304, and also by providing a different presentation of the image corresponding to each of the depth planes. Although shown as separate for clarity of illustration, it is understood that the fields of view of eye 302 and eye 304 may overlap, for example, as the distance along the z-axis increases. Additionally, while shown as flat for ease of illustration, it is understood that the profile of the depth plane may be curved in physical space such that all features in the depth plane are in focus with the eye under certain accommodation conditions. Without being limited by theory, it is believed that the human eye can typically interpret a limited number of depth planes to provide depth perception. Thus, by providing the eye with different presentations of the image corresponding to each of these limited number of depth planes, a highly reliable simulation of perceived depth may be achieved.
Waveguide Stack Assembly example
Fig. 4 shows an example of a waveguide stack for outputting image information to a user. Display system 400 includes a stack of waveguides, or stacked waveguide assembly 405, that can be used to provide three-dimensional perception to the eye 410 or brain using a plurality of waveguides 420, 422, 424, 426, 428. In some embodiments, the display system 400 corresponds to the system 200 of fig. 2, with fig. 4 schematically showing some parts of that system 200 in greater detail. For example, in some embodiments, the waveguide assembly 405 may be integrated into the display 208 of fig. 2.
With continued reference to fig. 4, the waveguide assembly 405 may also include a plurality of features 430, 432, 434, 436 between the waveguides. In some embodiments, the features 430, 432, 434, 436 may be lenses. In other embodiments, the features 430, 432, 434, 436 may not be lenses; rather, they may simply be spacers (e.g., cladding layers and/or structures for forming air gaps).
The waveguides 420, 422, 424, 426, 428 and/or the plurality of lenses 430, 432, 434, 436 may be configured to send image information to the eye with various levels of wavefront curvature or light ray divergence. Each waveguide level may be associated with a particular depth plane and may be configured to output image information corresponding to that depth plane. Image injection devices 440, 442, 444, 446, 448 may be utilized to inject image information into the waveguides 420, 422, 424, 426, 428, each of which may be configured to distribute incoming light across each respective waveguide for output toward the eye 410. Light exits an output surface of the image injection devices 440, 442, 444, 446, 448 and is injected into a corresponding input edge of the waveguides 420, 422, 424, 426, 428. In some embodiments, a single beam of light (e.g., a collimated beam) may be injected into each waveguide so that the entire field of cloned collimated beams is output toward the eye 410 at particular angles (and amounts of divergence) corresponding to the depth plane associated with that particular waveguide.
In some embodiments, the image injection devices 440, 442, 444, 446, 448 are discrete displays, each of which produces image information for injection into a corresponding waveguide 420, 422, 424, 426, 428, respectively. In some other embodiments, the image injection devices 440, 442, 444, 446, 448 are the outputs of a single multiplexed display that may convey image information to each of the image injection devices 440, 442, 444, 446, 448, for example, via one or more optical conduits (such as fiber optic cables).
The controller 450 controls the operation of the stacked waveguide assembly 405 and the image injection devices 440, 442, 444, 446, 448. In some embodiments, controller 450 includes programming (e.g., instructions in a non-transitory computer-readable medium) that adjusts the timing and provision of image information to waveguides 420, 422, 424, 426, 428. In some embodiments, the controller 450 may be a single unitary device, or a distributed system connected by wired or wireless communication channels. In some embodiments, the controller 450 may be part of the processing module 224 or 228 (shown in fig. 2). In some embodiments, the controller may be in communication with an inward facing imaging system 452 (e.g., a digital camera), an outward facing imaging system 454 (e.g., a digital camera), and/or a user input device 466. An inward facing imaging system 452 (e.g., a digital camera) may be used to capture images of the eye 410, for example, to determine the size and/or orientation of the pupil of the eye 410. An outward facing imaging system 454 may be used to image a portion of the world 456. A user may enter commands into the controller 450 via the user input device 466 to interact with the display system 400.
Waveguides 420, 422, 424, 426, 428 may be configured to propagate light within each respective waveguide by Total Internal Reflection (TIR). The waveguides 420, 422, 424, 426, 428 may each be planar or have other shapes (e.g., curved), with top and bottom major surface and edges extending between the top and bottom major surfaces. In the illustrated configuration, waveguides 420, 422, 424, 426, 428 may each include light extraction optics 460, 462, 464, 466, 468 configured to extract light out of the waveguides by redirecting light propagating within each respective waveguide to output image information to eye 410. The extracted light may also be referred to as outcoupled light and the light extracting optical element may also be referred to as an outcoupled optical element. The extracted light beam is output by the waveguide at a location where light propagating in the waveguide strikes the light redirecting element. The light extraction optics (460, 462, 464, 466, 468) may be, for example, reflective and/or diffractive optical features. Although illustrated as being disposed at the bottom major surface of the waveguides 420, 422, 424, 426, 428 for ease of description and clarity of drawing, in some embodiments the light extraction optical elements 460, 462, 464, 466, 468 may be disposed at the top and/or bottom major surfaces and/or may be disposed directly in the volume of the waveguides 420, 422, 424, 426, 428. In some embodiments, the light extraction optical elements 460, 462, 464, 466, 468 may be formed in a layer of material attached to a transparent substrate to form the waveguides 420, 422, 424, 426, 428. In some other embodiments, the waveguides 420, 422, 424, 426, 428 may be a single piece of material, and the light extraction optical elements 460, 462, 464, 466, 468 may be formed on a surface of the piece of material and/or in an interior of the piece of material.
With continued reference to fig. 4, as discussed herein, each waveguide 420, 422, 424, 426, 428 is configured to output light to form an image corresponding to a particular depth plane. For example, the waveguide 420 nearest the eye may be configured to deliver collimated light, as injected into such waveguide 420, to the eye 410. The collimated light may be representative of the optical infinity focal plane. The next waveguide up 422 may be configured to send out collimated light that passes through a first lens 430 (e.g., a negative lens) before it can reach the eye 410. The first lens 430 may be configured to create a slight convex wavefront curvature so that the eye/brain interprets light coming from the next waveguide up 422 as coming from a first focal plane closer inward toward the eye 410 from optical infinity. Similarly, the third waveguide up 424 passes its output light through both the first lens 430 and the second lens 432 before reaching the eye 410. The combined optical power of the first lens 430 and the second lens 432 may be configured to create another incremental amount of wavefront curvature so that the eye/brain interprets light coming from the third waveguide 424 as coming from a second focal plane that is even closer inward toward the person from optical infinity than was light from the next waveguide up 422.
The other waveguide layers (e.g., waveguides 426, 428) and lenses (e.g., lenses 434, 436) are similarly configured, with the highest waveguide 428 in the stack sending its output through all of the lenses between it and the eye for an aggregate focal power representative of the focal plane closest to the person. To compensate for the stack of lenses 430, 432, 434, 436 when viewing/interpreting light coming from the world 456 on the other side of the stacked waveguide assembly 405, a compensating lens layer 438 may be disposed at the top of the stack to compensate for the aggregate power of the lens stack 430, 432, 434, 436 below. Such a configuration provides as many perceived focal planes as there are available waveguide/lens pairings. The focusing aspects of the light extraction optical elements 460, 462, 464, 466, 468 of the waveguides 420, 422, 424, 426, 428 and of the lenses 430, 432, 434, 436 may be static (e.g., not dynamic or electro-active). In some alternative embodiments, either or both may be dynamic using electro-active features.
With continued reference to fig. 4, the light extraction optics 460, 462, 464, 466, 468 may be configured to redirect light out of their respective waveguides and output that light with an appropriate amount of divergence or collimation for the particular depth plane associated with the waveguide. As a result, waveguides having different associated depth planes may have different configurations of light extraction optical elements that output light having different amounts of divergence depending on the associated depth plane. In some embodiments, as discussed herein, the light extraction optical elements 460, 462, 464, 466, 468 may be volume or surface features that may be configured to output light at a particular angle. For example, the light extraction optical elements 460, 462, 464, 466, 468 may be volume holograms, surface holograms, and/or diffraction gratings. Light extraction optical elements such as diffraction gratings are described in U.S. patent publication No.2015/0178939, published on 25/6/2015, which is incorporated herein by reference in its entirety. In some embodiments, the features 430, 432, 434, 436, 438 may not be lenses. Rather, they may simply be spacers (e.g., cladding and/or structures for forming an air gap).
In some embodiments, the light extraction optical elements 460, 462, 464, 466, 468 are diffractive features that form a diffractive pattern or "diffractive optical element" (also referred to herein as a "DOE"). Preferably, the DOE has a relatively low diffraction efficiency, such that only a portion of the beam is deflected by each intersection of the DOE towards the eye 410, while the remainder continues to move through the waveguide via total internal reflection. The light carrying the image information is thus split into a plurality of related exit beams that exit the waveguide at a plurality of locations, and the result is a fairly uniform pattern of exit emissions towards the eye 410 for that particular collimated beam bouncing within the waveguide.
In some embodiments, one or more DOEs may be switchable between an "on" state in which they actively diffract and an "off" state in which they do not significantly diffract. For instance, a switchable DOE may comprise a layer of polymer dispersed liquid crystal in which microdroplets comprise a diffraction pattern in a host medium, and the refractive index of the microdroplets can be switched to substantially match the refractive index of the host material (in which case the pattern does not appreciably diffract incident light), or the microdroplets can be switched to an index that does not match that of the host medium (in which case the pattern actively diffracts incident light).
In some embodiments, the number and distribution of depth planes and/or depth of field may be dynamically changed based on the pupil size and/or orientation of the viewer's eyes. In some embodiments, an inward facing imaging system 452 (e.g., a digital camera) may be used to capture images of the eye 410 to determine the size and/or orientation of the pupil of the eye 410. In some embodiments, inward facing imaging system 452 may be attached to frame 212 (as shown in fig. 2) and may be in electrical communication with processing module 224 and/or 228, which processing module 224 and/or 228 may process image information from inward facing imaging system 452 to determine, for example, a pupil diameter or orientation of an eye of user 204.
In some embodiments, the inward facing imaging system 452 (e.g., a digital camera) may observe movements of the user, such as eye movements and facial movements. The inward facing imaging system 452 may be used to capture images of the eye 410 to determine the size and/or orientation of the pupil of the eye 410. The inward facing imaging system 452 may be used to obtain images for use in determining the direction in which the user is looking (e.g., eye pose) or for biometric identification of the user (e.g., via iris recognition). The images obtained by the inward facing imaging system 452 may be analyzed to determine the user's eye pose and/or mood, which may be used by the display system 400 to decide which audio or visual content should be presented to the user. The display system 400 may also determine head pose (e.g., head position or head orientation) using sensors such as inertial measurement units (IMUs), accelerometers, gyroscopes, and the like. Head pose may be used alone or in combination with eye pose to interact with audio stem tracks and/or to present audio content.
In some embodiments, one camera may be utilized for each eye to separately determine the pupil size and/or orientation of that eye, thereby allowing the image information presented to each eye to be dynamically tailored to that eye. In some embodiments, at least one camera may be utilized for each eye to independently determine the pupil size and/or eye pose of each eye separately, again allowing the image information presented to each eye to be dynamically tailored to that eye. In some other embodiments, the pupil diameter and/or orientation of only a single eye 410 is determined (e.g., using only a single camera per pair of eyes) and is assumed to be similar for both eyes of the viewer 204.
For example, the depth of field may change inversely with the pupil size of the viewer. As a result, as the size of the pupil of the viewer's eye decreases, the depth of field increases, such that a plane that is not discernible because its location is beyond the depth of focus of the eye may become discernible and appear more in focus as the pupil size decreases and the depth of field correspondingly increases. Likewise, the number of spaced-apart depth planes used to present different images to the viewer may decrease with decreased pupil size. For example, a viewer may not be able to clearly perceive the details of both a first depth plane and a second depth plane at one pupil size without adjusting the accommodation of the eye away from one depth plane and to the other depth plane. These two depth planes may, however, be sufficiently in focus at the same time for the user at another pupil size without changing accommodation.
In some embodiments, the display system may vary the number of waveguides receiving image information based upon determinations of pupil size and/or orientation, or upon receiving electrical signals indicative of a particular pupil size and/or orientation. For example, if the user's eyes are unable to distinguish between two depth planes associated with two waveguides, the controller 450 may be configured or programmed to cease providing image information to one of these waveguides. Advantageously, this may reduce the processing burden on the system, thereby increasing the responsiveness of the system. In embodiments in which the DOEs for a waveguide are switchable between on and off states, the DOEs may be switched to the off state when the waveguide is not receiving image information.
In some embodiments, it may be desirable for the outgoing light beam to conform to a condition that the diameter is smaller than the diameter of the viewer's eye. However, meeting such conditions can be challenging given the variability of the pupil size of the viewer. In some embodiments, this condition is satisfied over a wide range of pupil sizes by varying the size of the emergent beam in response to a determination of the pupil size of the viewer. For example, as the pupil size decreases, the size of the exiting beam may also decrease. In some embodiments, an iris diaphragm may be used to vary the outgoing beam size.
The display system 400 may include an outward facing imaging system 454 (e.g., a digital camera) that images a portion of the world 456. This portion of the world 456 may be referred to as the field of view (FOV), and the imaging system 454 is sometimes referred to as a FOV camera. The entire area available for viewing or imaging by the viewer 204 may be referred to as the field of regard (FOR). The FOR may comprise 4π steradians of solid angle around the display system 400. In some implementations of the display system 400, the FOR may include substantially all of the solid angle around the user of the display system 400 because the user 204 may move their head and eyes to view objects around the user 204 (located in front of, behind, above, below, or beside the user). Images obtained from the outward facing imaging system 454 may be used to track gestures made by the user (e.g., hand or finger gestures), detect objects in the world 456 in front of the user, and so forth.
The display system 400 may include a user input device 466 by which a user may input commands to the controller 450 to interact with the display system 400. For example, user input devices 466 may include a touch pad, touch screen, joystick, multiple degree of freedom (DOF) controller, capacitive sensing device, game controller, keyboard, mouse, directional pad (D-pad), wand, haptic device, totem (e.g., for use as a virtual user input device), and so forth. In some cases, a user may press or slide on a touch-sensitive input device using a finger (e.g., a thumb) to provide input to the display system 400 (e.g., provide user input to a user interface provided by the display system 400). The user input device 466 may be held by a user's hand during use of the display system 400. The user input device 466 may be in wired or wireless communication with the display system 400.
Fig. 5 shows an example of an outgoing light beam output by a waveguide. One waveguide is shown, but it should be understood that other waveguides in waveguide assembly 405 may function similarly, where waveguide assembly 405 includes a plurality of waveguides. Light 505 is injected into waveguide 420 at input edge 510 of waveguide 420 and propagates within waveguide 420 by Total Internal Reflection (TIR). At the point where light 505 impinges on a Diffractive Optical Element (DOE)460, a portion of the light exits the waveguide as an exit beam 515. The exit light beam 515 is shown as being substantially parallel, but depending on the depth plane associated with the waveguide 420, the exit light beam 515 may also be redirected at an angle (e.g., to form a diverging exit light beam) to propagate to the eye 410. It should be understood that the substantially parallel exit beams may be indicative of a waveguide having light extraction optics that outcouple the light to form an image that appears to be disposed on a depth plane at a large distance (e.g., optical infinity) from the eye 410. Other waveguides or other groups of light extraction optics may output a more divergent exit beam pattern, which would require the eye 410 to accommodate a closer distance to focus it on the retina and would be interpreted by the brain as light coming from a distance closer to the eye 410 than optical infinity.
Fig. 6 shows another example of a display system 400 that includes a waveguide device, an optical coupler subsystem that optically couples light to or from the waveguide device, and a control subsystem. The display system 400 may be used to generate a multi-focal stereo, image, or light field. Display system 400 may include one or more primary planar waveguides 604 (only one shown in fig. 6) and one or more DOEs 608 associated with each of at least some of primary waveguides 604. Planar waveguide 604 may be similar to waveguides 420, 422, 424, 426, 428 discussed with reference to fig. 4. The optical system may relay light along a first axis (the vertical or Y axis shown in fig. 6) using a distributed waveguide device and expand the effective exit pupil of the light along the first axis (e.g., the Y axis). The distributed waveguide apparatus may, for example, include a distribution planar waveguide 612 and at least one DOE 616 (shown by dashed double-dotted lines) associated with the distribution planar waveguide 612. The distribution planar waveguide 612 may be similar or identical to, but have a different orientation than, the primary planar waveguide 604 in at least some respects. Similarly, at least one DOE 616 may be similar or identical to DOE 608 in at least some respects. For example, distribution planar waveguide 612 and/or DOE 616 may be composed of the same material as primary planar waveguide 604 and/or DOE 608, respectively. The optical system shown in fig. 6 may be integrated into the wearable display system 200 shown in fig. 2.
The relayed and exit-pupil-expanded light is optically coupled from the distribution waveguide arrangement into the one or more primary planar waveguides 604. The primary planar waveguide 604 relays light along a second axis (e.g., the horizontal or X-axis in the view of fig. 6), which is preferably orthogonal to the first axis. Notably, the second axis may be a non-orthogonal axis to the first axis. The primary planar waveguide 604 expands the effective exit pupil of the light along the second axis (e.g., the X-axis). For example, the distribution planar waveguide 612 may relay and expand light along the vertical or Y-axis and deliver the light to the primary planar waveguide 604, which relays and expands the light along the horizontal or X-axis.
The display system 400 may include one or more colored light sources (e.g., red, green, and blue lasers) 620, which may be optically coupled into the proximal end of a single mode optical fiber 624. The distal end of the optical fiber 624 may be passed or received through a hollow tube 628 of piezoelectric material. The distal end protrudes from the tube 628 as an unsecured flexible cantilever 632. Piezoelectric tubes 628 may be associated with four quadrant electrodes (not shown). For example, the electrodes may be plated on the outside, outer surface or periphery or diameter of the tubes 628. A core electrode (not shown) is also located in the core, center, inner circumference, or inner diameter of the tube 628.
Drive electronics 636, electrically coupled via wires 640 for example, drive opposing pairs of electrodes to independently bend piezoelectric tubes 628 in two axes. The protruding distal tip of the optical fiber 624 has a mechanical resonance mode. The frequency of resonance may depend on the diameter, length, and material properties of the optical fiber 624. By vibrating the piezoelectric tube 628 near the first mechanical resonance mode of the fiber cantilever 632, the fiber cantilever 632 is made to vibrate and can sweep a large deflection.
By exciting resonances in two axes, the tip of the fiber cantilever 632 is scanned bi-axially throughout the area of the two-dimensional (2-D) scan. By modulating the intensity of the one or more light sources 620 in synchronization with the scanning of the fiber optic cantilever 632, the light exiting the fiber optic cantilever 632 forms an image. A description of such an arrangement is provided in U.S. patent publication No.2014/0003762, which is incorporated herein by reference in its entirety.
Component 644 of the optical coupler subsystem collimates the light exiting the scanning fiber cantilever 632. The collimated light is reflected by a mirror 648 into a narrow distribution planar waveguide 612 that contains at least one Diffractive Optical Element (DOE) 616. The collimated light propagates vertically (relative to the view of fig. 6) along the distribution planar waveguide 612 by total internal reflection and repeatedly intersects with the DOE 616. The DOE 616 preferably has low diffraction efficiency. This causes a fraction of the light (e.g., 10%) to be diffracted toward the edge of the larger primary planar waveguide 604 at each point of intersection with the DOE 616, while a fraction of the light continues on its original trajectory down the length of the distribution planar waveguide 612 via TIR.
At each point of intersection with the DOE 616, additional light is diffracted toward the entrance of the primary waveguide 604. By dividing the incoming light into multiple outcoupled sets, the exit pupil of the light is expanded vertically by the DOE 616 in the distribution planar waveguide 612. This vertically expanded light coupled out of the distribution planar waveguide 612 enters the edge of the primary planar waveguide 604.
Light entering the primary waveguide 604 propagates horizontally (relative to the view of fig. 6) along the primary waveguide 604 via TIR. As the light propagates horizontally along at least a portion of the length of the primary waveguide 604 by TIR, it intersects the DOE 608 at multiple points. The DOE 608 may advantageously be designed or configured to have a phase profile that is the sum of a linear diffraction pattern and a radially symmetric diffraction pattern, to produce both deflection and focusing of the light. The DOE 608 may advantageously have a low diffraction efficiency (e.g., 10%) such that only a portion of the light of the beam is deflected toward the eye of the viewer at each intersection with the DOE 608, while the rest of the light continues to propagate through the waveguide 604 via TIR.
At each intersection between the propagating light and DOE 608, a portion of the light is diffracted towards the adjacent face of primary waveguide 604, allowing the light to escape TIR and exit the face of primary waveguide 604. In some embodiments, the radially symmetric diffraction pattern of the DOE 608 additionally imparts a level of focus to the diffracted light, both shaping the optical wavefront of the individual beams (e.g., imparting curvature) and steering the beams at an angle that matches the designed level of focus.
These different pathways may therefore couple light out of the primary planar waveguide 604 by means of the multiple DOEs 608 at different angles and focus levels, and/or may yield different fill patterns at the exit pupil. Different fill patterns at the exit pupil may be advantageously used to create a light field display having multiple depth planes. Each layer in the waveguide assembly, or a set of layers (e.g., 3 layers) in the stack, can be used to produce a respective color (e.g., red, blue, green). Thus, for example, a first set of three adjacent layers may be employed to produce red, blue, and green light, respectively, at a first focal depth. A second set of three adjacent layers may be employed to produce red, blue, and green light, respectively, at a second focal depth. Multiple sets can be employed to produce a full 3D or 4D color image light field with various depths of focus.
Eye image set selection example
A camera, such as inward facing imaging system 452 (see, e.g., fig. 4), may be used to image an eye of a wearer of a Head Mounted Display (HMD) (e.g., wearable display system 200 shown in fig. 2 or display systems 400 in fig. 4 and 6). An eye image set selection technique may be used to select certain eye images obtained from the camera. The selected set of eye images may be used for various biometric applications. For example, in some implementations, images captured by the inward-facing imaging system 452 can be used to determine eye pose (e.g., orientation of one or both eyes of the wearer) or to generate or determine iris codes.
The local processing and data module 224 and/or the remote data repository 232 in fig. 2 may store image files, audio files, or video files. For example, in various embodiments, the data module 224 and/or the remote data repository 232 may store a plurality of eye images to be processed by the local processing and data module 224. The local processing and data module 224 and/or the remote processing module 228 may be programmed to use the eye image set selection techniques disclosed herein in biometric extraction, for example, to identify or authenticate the identity of the wearer 204, or in pose estimation, for example, to determine the direction in which each eye is looking. For example, the processing modules 224, 228 may be caused to perform aspects of eye image set selection. Additionally or alternatively, the controller 450 in fig. 4 may be programmed to perform aspects of eye image set selection.
Referring to fig. 2, an image capture device may capture video for a particular application (e.g., video of a wearer's eyes for an eye tracking application or video of a wearer's hand or fingers for a gesture recognition application). The video may be analyzed by one or both of the processing modules 224, 228 using an eye image set selection technique. From this analysis, the processing modules 224, 228 may perform eye gesture recognition or detection and/or biometric extraction, etc. For example, the local processing and data module 224 and/or the remote processing module 228 may be programmed to store eye images obtained from a camera attached to the frame 212. Further, the local processing and data module 224 and/or the remote processing module 228 may be programmed to process the eye images using the techniques described herein (e.g., routine 700) to select a set of eye images for a wearer of the wearable display system 200. In some cases, offloading at least some of the eye image set selections to a remote processing module (e.g., in the "cloud") may improve the efficiency or speed of the computation. Such eye image set selection may facilitate removal of focus errors in the eye image, illumination effects present in the eye image, or any other image distortions present in the eye image. To facilitate removal of such distortions, a quantitative representation of the iris in the eye image may be used as a measure of the quality of the eye image. For example, the iris code may be associated with a quality metric of the eye image.
In general, the iris of an eye (e.g., as obtained in an eye image) may be mapped (e.g., "unfolded") to a polar coordinate representation system having a radial coordinate r and an angular coordinate θ. This representation in the polar coordinate system of the iris region may be referred to as the iris code of the portion of the eye image. Or in another embodiment, the iris may first be segmented to have two angular dimensions mapped to a polar coordinate representation system. Thus, in either embodiment, the iris code may be extracted, generated, determined, or calculated from the image. As an example of an iris code of an iris in a polar coordinate system, a shift of an eye feature may be measured in pixels, which may be converted into a measure of angular coordinates, e.g. in degrees.
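The polar mapping described above can be pictured with a short sketch. The following Python fragment is only an illustrative sketch, not the implementation used by the display system; it assumes the pupil and iris boundaries have already been located as concentric circles (a shared center plus an inner and an outer radius), samples the iris annulus on an (r, θ) grid, and uses bilinear interpolation to build the unwrapped polar band:

import numpy as np
from scipy.ndimage import map_coordinates

def unwrap_iris(eye_image, center, pupil_radius, iris_radius,
                n_radial=64, n_angular=256):
    # Map the iris annulus onto a polar (r, theta) grid ("rubber sheet").
    # eye_image: 2-D grayscale array; center: (row, col) of the assumed shared center.
    radii = np.linspace(pupil_radius, iris_radius, n_radial)
    thetas = np.linspace(0.0, 2.0 * np.pi, n_angular, endpoint=False)
    r_grid, t_grid = np.meshgrid(radii, thetas, indexing="ij")
    rows = center[0] + r_grid * np.sin(t_grid)
    cols = center[1] + r_grid * np.cos(t_grid)
    # Bilinear interpolation of the source pixels at the polar sample points.
    return map_coordinates(eye_image, [rows, cols], order=1, mode="nearest")

In this sketch, a shift of an eye feature along the angular axis of the unwrapped band corresponds to a pixel offset that can be converted to degrees (360 divided by n_angular degrees per column).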
The iris code may be calculated in various ways. For example, in some embodiments, the iris code may be generated according to an algorithm developed by John Daugman for iris biometrics (see, e.g., U.S. Pat. No. 5,291,560). For example, the iris code may be based on a convolution of the iris image (in polar coordinates) with a two-dimensional band-pass filter (e.g., a Gabor filter), and the iris code may be represented as a two-bit number (e.g., whether the response of a particular Gabor filter is positive or negative).
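As a rough illustration of the two-bit quantization mentioned above, the following sketch convolves an unwrapped polar iris band with a single complex Gabor kernel and keeps only the signs of the real and imaginary responses as the two bits per location. The kernel parameters are arbitrary assumptions for illustration; a practical encoder such as Daugman's uses a tuned bank of filters at multiple scales:

import numpy as np
from scipy.signal import fftconvolve

def gabor_kernel(size=15, wavelength=8.0, sigma=4.0):
    # Complex 2-D Gabor kernel whose carrier runs along the angular axis.
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    envelope = np.exp(-(x ** 2 + y ** 2) / (2.0 * sigma ** 2))
    carrier = np.exp(1j * 2.0 * np.pi * x / wavelength)
    return envelope * carrier

def iris_code_bits(polar_iris):
    # Two bits per sample: the signs of the real and imaginary filter responses.
    response = fftconvolve(polar_iris.astype(float), gabor_kernel(), mode="same")
    return np.stack([response.real > 0, response.imag > 0], axis=-1)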
The iris code may reflect the image quality of the eye image. For example, from a probabilistic perspective, when a higher quality image is used to generate the iris code, the iris code may have fewer errors in the coded bits. Thus, it may be desirable to obtain an eye image having an image quality that passes a particular image quality threshold. Various image quality metrics may be used to assess the quality of the eye image. For example, an eye image may have various quality factors associated with the image, including but not limited to: resolution (e.g., iris resolution), focus, defocus, sharpness, blur, unobstructed or obstructed pixels (e.g., obstructed by eyelashes or eyelids), glare, glint (e.g., corneal reflections), noise, dynamic range, hue reproduction, brightness, contrast (e.g., gamma), color accuracy, color saturation, whiteness, distortion, vignetting, exposure accuracy, lateral chromatic aberration, lens flare, artifacts (e.g., software processing artifacts such as during RAW conversion), and color moire.
Each of the quality factors may have a quality metric associated with a metric of the quality factor. Thus, a relationship between a certain quality metric and the number of errors in the iris code may be determined (e.g., by calibration using a standard eye image). For example, an image with less blur (e.g., an eye that moves less relative to a reference eye image when captured) may have a smaller number of errors in the corresponding iris code of the image, indicating a higher quality factor for blur. As another example, the amount of unobstructed pixels in an image may correspond in proportion to the number of errors in the corresponding iris code of the image (e.g., a greater number of unobstructed pixels may result in a proportionally lower number of errors in the corresponding iris code). Furthermore, when the user blinks or moves away from the camera, the amount of unobstructed pixels may decrease, resulting in a lower quality factor for unobstructed pixels. The amount of pixels that are occluded (or not occluded) can be quantified as a number or percentage of pixels, the area of the image that is occluded (or not occluded), and so forth.
As these examples illustrate, any eye image may be used to compute an image quality metric (e.g., a real-valued number) q that reflects the quality of the eye image. In many cases, q is higher for higher quality images (e.g., the q for unobstructed pixels may increase as the amount of unobstructed pixels increases), and high quality images include those images having a q value that passes (increases above) a quality threshold. In other cases, q is lower for higher quality images (e.g., the q for occluded pixels may decrease as the amount of occluded pixels decreases), and high quality images include those images having a q value that passes (decreases below) the quality threshold.
In some implementations, the quality metric of the eye image may be a combination of multiple component quality metrics computed for the image. For example, the quality metric of the eye image may be a weighted sum of various component quality metrics. Such a quality metric may advantageously combine different types of image quality (e.g., amount of unobstructed pixels, resolution, and focus) into a single overall measure of image quality.
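The weighted combination and the direction-aware comparison described above can be sketched in a few lines; the metric names, weights, and the convention that larger values are better are assumptions for illustration only:

def overall_quality(component_metrics, weights):
    # Weighted sum of component quality metrics, e.g.
    # component_metrics = {"unobstructed": 0.9, "resolution": 0.7, "focus": 0.8}.
    return sum(weights[name] * value for name, value in component_metrics.items())

def passes_threshold(q, threshold, higher_is_better=True):
    # "Passing" means exceeding the threshold for metrics where larger is better,
    # and falling below it for metrics (such as amount of occluded pixels) where
    # smaller is better.
    return q > threshold if higher_is_better else q < threshold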
In some cases, perspective correction may be applied to the eye image (e.g., to reduce the perspective effect between the imaging camera and the eye). For example, the eye image may be perspective corrected so that the eye appears to be looking straight instead of looking from an angle. In some cases, perspective correction may improve the quality of the eye image. In some embodiments, a quality metric may be calculated from the perspective corrected eye image.
A quality metric associated with the eye image may be calculated or processed in the processing modules 224, 228. For example, in some implementations, the processing modules 224, 228 may determine an image quality metric associated with the obtained eye image. Additionally, various processing techniques associated with the eye images and a corresponding plurality of quality metrics for each eye image may be accomplished in the processing modules 224, 228. For example, each determined quality metric may be compared to an image quality threshold Q. The image quality threshold Q may be associated with a particular quality level of a particular quality metric. As just one example, the resolution (e.g., quality metric) of an eye image may be expressed in terms of the resolution of the iris, where the resolution of the iris is expressed as a distance in pixels. In many applications, the radial resolution of the iris is greater than about 70 pixels and may be in the range of 80 to 200 pixels in order to capture iris details. For example, for the radius of the iris, the image quality threshold may be 130 pixels.
Illustratively, an obtained eye image with an iris radius of 110 pixels may be compared to an image quality threshold of 130 pixels for the iris radius. Such an image does not pass the threshold and is therefore not selected as part of the set of eye images to be used in further processing. However, if the iris radius of the obtained eye image is 150 pixels, the obtained eye image may be selected as part of the image set for further eye image processing. For example, the obtained eye image may be used to generate an iris code. In other embodiments, the image quality metric may be the percentage of the iris visible between the eyelids. For example, a percentage below 50% may indicate that the eye is blinking when the image of the eye is captured. In some embodiments, an image may be selected if the image quality metric passes an image quality threshold represented as a percentage of 60%, 70%, 75%, 80%, 90%, or higher.
As can be seen from these examples, the image quality threshold may relate the image quality of an obtained eye image to the subsequent generation of an iris code: an obtained eye image that passes the image quality threshold may be selected as part of the image set used to generate the iris code, while an obtained eye image that does not pass the image quality threshold may not be selected. As described further below with respect to fig. 7, routine 700 depicts an exemplary workflow for processing such eye images to determine whether they pass an image quality threshold and whether such images are utilized in the generation of iris codes.
Although the foregoing example has been described as comparing the quality metric q to a particular image quality threshold Q, this is for illustration only and is not intended to be limiting. In other embodiments, any threshold comparison may be used in selecting the set of eye images. For example, the selected set of eye images may be those images i whose quality Qi lies within the top fraction p by size, where p may be, for example, 1%, 5%, 10%, 15%, 20%, 25%, 33%, or 50%. As another example, the selected eye image set may be a fixed number n of images having the highest scores Qi, where n may be 1, 2, 3, 4, 5, 10, or more. In some cases, only a single best quality image is used (e.g., n = 1). The image quality threshold may represent a level (e.g., A, B, C, D, or F), and images above a threshold level (e.g., B) may be used for analysis.
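The fixed-fraction and fixed-count selection rules just described can be written compactly. The sketch below assumes each image already carries a quality score Qi in which larger is better; it is illustrative only:

import numpy as np

def select_top_fraction(scores, p=0.10):
    # Indices of the images whose scores fall within the top fraction p.
    order = np.argsort(scores)[::-1]
    keep = max(1, int(np.ceil(p * len(scores))))
    return order[:keep]

def select_top_n(scores, n=3):
    # Indices of the n highest-scoring images (n = 1 keeps only the best image).
    return np.argsort(scores)[::-1][:n]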
When eye images are obtained in real time from an image capture device (e.g., inward facing imaging system 452), the selected set of eye images can be buffered into a memory buffer. For example, in one buffering implementation, the quality metric associated with each eye image in the buffer may be compared to the quality metric associated with an additional eye image that is a candidate to be added to the buffer. The quality metric of the additional eye image may be compared to the quality metrics of the eye images in the buffer to determine whether the additional eye image should be added to the buffer or should replace one of the previously buffered eye images. For example, if the quality metric associated with the additional eye image passes the quality metric associated with one of the buffered images having a lower quality metric, the additional eye image may replace that buffered image.
As an example in which iris radius is the quality metric, the buffered eye images may contain eye images having an iris radius between 132 pixels and 150 pixels. These buffered eye images may remain the "preferred" eye images until an additional eye image having an iris radius better than 132 pixels is obtained. In the case where the additional eye image has an iris radius of 145 pixels, the additional eye image may replace the image having an iris radius of 132 pixels. Thus, a "preferred" set of eye images may be maintained in the buffer used to generate the iris code. Although the foregoing example has been described in the context of buffering a "preferred" eye image set in a buffer, this is for illustration only and is not limiting. In other embodiments, any suitable buffering scheme may be used in buffering the eye images.
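One way to picture such a buffering scheme is a fixed-size buffer that always retains the highest-quality images seen so far and evicts its weakest entry whenever a better image arrives. The following min-heap sketch is only one possible scheme, not the specific buffer used by the processing modules:

import heapq

class BestEyeImageBuffer:
    # Keep the k highest-quality eye images observed so far.
    def __init__(self, capacity=8):
        self.capacity = capacity
        self._heap = []       # min-heap of (quality, counter, image)
        self._counter = 0     # tie-breaker so images themselves are never compared

    def offer(self, quality, image):
        entry = (quality, self._counter, image)
        self._counter += 1
        if len(self._heap) < self.capacity:
            heapq.heappush(self._heap, entry)
        elif quality > self._heap[0][0]:
            heapq.heapreplace(self._heap, entry)   # evict the current weakest image

    def images(self):
        # Buffered images, best first.
        return [img for _, _, img in sorted(self._heap, reverse=True)]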
As described further below in fig. 7, routine 700 depicts an exemplary workflow for processing such eye images to determine whether such eye images pass an image quality threshold and whether such images are utilized in the generation of iris codes.
In some cases, the eye image may not pass the image quality threshold; and subsequent eye images may not pass the same image quality threshold. Thus, in some implementations, the processing modules 224, 228 may implement an eye image capture routine using graphics presented to the wearer 204 to obtain an image that passes an image quality threshold. For example, the wearer 204 may be directed to look toward the graphic while taking an image of the wearer's eyes. The figure can be moved in order to obtain eye images at different eye poses. Such a routine may obtain an eye image that may be used to generate an iris code. As described further below, various such eye image acquisition routines may be used to obtain or acquire an eye image used to generate an iris code.
Example eye image set selection routine
FIG. 7 is a flow diagram of an exemplary eye image set selection routine. Routine 700 depicts a workflow example for processing eye images to determine whether they pass an image quality threshold and whether such images are utilized in the generation of iris codes.
At block 704, one or more eye images are obtained. Eye images may be obtained from a variety of sources including, but not limited to: an image capture device, a head-mounted display system, a server, a non-transitory computer-readable medium, or a client computing device (e.g., a smartphone).
Continuing with the routine 700, at block 708, an image quality metric is determined for at least some of the obtained eye images. In various embodiments, an image quality metric may be determined for each eye image according to various image quality metrics described herein with respect to examples of eye image set selection. For example, the resolution (e.g., quality metric) of an eye image may be represented by the resolution of the iris, which is expressed as a distance in pixels.
At block 712, the determined image quality metric for each eye image is compared to a corresponding image quality threshold. For example, if each eye image uses a certain amount of blur as the quality metric, the blur of each eye image may be compared to a blur quality threshold. Alternatively, some eye images may use blur while other eye images may use another image quality metric (e.g., color saturation, number, percentage, or area of unobstructed pixels, etc.). In this case, the comparison may be performed using an image quality threshold for the respective quality metric.
At block 716, a set of eye images is selected whose corresponding image quality metrics meet or pass the image quality threshold. For some types of image quality metrics, a better image has a larger quality metric, and the image passes the threshold when the image quality metric increases above the threshold. For other types of image quality metrics (e.g., metrics that quantify image defects), a better image has a smaller quality metric, and the image passes the threshold when the image quality metric drops below the threshold. The set of eye images, having been determined to pass the relevant image quality thresholds, may be used for various biometric applications. Accordingly, at block 720, one or more iris codes are generated using the selected set of eye images. For example, an iris code may be generated according to the methods described herein (see, e.g., U.S. patent No.5,291,560).
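Putting blocks 704 through 720 together, routine 700 can be viewed as a short pipeline. The helper names below (obtain_eye_images, image_quality, generate_iris_code) are hypothetical placeholders for the camera interface, the quality metrics described above, and the iris-code method cited above; the sketch assumes a metric in which larger values are better:

def eye_image_set_selection(obtain_eye_images, image_quality, generate_iris_code,
                            quality_threshold):
    # Sketch of routine 700: obtain, score, select, then encode.
    images = obtain_eye_images()                                     # block 704
    scored = [(image_quality(img), img) for img in images]           # block 708
    selected = [img for q, img in scored if q >= quality_threshold]  # blocks 712, 716
    return [generate_iris_code(img) for img in selected]             # block 720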
In various embodiments, the routine 700 may be performed by a hardware processor (e.g., the processing modules 224, 228 or the controller 450) of a display system, such as an embodiment of the display system 200. In other embodiments, a remote computing device having computer-executable instructions may cause the head mounted display system to perform aspects of the routine 700. For example, the remote computing device may be caused to determine an image quality metric, or the remote computing device may be caused to select a set of eye images having an image quality metric that passes an image quality threshold.
Eye image acquisition example
The head mounted display system may display graphics or images to the display system wearer 204 to capture or obtain eye images processed by the processing modules 224, 228. For example, a user (such as the wearer 204) of the wearable display system 200 shown in fig. 2 or the display system 400 in fig. 4 and 6 may view graphics or images on the display 208 of the wearable display system 200 or the display system 400. Graphics (such as butterflies or bumblebees, or avatars, that appear realistic or animated) may be displayed in various eye-pose regions of the display 208 until an eye image of sufficient eye image quality is obtained for one or more eye-pose regions of the display 208. For example, the quality of an eye image may be determined and compared to an image quality threshold to determine that the eye image has an image quality that is usable for biometric applications (e.g., generation of iris codes). If an eye image in a certain eye pose region does not pass or meet an image quality threshold, display 208 may be configured to continue displaying one or more graphics in that particular region until an eye image of sufficient eye image quality is obtained. The graphic or graphics displayed in a particular area may be the same or different in different embodiments. For example, the graphics may be displayed at the same or different locations or in the same or different orientations in the particular area.
Graphics may be displayed in various eye-pose regions of display 208 using a story mode or a mode that may direct or attract one or both eyes of the wearer toward different regions of display 208. For example, in one embodiment described below with reference to FIG. 8, a butterfly may be shown moving across various regions of display 208. The examples of graphics displayed in various regions of display 208 may have characteristics (e.g., different depths, colors, or sizes) that attract or guide one or both eyes of the wearer toward one or more eye-pose regions in which the example of graphics is displayed. In some embodiments, the graphics displayed in various regions of the display 208 may appear to have varying depths such that one or both eyes of the wearer are drawn toward the eye-pose region in which the instance of the graphics is displayed.
Fig. 8 schematically illustrates an example scene 800 on the display 208 of the head mounted display system. As shown in fig. 8, the display 208 may display a scene 800 with a moving graphic 805. For example, as shown, the graphic 805 may be a butterfly that is displayed to the user as flying across the scene 800. The graphic 805 may be displayed on or as part of a background image or scene (not shown in fig. 8). In various embodiments, the graphic may be an avatar (e.g., an anthropomorphic representation of a person, animal, or thing, such as the butterfly or bumblebee 140 shown in fig. 1), or any other image or animation that may be configured to be displayed in a particular eye-pose region of the display 208. The graphic 805 may be customized for the user (e.g., based on age, anxiety level, maturity, interest, etc.). For example, to avoid causing anxiety in children, the graphic 805 may be a child-friendly character (e.g., the butterfly or friendly bumblebee 140). As another example, for a user who is a car enthusiast, the graphic 805 may be a car such as a racing car. Thus, as it moves through various regions of display 208, the graphic 805 may be displayed as, and appear to the wearer 204 to be, a video animation using wearable display system 200. The graphic 805 may begin at an initial position 810a and proceed along a path 815 to a final position 810b. For example, as shown, the graphic 805 may move across the display (e.g., along the dashed line) in a clockwise manner into different areas of the display 208. As another example, the graphic 805 may appear to zigzag or move randomly across different areas of the display 208. One possible zigzag pattern may be regions 820r1, 820r2, 820r4, 820r0, 820r3, 820r5, 820r7, and 820r8.
For example only, the display 208 is shown in FIG. 8 as having nine regions 820r0-820r8 of the same size. The number of regions 820r0-820r8 of display 208 may vary in different embodiments. Any number of regions of the display may be used to capture the eye image while graphics proceed from one region to another to direct the eye toward the corresponding region. For example, the number of eye pose areas may be 2, 3, 4, 5, 6, 9, 12, 18, 24, 36, 49, 64, 128, 256, 1000 or more. An eye image may be captured for some or all of the eye pose areas. The shape of regions 820r0-820r8 of display 208 may be different in different embodiments, such as rectangular, square, circular, triangular, oval, diamond. In some embodiments, different areas of display 208 may differ in size. For example, the area closer to the center of the display 208 may be smaller or larger than the area further from the center of the display 208. As another example, the eye-pose area may comprise a half, quadrant, or any segment of the display 208.
The path 815 may move in, across, or around eye-pose regions where high quality eye images are desired, and the path 815 may avoid eye-pose regions where eye images are undesirable (e.g., typically of poor quality) or not needed (e.g., for a particular biometric application). For example, a biometric application (e.g., iris code generation) may tend to use eye images in which the user's eye is looking straight ahead (e.g., through the straight-ahead eye-pose region 820r0). In this case, the graphic 805 may tend to move primarily within the eye pose region 820r0 and not move (or move less frequently) within the eye pose regions 820r1-820r8. The path 815 may be more concentrated at the center of the scene 800 than at the perimeter regions of the scene 800. In other biometric applications (e.g., diagnosis of the retina of an eye), it may be desirable to obtain eye images in which the user's eye is posed away from region 820r0 (e.g., away from the natural resting eye pose) in order to obtain images of the medial or lateral regions of the retina (away from the fovea). In such applications, the graphic 805 may tend to move around the perimeter of the scene 800 (e.g., regions 820r1-820r8) as compared to the center of the scene (e.g., region 820r0). The path 815 may be more concentrated around the perimeter of the scene and tend to avoid the center of the scene (e.g., similar to the path 815 shown in fig. 8).
For example only, eye pose regions 820r0-820r8 of display 208 are depicted as being separated by horizontal and vertical dashed lines in display 208. Such eye pose regions 820r0-820r8 are delineated for ease of description and may represent the regions of the display 208 toward which the wearer's eyes should be directed so that eye images may be obtained. In some embodiments, the horizontal and vertical dashed lines shown in fig. 8 are not visible to the user. In some embodiments, the horizontal or vertical dashed lines shown in fig. 8 are visible to the user to direct one or more eyes of the wearer toward a particular area of the display 208.
The path 815 shown in fig. 8 is exemplary and not intended to be limiting. The path 815 may have a different shape than that shown in fig. 8. For example, the path 815 may intersect, re-intersect, or avoid one or more of the eye pose regions 820r0-820r8, and may be straight, polygonal, curved, or the like. The speed of the moving graphic 805 may be substantially constant or variable. For example, the graphic 805 may slow down or stop in certain eye pose regions (e.g., eye pose regions in which one or more eye images are taken), or the graphic 805 may speed up through or skip other eye pose regions (e.g., eye pose regions in which eye images are not needed or desired). The path 815 may be continuous or discontinuous (e.g., the graphic 805 may skip or pass around certain eye pose regions). For example, referring to fig. 8, if the graphic 805 is at position 810b in the eye-pose region 820r4, and the biometric application requires an eye image in which the user's eye is directed toward the eye-pose region 820r8, the display system may display the graphic 805 such that it continuously moves to region 820r8 (e.g., the butterfly crosses the scene from region 820r4, passes through region 820r0, and flies into region 820r8), or the display system may simply stop displaying the graphic 805 in region 820r4 and then begin displaying the graphic 805 in region 820r8 (e.g., the butterfly will appear to have jumped from region 820r4 to 820r8).
The eye pose region can be considered as a connected subset of a real two-dimensional coordinate space ℝ² or of a positive-integer two-dimensional coordinate space ℤ₊², which specifies the eye pose region in terms of the angular space of the wearer's eye pose. For example, in one embodiment, the eye pose region may lie between a particular θ_min and a particular θ_max in azimuthal deflection and between a particular φ_min and a particular φ_max in zenith deflection. Additionally, the eye pose region may be associated with a particular region assignment. Such region assignments may not appear to the wearer 204 on the display 208, but are shown in fig. 8 for purposes of example. The regions may be assigned in any suitable manner. For example, as depicted in fig. 8, the central region may be assigned region 820r0. In the depicted embodiment, the numbering of the regions may proceed in a generally horizontal order, with the center region being assigned region 820r0 and ending with the lower right region being assigned region 820r8. These regions 820r0-820r8 may be referred to as eye pose regions. In other embodiments, the regions may be numbered or referenced differently than shown in fig. 8. For example, the upper left region may be assigned region 820r0, and the lower right region may be assigned region 820r8.
Scene 800 may be presented by a wearable display system in a VR display mode, where wearer 204 sees graphics 805, but not the outside world. Alternatively, the scene 800 may be presented in an AR or MR display mode, where the wearer 204 sees a visual graphic 805 superimposed on the outside world. When graphic 805 is being displayed in an eye pose area, an eye image may be captured by an image capture device (e.g., inward facing imaging system 452 in fig. 4) coupled to wearable display system 200. As just one example, one or more eye images may be captured in one or more of eye pose regions 820r0-820r8 of display 208. For example, as shown, graphic 805 may start in initial position 810a and move within the upper left eye pose area (e.g., area 820r1) of display 208. As graphic 805 moves in the upper left eye pose area, wearer 204 may direct their eyes toward that area of display 208. When graphic 805 is located in the upper left eye pose area of display 208, the one or more eye images captured by the camera may include eyes at some eye pose when looking in that direction.
Continuing with the example, the graphic 805 may move along path 815 to an upper-middle eye pose region (e.g., region 820r2), where an eye image having an eye pose pointing toward the upper-middle region may be captured. The graphic 805 may move around in the various eye pose regions 820r0-820r8 of the display 208 while eye images are captured intermittently or continuously during the process, until the graphic 805 reaches the final position 810b in region 820r4. One or more eye images may be captured for each region, or eye images may be captured in fewer than all of the regions through which the graphic 805 moves. Thus, the captured eye images may comprise at least one image of the eye in one or more different eye poses. As will be described further below, an eye pose may be represented as two angles.
The graphic 805 may also remain in an eye-pose area of the display 208 until an image of some image quality is obtained or captured. As described herein, various image quality metrics may be used to determine whether an eye image passes an image quality threshold (Q). For example, the image quality threshold may be a threshold corresponding to an image metric level used to generate the iris code. Thus, if an eye image captured while graph 805 is in a certain eye pose region of display 208 passes an image quality threshold, graph 805 may remain in the eye pose region (or return to the eye pose region) until an image is obtained that meets or passes the image quality threshold. An image quality threshold may also be defined for a particular eye pose area of the display. For example, certain biometric applications may require darkening certain areas of the display 208. Thus, the image quality threshold for those regions may be higher than the image quality threshold for regions that have not yet been darkened. During this image capture process, graphic 805 may continue in a story mode or animation for continuing to direct the wearer's eyes toward the area.
The eye image acquisition routine may also be used to correct weak bits (fragile bits) in the iris code. Fragile bits are bits of the iris code that are inconsistent between eye images (e.g., there is a substantial probability that the bit is zero for some eye images of an iris and one for other images of the same iris). More specifically, fragile bits may be bits of the iris code whose values are empirically unreliable measurements. For example, fragile bits may be quantified by using a Bayesian model for the uncertainty in the parameters of a Bernoulli distribution. Fragile bits may also be identified, for example, as those bits representing regions that are typically covered by eyelids or obscured by eyelashes. The eye image acquisition routine may utilize the graphic 805 to actively guide the eye to different eye poses, thereby reducing the impact of fragile bits on the resulting iris code. As just one example, the graphic 805 may direct the eye to eye pose regions that are not obscured by eyelids or eyelashes. Additionally or alternatively, a mask may be applied to the eye image to reduce the effect of fragile bits. For example, a mask may be applied such that regions of the eye identified as producing fragile bits (e.g., upper or lower portions of the iris, where occlusion is more likely) are ignored for iris code generation. As another example, the graphic 805 may return to eye pose regions that are more likely to generate fragile bits, in order to obtain more eye images from those regions and thereby reduce the effect of fragile bits on the resulting iris code.
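The effect of such a mask can be illustrated with a masked Hamming distance, which is one common way occlusion or fragile-bit masks are applied when iris codes are compared; the sketch below is illustrative and is not the specific correction described here. Bits flagged as fragile or occluded are simply excluded from the comparison:

import numpy as np

def masked_hamming_distance(code_a, code_b, valid_mask):
    # Fraction of disagreeing bits, counting only bits marked as valid
    # (i.e., not fragile and not occluded).
    valid = valid_mask.astype(bool)
    n_valid = np.count_nonzero(valid)
    if n_valid == 0:
        raise ValueError("no valid bits to compare")
    disagreements = np.logical_xor(code_a, code_b) & valid
    return np.count_nonzero(disagreements) / n_valid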
The graphic 805 may also remain in (or return to) an eye-pose area of the display 208 until multiple images are captured or obtained for a particular eye-pose area. That is, instead of comparing the image quality metric of each eye image to an image quality threshold "on the fly," a number of eye images may be obtained from each eye pose region in real time. Each eye image obtained for the eye pose region may then be processed to obtain an image quality metric, which in turn is compared to a corresponding image quality threshold. It can be seen that eye image acquisition for the different eye pose regions can be performed in parallel or sequentially, depending on the needs or requirements of the application.
During this eye image acquisition routine, graphics may be displayed in one or more eye pose regions of display 208 in various modes. For example, the graphic may be displayed in a random mode, a flying mode, a blinking mode, a waving mode, or a story mode in a particular eye-pose area (or across two or more eye-pose areas) of the display. The story mode may contain various animations in which the graphic may participate. As just one example of a story mode, a butterfly may emerge from a cocoon and fly around a particular area of display 208. As the butterfly flies, flowers may appear, from which the butterfly may sip nectar. As can be seen, the story of the butterfly may be displayed in a particular area of display 208 or across two or more areas of display 208.
In the waving mode, the wings of the butterfly may appear to wave as the butterfly flies around in a particular area of the display 208. In the random mode, the exact location of the graphic 805 within a particular region may be random. For example, the graphic 805 may simply appear at different locations in the upper left area. As another example, the graphic 805 may move in a partially random manner within the upper left eye pose area, starting from the initial position 810a. In the blinking mode, the butterfly or a portion of the butterfly may appear to blink within a particular region or across two or more regions of the display 208. Various modes are possible in the various eye-pose regions of the display 208. For example, graphic 805 may appear in the upper left area, at initial position 810a, in story mode; and the graphic 805 may appear in the left middle region, at the final position 810b, using a blinking mode.
Graphics may also be displayed in various modes throughout the eye pose area 820r0-820r8 of the display 208. For example, the graphics may appear in a random or sequential manner (referred to as a random pattern or sequential pattern, respectively). As described herein, the graphic 805 may be moved through various regions of the display 208 in a sequential manner. Continuing with the example, graphic 805 may move along path 815 between eye-pose regions of display 208 using an intervening animation. As another example, the graphic 805 may appear in a different area of the display 208 without intervening animation. As another example, a first graphic (e.g., a butterfly) may appear in a first eye pose area, while another graphic (e.g., a bumblebee) may appear in a second eye pose area.
In one embodiment, different graphics may appear continuously (in series) from one region to the next. Alternatively, in another embodiment, various graphics may be used in the story mode, as different graphics appear in different eye-pose areas to tell the story. For example, a cocoon may appear in one eye pose area and then a butterfly appears in another eye pose area. In various embodiments, the different patterns may also be randomly distributed throughout the eye pose area because the eye image acquisition process may direct the eye from one eye pose area to another, with a different pattern appearing in each eye pose area.
Eye images may also be obtained in a random manner. Thus, the graphic 805 may also be displayed in a random manner in the various eye-pose regions of the display 208. For example, graphic 805 may appear in the upper-middle region, and once an eye image is obtained for that region, graphic 805 may thereafter appear in the lower-right eye-pose region (e.g., assignment region 820r8) of display 208 in fig. 8. As another example, the graphic 805 may be displayed in a seemingly random manner, with the graphic 805 being displayed at least once on each eye-pose area, without repetition on a single area, until the graphic 805 has been displayed in other areas. This pseudo-random display mode may occur until a sufficient number of eye images are obtained for an image quality threshold or some other application. Thus, various eye poses of one or both eyes of the wearer can be obtained in a random manner rather than a sequential manner.
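The "seemingly random" display mode described above, in which the graphic visits every region once before any region repeats, behaves like drawing regions from a reshuffled deck. A minimal sketch, assuming nine regions numbered 0 through 8, is:

import random

def pseudo_random_region_order(n_regions=9, rounds=2, seed=None):
    # Yield region indices so that every region appears exactly once per round,
    # in a freshly shuffled order each round.
    rng = random.Random(seed)
    for _ in range(rounds):
        order = list(range(n_regions))
        rng.shuffle(order)
        yield from order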
In some cases, if an eye image for a certain eye pose area cannot be obtained after a threshold number of attempts (e.g., three eye images captured for the eye pose area fail an image quality threshold), the eye image acquisition routine may skip or pause acquisition on the eye pose area for a period of time while obtaining eye images from one or more other eye pose areas first. In one embodiment, the eye image acquisition routine may not obtain an eye image for a certain eye pose area if the eye image cannot be obtained after a threshold number of attempts.
The eye pose may be described with respect to a natural resting pose (e.g., the user's face and gaze are both oriented as they would be toward a distant object directly in front of the user). The natural resting pose of the eye may be indicated by a natural resting direction, which is the direction orthogonal to the surface of the eye when in the natural resting pose (e.g., directly out of the plane of the eye). As the eyes move to look toward different objects, the eye pose changes relative to the natural resting direction. Thus, the current eye pose may be measured with reference to an eye pose direction, which is the direction normal to the surface of the eye (and centered on the pupil) but oriented toward the object at which the eye is currently directed.
Referring to an exemplary coordinate system, the pose of an eye may be represented as two angular parameters indicating the azimuthal deflection and the zenith deflection of the eye's pose direction, both relative to the natural resting direction of the eye. These angular parameters may be expressed as θ (azimuthal deflection, measured from a reference azimuth) and φ (zenith deflection, sometimes also referred to as polar deflection). In some embodiments, angular roll of the eye about the eye pose direction may be included in the measurement of the eye pose, and the angular roll may be included in the following analysis. In other embodiments, other techniques for measuring eye pose may be used, for example a pitch, yaw, and optionally roll system. Using such an eye pose representation, an eye pose expressed as azimuthal and zenith deflections may be associated with a particular eye pose region. Thus, an eye pose may be determined from each eye image obtained during the eye image acquisition process. Such associations between eye images, eye poses, and eye pose regions may be stored in the data modules 224, 232, or made accessible to the processing modules 224, 228 (e.g., accessible via cloud storage).
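The association between an eye pose, expressed as the two angles (θ, φ), and a numbered eye pose region can be sketched as a lookup over a grid of angular bins. The bin edges and the row-major numbering below are illustrative assumptions only, not values or the numbering used by the system:

import numpy as np

def eye_pose_region(theta_deg, phi_deg,
                    theta_edges=(-30.0, -10.0, 10.0, 30.0),
                    phi_edges=(-20.0, -7.0, 7.0, 20.0)):
    # Map (theta, phi) in degrees to one of nine regions, numbered 0..8 row by row.
    # Poses outside the outermost edges are clamped to the nearest border region.
    col = int(np.clip(np.searchsorted(theta_edges, theta_deg) - 1, 0, 2))
    row = int(np.clip(np.searchsorted(phi_edges, phi_deg) - 1, 0, 2))
    return row * 3 + col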
Eye images may also be selectively obtained. For example, certain eye images of a particular wearer may already be stored in, or accessible by, the processing modules 224, 228. As another example, certain eye images of a particular wearer may already be associated with certain eye pose regions. In such cases, the graphic 805 may appear only in the eye-pose region or regions that do not yet have an eye image associated with them. Illustratively, eye images may already have been obtained for eye pose regions Nos. 1, 3, 6, and 8, while eye images have not yet been obtained for eye pose regions Nos. 2, 4, 5, and 7. Thus, the graphic 805 may appear in eye pose regions Nos. 2, 4, 5, and 7 until an eye image that passes the image quality metric threshold is obtained for each of those respective eye pose regions.
As described further below in fig. 9, routine 900 depicts an exemplary workflow for acquiring eye images to determine whether they pass an image quality threshold and whether to utilize the images in the generation of iris codes. Thus, in some implementations, the processing modules 224, 228 may implement an eye image acquisition routine using graphics to obtain an image that passes an image quality threshold. For example, the processing modules 224, 228 may be caused to perform aspects of an eye image set acquisition routine. Additionally or alternatively, the controller 450 may be programmed to cause aspects of an eye image set acquisition routine to be performed. In some scenarios, having obtained an eye image that passes an image quality threshold for each eye pose region using the various eye image acquisition routines described herein, the eye images (or representations of the eye images) may be combined or merged using various techniques for generating or extracting iris codes. As described further below, various such eye image combination routines may be used to generate iris codes.
Example of an eye image acquisition routine
Fig. 9 is a flow diagram of an exemplary eye image acquisition routine. Routine 900 depicts an exemplary workflow for acquiring an eye image using a graph and one or more image quality thresholds. At block 904, the eye pose area is associated with a display area of a display (such as a display of a head mounted display). For example, the first eye pose region may be an upper left region of the display and the second eye pose region may be a lower right region of the display.
At block 908, a graphic is displayed in the first eye pose area. For example, as described herein, the graphic may be a butterfly animation. At block 912, a first eye image is obtained that is associated with the first eye pose region. For example, when the graphic is displayed in the upper left display region, the image capture device may capture a first eye image associated with the upper left eye pose region. The image capture device may be the inward facing imaging system 452 shown in fig. 4.
At block 916, it is determined that an image metric for the first eye image passes a first image quality threshold. For example, the blur metric of the first eye image may pass a blur quality threshold. This may indicate that the quality of the first eye image obtained in the first eye pose region is sufficient to be utilized in biometric applications. As described herein, various image quality metrics are possible. In another embodiment, a color saturation metric for the first eye image may be used to determine whether a color saturation threshold for the first eye image is passed. The colors may include colors in the visual sense (e.g., red, green, blue, etc.). For infrared images, the colors may include various spectral bands in the infrared (e.g., 700nm-800nm, 800nm-900nm, etc.). In some cases, the contrast of the eye image may increase in the near infrared (from about 700nm to about 1000nm), and the image acquisition routine 900 may obtain a near infrared image at block 912.
Alternatively or additionally, at block 916, if it is determined that the image metric of the first eye image does not pass the first image quality threshold, additional images may be obtained in the same first eye pose region. For example, the blur metric of the first eye image may fail the blur quality threshold. Failing such an image quality threshold may indicate that the obtained eye image is not of sufficient quality to be utilized in biometric applications. As discussed herein, passing a threshold may mean passing above or passing below, as appropriate in context, depending on whether the image quality metric increases or decreases to indicate increased image quality. Thus, the graphic may continue to be displayed in the particular eye-pose area of the display. The graphic may continue to be animated or displayed in the upper left portion of the display so that additional eye images may be obtained. An image metric may be determined for each additional eye image and compared to the image quality threshold. When an eye image having an image metric that passes the blur quality threshold is obtained, that eye image may be treated as the first eye image (for the corresponding first eye pose region of the display, in this example) that has been determined to pass the first image quality threshold.
At block 920, a graphic is displayed in the second eye pose area. For example, as described herein, the graphic may be a butterfly animation that travels along a path from the upper left display area to the lower right display area. At block 924, a second eye image is obtained that is associated with the second eye pose region. For example, when the graphic is displayed in the lower right display region, the image capture device may capture a second eye image that is then associated with the corresponding eye pose region that was previously associated with the lower right display region (e.g., second eye pose region).
At block 928, it is determined that the image metric of the second eye image passes the second image quality threshold. For example, the blur metric for the second eye image may pass a blur quality threshold (e.g., the blur quality threshold for the second image quality metric may be the same blur quality threshold as for the first image quality metric). This may indicate that the quality of the second eye image obtained in the second eye pose region is sufficient to be utilized in biometric applications.
At block 932, an iris code is determined for the human eye based on the first eye image and the second eye image. For example, iris images from the first and second eye images may be used to generate an iris code in accordance with various techniques described herein. At block 940, the determined iris code may be used for a biometric application or for image display by the head mounted display. For example, in one embodiment, the determined iris code may be used to determine an eye pose of the eye associated with the first and second eye images.
In various embodiments, routine 900 may be performed by a hardware processor of a head-mounted display system (such as an embodiment of display system 200). In other embodiments, a remote computing device having computer-executable instructions may cause the head-mounted display system to perform aspects of routine 900. For example, the remote computing device may cause the head-mounted display system to display a graphic in the first eye pose region, or may cause it to use the determined iris code for a biometric application.
Eye image combination example
As described above, an eye of a wearer of a Head Mounted Display (HMD) (e.g., wearable display system 200 shown in fig. 2 or display system 400 in figs. 4 and 6) may be imaged using an image capture device such as a camera or inward facing imaging system 452 (see, e.g., fig. 4). Eye image combining techniques may be used to combine or merge certain eye images obtained from an image capture device into one or more blended eye images. The blended eye image may be used for various biometric applications. For example, in some implementations, the blended eye image can be used to determine an eye pose (e.g., a direction of one or both eyes of the wearer) or to generate an iris code.
The local processing and data module 224 and/or the remote data repository 232 may store image files, audio files, or video files, as described herein. For example, in various embodiments, the data module 224 and/or the remote data repository 232 may store a plurality of eye images to be processed by the local processing and data module 224. The local processing and data module 224 and/or the remote processing module 228 may be programmed to use the eye image combining techniques disclosed herein in biometric extraction or generation, for example, to identify or authenticate the identity of the wearer 204. Alternatively or additionally, the processing modules 224, 228 may be programmed to use the eye image combination techniques disclosed herein in eye pose estimation, for example, to determine the direction each eye is looking in.
The image capture device may capture video for a particular application (e.g., video of a wearer's eyes for an eye tracking application, or video of a hand or finger of a wearer for a gesture recognition application). The video may be analyzed by one or both of the processing modules 224, 228 using an eye image set selection technique. From this analysis, the processing modules 224, 228 may perform eye image combination techniques and/or biometric extraction or generation, etc. For example, the local processing and data module 224 and/or the remote processing module 228 may be programmed to store eye images obtained from one or more image capture devices attached to the frame 212. Further, the local processing and data module 224 and/or the remote processing module 228 may be programmed to process the eye images to combine the eye images of the wearer 204 of the wearable display system 200 using the techniques described herein (e.g., routine 1000). For example, the processing modules 224, 228 may be caused to perform aspects of the eye image combining technique. Additionally or alternatively, the controller 450 may be programmed to cause aspects of the eye image combining technique to be performed.
In some cases, offloading at least some of the eye image combination techniques to a remote processing module (e.g., in the "cloud") may improve the efficiency or speed of the computation. Such eye image combination techniques may facilitate removal of focus errors in the eye image, illumination effects present in the eye image, or any other image distortions present in the eye image. For example, to facilitate removal of such distortions, the eye image combination techniques disclosed herein may be advantageously used to estimate the portion of the iris that is occluded by the eyelid. The combining technique may combine the plurality of eye images into a single eye image representing a portion of each of the eye images.
In general, the eye images may be combined using image fusion techniques or iris code merging techniques. For example, the image fusion technique may combine multiple images through various image fusion methods (e.g., super resolution) to produce a single blended image from which an iris code is extracted or generated (e.g., routine 1000, described with reference to fig. 10, which depicts an exemplary workflow of the image fusion technique). As another example, the iris code merging technique may generate iris codes separately, one for each eye image, and then combine the computed iris codes into a single iris code. In various embodiments, an eye image set selection technique (as described herein) may be used to select the eye images to be combined, or whose corresponding iris codes are to be merged, in accordance with the techniques described herein.
Example of image fusion technique
In an exemplary embodiment of the image fusion technique, eye images may be obtained in the various ways described herein (such as by the inward facing imaging system 452). An eye image may include the iris of a human eye. An eye pose may be estimated or determined for the eye image. For example, as described herein, the eye pose may be represented as two angular parameters that indicate an azimuthal deflection and a zenith deflection of the eye pose direction of the eye, both measured relative to the natural resting position of the eye. Such an eye pose may serve as a digital representation of the eye image; that is, the determined eye pose represents the obtained image.
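As an illustration of this two-angle representation (not a verbatim implementation from the patent), the azimuthal and zenith deflections could be carried as a small value type; the field names and the choice of radians are assumptions.

```python
# Sketch: an eye pose as azimuthal and zenith deflections (in radians) measured
# relative to the eye's natural resting direction. Names and units are illustrative.
from dataclasses import dataclass

@dataclass(frozen=True)
class EyePose:
    azimuth: float  # deflection about the vertical axis; 0.0 at natural rest
    zenith: float   # deflection about the horizontal axis; 0.0 at natural rest
```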
The determined eye pose may be assigned to an eye pose region associated with a particular region assignment. For example, the eye image may be associated with a particular eye pose region of display 208. As described herein with respect to fig. 8, each eye pose region 820r0-820r8 of display 208 may be associated with an assigned region number corresponding to the eye pose region in which the eye image was obtained. In general, any display classification (including region assignments associated with regions of the display) may be used to associate the obtained eye image with that classification. In one embodiment, the determined eye poses may be assigned to pose quadrants (e.g., regions of a display having four eye pose regions). For example, eye images may be obtained for four eye pose regions (or, in this example, eye pose quadrants) of the display 208. Each eye image may be associated with one of the pose quadrants, and thus the determined eye pose is associated with one of the pose quadrants. In other embodiments, fewer (e.g., 2) or more (e.g., 4, 5, 6, 9, 12, 18, 24, 36, 49, 64, 128, 256, 1000 or more) eye pose regions may be used to obtain the eye image. As described herein, in some embodiments, multiple eye images may be obtained for each bin, and only a subset of the images having an image quality metric that passes an image quality threshold is retained for subsequent iris code generation.
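A hedged sketch of one way such a region assignment could be computed from the two pose angles follows; the 3x3 layout (mirroring regions r0-r8), the angular extents, and the function name are assumptions for illustration, not values taken from the patent.

```python
# Sketch: bin the azimuth and zenith deflections (radians, relative to natural
# rest) into a rows x cols grid of eye pose regions and return a region number.
import math

def assign_pose_region(azimuth: float, zenith: float, cols: int = 3, rows: int = 3,
                       max_azimuth: float = math.radians(30),
                       max_zenith: float = math.radians(20)) -> int:
    col = int((azimuth + max_azimuth) / (2 * max_azimuth) * cols)
    row = int((zenith + max_zenith) / (2 * max_zenith) * rows)
    col = min(cols - 1, max(0, col))   # clamp poses outside the assumed extents
    row = min(rows - 1, max(0, row))
    return row * cols + col            # e.g. 0..8 for a 3x3 grid of regions
```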
Continuing with an exemplary embodiment of the image fusion technique, the eye images or the determined eye pose representations may be combined into a hybrid eye image or a hybrid eye pose representation, respectively. For example, each image associated with a particular pose region may be used to contribute to the overall fused image. The fused image may be a weighted sum of the individual images. For example, pixels in an image may be assigned weights based on a quality factor Q as described herein, where the greater Q, the greater the weight. Other image fusion techniques may be used including, for example, super-resolution, wavelet transform image fusion, Principal Component Analysis (PCA) image fusion, high-pass filtering, high-pass modulation, pairwise spatial frequency matching image fusion, spatial or transform domain image fusion, and the like. Image fusion techniques may include intensity-hue-saturation (IHS), ratio transformation (Brovey transform), atrous algorithm based wavelet transformation, direct fusion of gray-level pixels of polar iris texture, and multi-resolution analysis based intensity modulation (MRAIM) techniques.
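The quality-weighted sum described above might look like the following sketch; it assumes the eye images are already registered (aligned) and of identical shape, and the per-image quality factor Q is any scalar where larger means better.

```python
# Sketch: quality-weighted fusion of registered eye images into one blended image.
import numpy as np

def fuse_eye_images(images: list[np.ndarray], qualities: list[float]) -> np.ndarray:
    weights = np.asarray(qualities, dtype=np.float64)
    weights = weights / weights.sum()              # larger Q -> larger weight; weights sum to 1
    stack = np.stack([img.astype(np.float64) for img in images], axis=0)
    fused = np.tensordot(weights, stack, axes=1)   # weighted sum over the image axis
    return fused.astype(images[0].dtype)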
With the fused images, iris codes may be generated as described herein with respect to an example of an eye image set selection routine. The generated iris code may represent an overall iris code of the obtained eye image.
Iris code fusion technique example
In an exemplary embodiment of the iris code combining technique, eye images may be obtained in the various manners described herein (such as by the inward facing imaging system 452). An eye pose may be estimated or determined for each eye image. For example, as described herein, the eye pose may be represented as two angular parameters that indicate an azimuthal deflection and a zenith deflection of the eye pose direction of the eye, both measured relative to the natural resting position of the eye. Such an eye pose may serve as a digital representation of the eye image; that is, the determined eye pose represents the obtained image.
The determined eye pose may be assigned to an eye pose region associated with a particular region assignment. For example, the eye image may be associated with a particular eye pose region of display 208. As described herein with respect to fig. 8, each eye pose region of display 208 may be associated with a region assignment corresponding to the eye pose region in which the eye image was obtained. In one embodiment, the determined eye poses may be assigned to pose quadrants (e.g., regions of a display having four eye pose regions). For example, eye images may be obtained for four eye pose regions (or, in this example, eye pose quadrants) of the display 208. Each eye image may be associated with one of the pose quadrants, and thus the determined eye pose is associated with one of the pose quadrants. In other embodiments, fewer (e.g., 2) or more (e.g., 4, 5, 6, 9, 12, 18, 24, 36, 49, 64, 128, 256, 1000 or more) eye pose regions may be used to obtain the eye image. As described herein, in some embodiments, multiple eye images may be obtained for each bin, and only a subset of the images having an image quality metric that passes an image quality threshold is retained for subsequent iris code generation.
Continuing with an exemplary embodiment of the iris code merging technique, an iris code may be generated for each obtained eye image. For example, as described herein with respect to an example of an eye image set selection routine, an iris code may be determined for an eye image. Each iris code may also be associated with the pose region of the eye image with which it has been determined to be associated. The iris codes for the eye images may be combined or fused using a median filter, a Bayesian filter, or any filter configured to merge iris codes into a hybrid iris code (also referred to as a merged iris code). For example, a merged iris code may be generated by identifying weak bits (e.g., bits that frequently change between 0 and 1) in the codes and merging or masking such weak bits. In another example, the merged iris code is a weighted sum of the individual iris codes, where the weights are based on the quality of each individual iris code. In this way, each iris code associated with a particular eye pose region contributes to the overall hybrid iris code. The hybrid iris code may represent an overall iris code for the obtained eye images.
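A hedged sketch of the weak-bit masking and merging idea, assuming iris codes are binary numpy arrays of equal length; the instability measure and the cutoff fraction are illustrative choices, not the patent's prescribed method.

```python
# Sketch: merge per-image iris codes into one hybrid code. Bit positions that
# frequently flip across images ("weak" or fragile bits) are masked out, and the
# remaining bits are set by per-bit majority vote. Codes are 0/1 numpy arrays.
import numpy as np

def merge_iris_codes(codes: list[np.ndarray], weak_bit_fraction: float = 0.25):
    stack = np.stack(codes, axis=0).astype(np.float64)    # shape: (num_codes, num_bits)
    mean_bit = stack.mean(axis=0)
    # A bit is "weak" if its value is frequently inconsistent across the codes.
    instability = np.minimum(mean_bit, 1.0 - mean_bit)    # 0 = stable, 0.5 = maximally unstable
    mask = instability <= weak_bit_fraction                # keep only sufficiently stable bits
    merged = (mean_bit >= 0.5).astype(np.uint8)            # majority vote per bit
    return merged, mask                                    # mask marks bits to use in matching
```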
Confidence score example
In some implementations, a confidence score can be associated with the determined iris code. That is, a confidence score may be assigned to an iris code determined using an image fusion technique or an iris code merging technique. The confidence score may be assigned to the resulting iris code based on the diversity of the sampled regions. The diversity of sampled regions may be, for example, the number of distinct pose regions represented by the eye images (or eye image representations with different region assignments) used to generate the combined iris code. The confidence score may be determined based on any function of the sampled pose regions. As just one example, a zero score may be assigned if none of the possible regions have been sampled. As another example, a score of n/N may be assigned to a measurement that samples only n of the N possible regions. In another embodiment, analysis of the eye image itself (e.g., determining eye pose) may generate a probability or confidence score associated with certain cells in the eye image. In this case, an overall probability or an overall confidence may also be generated based on the individual cell-specific probabilities of the eye image. For example, the individual cell-specific probabilities may be multiplied to generate an overall probability. In another embodiment, an image quality threshold may be used to determine the confidence score.
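Two of the confidence-score notions described above are sketched below as plain formulas; both are illustrative readings of the text rather than a prescribed method, and the function names are hypothetical.

```python
# Sketch: region-coverage confidence (n sampled regions out of N possible) and an
# overall confidence formed by multiplying per-cell probabilities.
def region_coverage_score(sampled_regions: set[int], total_regions: int) -> float:
    return len(sampled_regions) / total_regions if total_regions else 0.0

def overall_cell_confidence(cell_probabilities: list[float]) -> float:
    score = 1.0
    for p in cell_probabilities:
        score *= p          # e.g. per-cell pose-estimation probabilities
    return score
```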
The confidence scores may be used for various biometric applications. For example, a biometric application may use a biometric security confidence threshold that is quantitatively related to the confidence score. For example, the biometric security confidence threshold may be related to an access level of an application associated with the biometric data. The access level may correspond to an image quality level required to access an account through that application. As described herein, an image quality level may be determined based on an image quality metric of an obtained eye image. Thus, the image quality level of the obtained eye image may be implicitly correlated with the biometric security confidence threshold. Any application associated with biometric data executing on the processing modules 224, 228 of the head-mounted display may be caused to terminate execution if the confidence score fails the biometric security confidence threshold. In this case, additional images may be acquired using an image acquisition routine (e.g., routine 900), as described herein, to obtain additional images corresponding to a certain image quality level or threshold.
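A minimal sketch of gating a biometric application on the confidence score; the hooks passed in stand for terminating the requesting application and re-running an acquisition routine such as routine 900, and their names, along with the threshold, are assumptions for illustration.

```python
# Sketch: terminate the application and re-acquire images when the confidence
# score fails the biometric security confidence threshold; otherwise proceed.
def enforce_biometric_security(confidence_score: float,
                               security_confidence_threshold: float,
                               terminate_app, reacquire_images) -> bool:
    if confidence_score < security_confidence_threshold:
        terminate_app()        # caller-supplied hook: stop the requesting application
        reacquire_images()     # caller-supplied hook: e.g. re-run an acquisition routine
        return False
    return True
```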
If the confidence score does pass the biometric security confidence threshold, an approval indication may be caused to be displayed on a display of the head-mounted display (e.g., display 208). For example, the approval indication may be any one of a security bar on a display of the HMD, a sound emitted from an audio unit of the HMD, a textual indication on a display of the HMD, or a combination of any of the foregoing. In one embodiment, a request for access to an application for a financial transaction may be sent from the processing modules 224, 228 after the approval indication has been displayed on the head-mounted display.
While the foregoing examples have been described with respect to an approval indication displayed on display 208, this is for illustration only and is not intended to be limiting. In other embodiments, any approval indication may be based on a confidence score. For example, the confidence score may be used to generate an approval indication when the biometric data is processed in cloud-based computing associated with the processing modules 224, 228. For example, the approval indication may be a representation of approval of a financial transaction application.
Eye image combination routine example
Fig. 10 is a flow diagram of an exemplary eye image combination routine 1000. Routine 1000 depicts an exemplary workflow for combining eye images using an image fusion technique and/or an iris code merging technique (e.g., the techniques described herein with respect to the eye image combination examples).
At block 1004, an eye image is obtained. The eye images may be obtained from a variety of sources including, but not limited to, an image capture device, a head-mounted display system, a server, a non-transitory computer-readable medium, or a client computing device (e.g., a smartphone).
Continuing with routine 1000, at block 1008, an eye pose is identified for each obtained eye image. For example, if an obtained eye image was captured in a particular eye pose region of the display, the eye pose may be associated with a particular display classification (e.g., the eye pose region assignments disclosed with reference to fig. 8). As just one example, a region number may be assigned to the eye pose corresponding to the eye pose region in which the image was obtained. As part of this identification at block 1008, an alternative representation of each identified eye pose may also be determined. For example, as described herein, the eye pose for an eye image may be represented as two angular parameters (e.g., azimuthal deflection and zenith deflection of the eye pose direction of the eye).
Depending on whether an image fusion technique or an iris code merging technique is used, the routine 1000 proceeds to block 1012 or block 1016, respectively, each beginning a branch of the routine 1000. One or both techniques may be performed. In various embodiments, the two techniques may be performed in parallel or sequentially. As will be described further below, the resulting iris codes may be used in a variety of biometric applications together with the confidence scores generated by each respective technique. In embodiments where the routine 1000 proceeds along both branches, the respective confidence scores may be used to compare the accuracy of the two techniques. In one embodiment, the iris code generated by the technique with the higher confidence score may be used for further utilization in one or more biometric applications. The iris code with the lower confidence score may be discarded or not used. In another embodiment, both iris codes may be used in biometric applications. For example, filters or other techniques may be used to average the respective iris codes generated by each technique to generate an average of the iris codes.
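One way such a confidence-based selection between the two branches could be expressed is sketched below; the pair structure of each branch result and the tie-breaking rule are assumptions for illustration.

```python
# Sketch: pick the iris code from whichever branch produced the higher confidence
# score; the lower-confidence code is discarded, as described in the text.
def select_iris_code(image_fusion_result, iris_merge_result):
    code_a, conf_a = image_fusion_result   # (iris_code, confidence) from the image fusion branch
    code_b, conf_b = iris_merge_result     # (iris_code, confidence) from the iris code merging branch
    return code_a if conf_a >= conf_b else code_b
```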
Continuing the routine 1000 along the left branch, at block 1012, the obtained eye images or the alternative representations of the identified eye poses may be fused into a blended image or a blended representation, respectively. For example, the eye images may be combined by various image fusion methods (e.g., super resolution, spatial domain fusion, or transform domain fusion) to produce a single blended image. As another example, the alternative representations may be fused into a blended representation, with the representation associated with each eye pose region contributing to the blended result. At block 1020, an iris code may be generated for the human eye based on the blended eye image. For example, the iris portion of the blended image may be used to generate an iris code according to the various techniques described herein.
Continuing the routine 1000 along the right branch from block 1008, at block 1016, an iris code may be generated for each obtained eye image or alternative representation of the identified eye pose. At block 1024, a hybrid iris code is generated for the human eye based on the iris codes from several eye images (e.g., eye images of several eye pose regions). Each iris code associated with a particular eye pose region may contribute to the overall hybrid iris code. The iris codes of the eye images may be fused using a median filter, a Bayesian filter, or any filter configured to merge iris codes into a hybrid iris code.
After routine 1000 proceeds down the left branch, the right branch, or both branches (in parallel or sequentially), confidence scores are generated for the iris code or the hybrid iris code at block 1028. If both branches are performed, confidence scores may be generated for both the iris code generated at block 1020 and the hybrid iris code generated at block 1024. The confidence score may be determined based on any function of the sampled eye pose regions (e.g., the diversity of assigned regions). As just one example, a zero score may be assigned if none of the possible regions have been sampled.
At block 1032, the determined iris code may be used for a biometric application or for image display by a head mounted display. For example, in one embodiment, the generated iris code may be used to determine an eye pose of the eye associated with the eye images obtained at block 1004.
In various embodiments, routine 1000 may be performed by a hardware processor of a head mounted display system (such as an embodiment of display system 200). In other embodiments, a remote computing device having computer-executable instructions may cause the head mounted display system to perform routine 1000. For example, the remote computing device may be caused to fuse the eye images into a hybrid eye image, or the remote computing device may be caused to merge iris codes of the eye images into a merged iris code.
Other aspects
Any element disclosed herein with respect to any aspect of eye image combination or eye image set selection may be used in combination with, or in place of, any other element disclosed herein with respect to any aspect of eye image combination or eye image set selection.
Other aspects relating to eye image acquisition
In an aspect, a wearable display system is disclosed. The wearable display system includes: an image capture device configured to capture an eye image from a wearer of the wearable display system; a non-transitory memory configured to store an eye image; a display comprising a plurality of eye pose areas; and a processor in communication with the non-transitory memory and the display, the processor programmed to: causing an avatar to be displayed in a first eye pose region of a plurality of eye pose regions on a display; obtaining a first eye image of an eye from an image capture device, wherein the first eye image is associated with a first eye pose region; determining that a first image quality metric for the first eye image passes a first image quality threshold; causing the avatar to be displayed in a second eye pose region of the plurality of eye regions on the display; obtaining a second eye image of the eye from the image capture device, wherein the second eye image is associated with a second eye pose region; determining that a second image quality metric for a second eye image passes a second image quality threshold; generating an iris code of the eye based at least in part on the first eye image and the second eye image; and using, by the wearable display system, the generated iris code for subsequent display of an image on a display or for biometric applications.
In aspect 2, the wearable display system of aspect 1, wherein the processor is further programmed to: display modes of an avatar in a plurality of eye regions on a display are received.
In aspect 3, the wearable display system of any of aspects 1-2, wherein the processor is further programmed to: a display mode of an avatar in a first eye region on a display is received.
In aspect 4, the wearable display system of any of aspects 2-3, wherein the display mode comprises at least one of a random mode, a sequential mode, a flight mode, a blinking mode, a waving mode, a story mode, or a combination thereof.
In aspect 5, the wearable display system of any of aspects 1-4, wherein the first image quality threshold corresponds to an image quality level of the first eye pose region, and wherein the second image quality threshold corresponds to an image quality level of the second eye pose region.
In aspect 6, the wearable display system of any of aspects 1-5, wherein the first image quality threshold corresponds to an image quality level of the first eye image, and wherein the second image quality threshold corresponds to an image quality level of the second eye image.
In aspect 7, the wearable display system of any of aspects 1-6, wherein the processor is further programmed to: causing the avatar to be displayed in a third eye pose region of the plurality of eye regions on the display; obtaining a third eye image of the eye from the image capture device, wherein the third eye image is associated with a third eye pose region; determining that a third image quality metric for the third eye image fails a third image quality threshold.
In an 8th aspect, the wearable display system of aspect 7, wherein the processor is further programmed to: obtaining a fourth eye image of the eye from the image capture device, wherein the fourth eye image is associated with the third eye pose region; and determining that a fourth image quality metric for the fourth eye image passes the third image quality threshold.
In an aspect 9, the wearable display system of aspect 8, wherein to determine the iris code of the eye, the processor is programmed to determine the iris code based at least in part on the first eye image, the second eye image, and the fourth eye image.
In an aspect 10, the wearable display system of aspect 8, wherein the processor is further programmed to: combining the third eye image and the fourth eye image to obtain a blended eye image of the third eye pose region, and wherein, to determine the iris code of the eye, the processor is programmed to determine the iris code based at least in part on the first eye image, the second eye image, and the blended eye image.
In an 11 th aspect, a head mounted display system is disclosed. The head-mounted display system includes: an image capture device configured to capture an eye image; a display comprising a plurality of eye pose areas; a processor in communication with the image capture device and the display, the processor programmed to: causing a first graphic to be displayed in a first eye-pose region of a plurality of eye-pose regions on a display; obtaining a first eye image in a first eye pose region from an image capture device; determining that a first metric for a first eye image passes a first threshold; causing a second graphic to be displayed in a second eye-pose region of the plurality of eye-pose regions on the display; obtaining a second eye image in a second eye pose region from the image capture device; and determining that a second metric for the second eye image passes a second threshold.
In aspect 12, the head mounted display system of aspect 11, wherein the processor is further programmed to: determining an iris code based at least in part on the first eye image and the second eye image; and using the determined iris code by the head-mounted display system for displaying an image on a display or for displaying a biometric application.
In an aspect 13, the head mounted display system according to any of aspects 11-12, wherein the first eye image corresponds to a first eye pose of a user of the head mounted display system.
In aspect 14, the head mounted display system of any of aspects 11-13, wherein the display is configured to present the first graphic or the second graphic to a user of the head mounted display system on a plurality of depth planes.
In an 15 th aspect, the head mounted display system according to any of the 11 th to 14 th aspects, wherein the display comprises a plurality of stacked waveguides.
In aspect 16, the head mounted display system of any of aspects 11-15, wherein the display is configured to present a light field image of the first graphic or the second graphic to a user of the head mounted display system.
In an 17 th aspect, the head mounted display system of any of aspects 11-16, wherein the first graphic directs the eyes toward a first one of the plurality of eye pose areas on the display when the first graphic is displayed in the first one of the plurality of eye pose areas on the display, and wherein the second graphic directs the eyes toward a second one of the plurality of eye pose areas on the display when the second graphic is displayed in the second one of the plurality of eye pose areas on the display.
In aspect 18, the head-mounted display system according to any one of aspects 11 to 17, wherein when the second graphic is displayed in a second eye-pose region of the plurality of eye-pose regions on the display, the second graphic is configured to change the eye pose of the eye.
In an aspect 19, the head mounted display system of any of aspects 11-18, wherein the first graphic or the second graphic comprises a graphical representation of a butterfly.
In aspect 20, the head mounted display system of any of aspects 11-19, wherein the processor is further programmed to: receiving a display mode of graphics in a plurality of eye-pose regions on a display, wherein the display mode comprises at least one of a random mode, a sequential mode, a flight mode, a blinking mode, a wave mode, a story mode, or a combination thereof.
In an aspect 21, the head-mounted display system according to any of aspects 11-20, wherein the first eye-pose region is defined by an eye-pose region that: the eye pose area includes a minimum azimuth deflection, a maximum azimuth deflection, a minimum zenith deflection, and a maximum zenith deflection.
In aspect 22, the head mounted display system according to any one of aspects 11-21, wherein the first graphic is the same as the second graphic.
In an 23 th aspect, a method for generating an iris code is disclosed. The method is under control of a processor and comprises: displaying a graphic along a path connecting a plurality of eye pose regions; obtaining eye images at a plurality of locations along a path; and generating an iris code based at least in part on at least some of the obtained eye images.
In aspect 24, the method of aspect 23, wherein the eye images obtained at the plurality of locations along the path provide a reduction of fragile bits in the iris code.
In a 25th aspect, the method of any one of aspects 23-24, wherein each of the at least some of the obtained eye images used to generate the iris code has a quality metric that passes a quality threshold.
Other aspects of eye image combination
In an aspect 1, a head-mounted display system is disclosed. The head-mounted display system includes: an image capture device configured to capture a plurality of eye images; a processor programmed to: for each eye image of the plurality of eye images: assigning an eye pose region of a plurality of eye pose regions to each eye image; determining a representation of an eye pose in each eye image; fusing the determined set of representations to generate a hybrid eye image; generating an iris code of the mixed eye image; and determining a confidence score associated with the determined iris code, the confidence score based at least in part on the set of determined representations used to generate the hybrid eye image.
In aspect 2, the head mounted display system of aspect 1, wherein, to fuse the determined set of representations, the processor is programmed to select a set of the determined representations that passes an image quality threshold, wherein the image quality threshold corresponds to an image quality for utilization by the biometric application.
In aspect 3, the head mounted display system of any of aspects 1-2, wherein to fuse the determined set of representations, the processor is programmed to utilize a super resolution algorithm.
In aspect 4, the head-mounted display system of any of aspects 1-3, wherein the processor is further programmed to: determining that the confidence score fails a confidence threshold, wherein the confidence threshold corresponds to a particular level of access to an application associated with the head-mounted display system; and causing the application associated with the head-mounted display system to terminate execution.
In aspect 5, the head-mounted display system of any of aspects 1-4, wherein the processor is further programmed to: determining that the confidence score passes a confidence threshold, wherein the confidence threshold corresponds to a particular level of access to an application associated with the head-mounted display system; and causing an application associated with the head-mounted display system to indicate approval.
In aspect 6, the head mounted display system of aspect 5, wherein the processor is further programmed to modify the level indicator displayed on the head mounted display system to indicate approval by the application.
In aspect 7, the head-mounted display system of aspect 5, wherein the processor is further programmed to cause an audio unit of the head-mounted display system to emit sound.
In an 8th aspect, the head mounted display system of aspect 5, wherein the processor is further programmed to cause the display of the head mounted display system to display approval text.
In aspect 9, the head-mounted display system of any of aspects 1-8, wherein the processor is further programmed to: biometric data of the eye is determined using the blended eye image.
In an aspect 10, a head-mounted display system is disclosed. The head-mounted display system includes: an image capture device configured to capture a plurality of eye images; a processor programmed to: for each of the plurality of eye images: assigning an eye pose region of a plurality of eye pose regions to each eye image; determining a representation of an eye pose in each eye image; generating an iris code for each eye image; combining each generated iris code to generate a hybrid iris code; and determining a confidence score associated with the hybrid iris code, the confidence score based at least in part on the determined iris codes used to generate the hybrid iris code.
In an 11th aspect, the head-mounted display system of aspect 10, wherein, to merge each determined iris code, the processor is programmed to compare each determined iris code to a threshold score, wherein the threshold score corresponds to a quality level of the eye image.
In aspect 12, the head-mounted display system of any of aspects 10-11, wherein to merge each determined iris code, the processor is programmed to utilize at least one of a median filter, a Bayesian filter, or any filter configured to merge iris codes.
In aspect 13, the head-mounted display system of any of aspects 10-12, wherein the processor is further programmed to: determining that the confidence score fails a confidence threshold, wherein the confidence threshold corresponds to a particular level of access to an application associated with the head-mounted display system; and causing the application associated with the head-mounted display system to terminate execution.
In aspect 14, the head-mounted display system of any of aspects 10-13, wherein the processor is further programmed to: determining that the confidence score passes a confidence threshold; and causing an application associated with the head-mounted display system to indicate approval.
In a 15th aspect, the head mounted display system of aspect 14, wherein the processor is further programmed to modify a security bar displayed on the head mounted display system.
In aspect 16, the head-mounted display system according to any one of aspects 10-15, wherein the processor is further programmed to cause an audio unit of the head-mounted display system to emit sound.
In an aspect 17, the head mounted display system of any of aspects 10-16, wherein the processor is further programmed to cause a display of the head mounted display system to display the textual indication.
In an 18 th aspect, a method for obtaining an iris code of an eye is disclosed. The method is under control of a processor and comprises: accessing a plurality of eye images; performing (1) an image fusion operation on the plurality of eye images, (2) an iris code fusion operation on the plurality of eye images, or both (1) and (2), wherein the image fusion operation includes: fusing at least some of the plurality of eye images to generate a blended image; generating a hybrid iris code from the hybrid image, and wherein the iris code fusion operation comprises: generating an iris code for at least some of the plurality of eye images; and combining the generated iris codes to generate a hybrid iris code.
In aspect 19, the method of aspect 18, further comprising identifying an eye pose for each of the plurality of eye images.
In aspect 20, the method of any of aspects 18-19, wherein the image fusion operation or iris code fusion operation is performed only on one or more eye images having an image quality metric that passes an image quality threshold.
In aspect 21, the method of any of aspects 18-20, further comprising generating a confidence score for the hybrid iris code.
In aspect 22, the method of any of aspects 18-21, further comprising using the hybrid iris code for biometric applications.
In aspect 23, the method according to any of aspects 18-22, further comprising correcting the fragile bits.
In an aspect 24, a method for processing an eye image is disclosed. The method is under control of a processor and comprises: for each eye image of the first plurality of eye images, assigning an eye pose region of a plurality of eye pose regions to each eye image of the first plurality of eye images; identifying a first eye pose for each eye image of the first plurality of eye images; determining a first digital representation of each identified first eye pose; for each eye pose region of the plurality of eye pose regions, selecting a first non-empty set of the determined first digital representations, wherein each determined first digital representation of the determined first non-empty set of first digital representations passes an image quality threshold; combining the determined first digital representations of the selected first non-empty set of determined first digital representations to generate a first blended image; generating a first iris code of the first mixed image; determining a first confidence score associated with the determined first iris code, the first confidence score based at least in part on a total number of the determined first digital representations in the selected first non-empty set used to generate the first blended image; and using the determined first confidence score for a biometric identification application.
In aspect 25, the method of aspect 24, further comprising: determining that the first confidence score fails a biometric security confidence threshold, wherein the biometric security confidence threshold corresponds to a particular level of access to an application associated with the biometric data; and causing execution of the application to terminate.
In aspect 26, the method according to any one of aspects 24-25, further comprising: obtaining a second plurality of eye images; for each eye image of the second plurality of eye images, assigning an eye pose region of a plurality of eye pose regions to each eye image of the second plurality of eye images; identifying a second eye pose for each eye image of the second plurality of eye images; determining a second digital representation of each identified second eye pose; for each eye pose region of the plurality of eye pose regions, selecting a second non-empty set of the determined second digital representations, wherein each determined second digital representation of the second non-empty set of the determined second digital representations passes an image quality threshold; combining the selected determined second non-empty set of second digital representations to generate a second hybrid eye image; generating a second iris code of the second hybrid image; and determining a second confidence score associated with the determined second iris code, the second confidence score based at least in part on a total number of the determined second digital representations in the selected second non-empty set used to generate the second blended image.
In aspect 27, the method of aspect 26, further comprising: determining that the second confidence score passes a biometric security confidence threshold; and causing the biometric application to indicate an approval indication to a user associated with the first plurality of eye images and the second plurality of eye images.
In aspect 28, the method of aspect 27, wherein the approval indication corresponds to at least one of a rating on a safety bar on a display of a Head Mounted Display (HMD), a sound emitted from an audio unit of the HMD, and a textual indication on the display of the HMD.
In aspect 29, the method of aspect 27, further comprising: a request for access to an application for a financial transaction is transmitted, wherein the request includes a representation of an approval indication.
In aspect 30, the method of aspect 27, wherein the biometric security confidence threshold comprises a level of image quality required to access an account associated with the financial transaction.
In aspect 31, the method of aspect 30, wherein the image quality level required to access the account comprises at least one image quality metric.
In a 32nd aspect, a method for processing an eye image is disclosed. The method is under control of a processor and comprises: for each eye image of a plurality of eye images of an eye, identifying an eye pose of each eye image; determining a digital representation of the recognized eye pose; generating an iris code for each eye image; combining the generated iris codes of each eye image to generate a hybrid iris code; determining a confidence score associated with the hybrid iris code, the confidence score based at least in part on a total number of determined iris codes combined to generate the hybrid iris code; and using the determined confidence score for a biometric identification application.
In aspect 33, the method of aspect 32, further comprising: determining that the confidence score fails a biometric security confidence threshold, wherein the biometric security confidence threshold corresponds to a particular level of access to an application associated with the biometric data; and causing the application associated with the biometric data to terminate execution.
In aspect 34, the method of any of aspects 32-33, further comprising: determining that the confidence score passes a biometric security confidence threshold; and causing the biometric application to indicate an approval indication to a user associated with the plurality of eye images.
In an aspect 35, the method of aspect 34, wherein the approval indication includes at least one of a rating on a safety bar on a display of a Head Mounted Display (HMD), a sound emitted from an audio unit of the HMD, and a textual indication on the display of the HMD.
In aspect 36, the method of any of aspects 32-35, wherein the biometric security confidence threshold corresponds to a level of image quality required to access an account associated with the financial transaction.
In aspect 37, the method according to any one of aspects 32-36, further comprising: sending a request for access to an application for a financial transaction, wherein the request includes a representation of an approval indication.
Other aspects of eye image set selection
In an aspect 1, a head-mounted display system is disclosed. The head-mounted display system includes: an image capture device configured to capture a plurality of eye images of an eye; a processor programmed to: for each eye image of the plurality of eye images: receiving an eye image from the image capture device; determining an image quality metric associated with the eye image; and comparing the determined image quality metric to an image quality threshold to determine whether the eye image passes the image quality threshold, wherein the image quality threshold corresponds to an image quality level used to generate the iris code; selecting an eye image that passes the image quality threshold from the plurality of eye images; and utilizing the selected eye image to generate an iris code.
In aspect 2, the head-mounted display system of aspect 1, wherein to select an eye image that passes the image quality threshold, the processor is programmed to buffer the selected eye image into a buffer of the processor.
In aspect 3, the head-mounted display system of any of aspects 1-2, wherein to select an eye image that passes the image quality threshold, the processor is programmed to utilize a polar coordinate representation of the eye image.
In aspect 4, the head-mounted display system of any of aspects 1-3, wherein the processor is further programmed to: an image fusion operation and an iris fusion operation are performed.
In aspect 5, the head-mounted display system according to aspect 4, wherein the image fusion operation and the iris fusion operation are performed substantially simultaneously or sequentially to verify the consistency of the generated iris code.
In aspect 6, the head mounted display system of aspect 4, wherein the processor is further programmed to: determining an eye pose of the selected eye image; and determining biometric data of the eye using the eye pose of the selected eye image.
In aspect 7, the head-mounted display system of any of aspects 1-6, wherein to receive the eye image from the image capture device, the processor is programmed to receive the eye image from the image capture device during an eye image acquisition routine implemented by the processor.
In aspect 8, the head mounted display system of aspect 7, wherein the eye image acquisition routine obtains the eye image using graphics.
In aspect 9, the head-mounted display system of any of aspects 1-8, wherein to select an eye image that passes an image quality threshold, the processor is programmed to: buffering a first eye image of the plurality of eye images into a buffer; determining that an image quality metric of a second eye image of the plurality of eye images passes the image quality metric of the first eye image; and replacing the first eye image with the second eye image in the buffer, wherein the second eye image corresponds to the selected eye image.
In aspect 10, the head-mounted display system of any of aspects 1-9, wherein the image quality metric corresponds to a blurred image quality associated with a blur of the eye image, wherein the blur of the eye image corresponds to a degree of eye movement in the eye image relative to the reference eye image.
In an 11 th aspect, the head mounted display system of any of aspects 1-10, wherein the image quality metric corresponds to an amount of unobstructed pixels in the eye image.
In aspect 12, the head mounted display system of any of aspects 1-11, wherein the image quality metrics include metrics relating to one or more of: blinking, glare, defocus, resolution, occluded pixels, non-occluded pixels, noise, artifacts, blurring, or a combination thereof.
In aspect 13, the head-mounted display system according to any of aspects 1-12, wherein the image quality metric comprises a weighted combination of the plurality of component quality metrics.
In an aspect 14, a method for processing an eye image is disclosed. The method is under control of a processor and comprises: obtaining a plurality of eye images; for each eye image of the plurality of eye images, determining an image quality metric associated with each eye image; and comparing each determined image quality metric to an image quality threshold to determine whether the eye image passes the image quality threshold, wherein the image quality threshold corresponds to an image quality level used to generate the iris code; selecting a non-empty set of eye images from the plurality of eye images; and using the set of eye images to generate an iris code.
In a 15th aspect, the method of aspect 14, wherein selecting the non-empty set of eye images comprises: identifying a percentage of eye images of the plurality of eye images that pass the image quality threshold to be selected.
In aspect 16, the method of any one of aspects 14-15, wherein selecting a non-empty set of eye images comprises: identifying a total number of eye images to select; and identifying the set of eye images, wherein the determined image quality metric for each eye image is greater than or equal to the determined image quality metric for eye images not in the set.
In aspect 17, the method of any of aspects 14-16, wherein each eye image in the non-empty set of eye images passes an image quality threshold.
In aspect 18, the method according to any one of aspects 14 to 17, further comprising: buffering the non-empty eye image sets and the corresponding determined image quality metrics in a data medium.
In aspect 19, the method of aspect 18, further comprising: obtaining an additional eye image that passes the image quality threshold; selecting an eye image in the non-empty set of eye images buffered in the data medium; determining that an image quality metric of the additional eye image passes the image quality metric of the selected eye image in the non-empty set of eye images buffered in the data medium; and replacing, in the data medium, the selected eye image and its determined image quality metric with the additional eye image and the image quality metric of the additional eye image, respectively.
In aspect 20, the method of any of aspects 14-19, wherein the image quality metric comprises a metric related to one or more of: blinking, glare, defocus, resolution, occluded pixels, non-occluded pixels, noise, artifacts, blurring, or a combination thereof.
Conclusion
Each of the processes, methods, and algorithms described herein and/or depicted in the figures may be embodied in, and fully or partially automated by, code modules executed by one or more physical computing systems, hardware computer processors, dedicated circuitry, and/or electronic hardware configured to execute specific and particular computer instructions. For example, a computing system may include a general purpose computer (e.g., a server) or a special purpose computer, special purpose circuitry, and so forth, which are programmed with specific computer instructions. Code modules may be compiled and linked into executable programs, installed in dynamically linked libraries, or written in interpreted programming languages. In some embodiments, certain operations and methods may be performed by circuitry that is specific to a given function.
Furthermore, certain implementations of the functionality of the present disclosure are sufficiently complex, mathematically, computationally, or technically, that dedicated hardware or one or more physical computing devices (with appropriate dedicated executable instructions) may be required to perform the functionality, e.g., due to the number or complexity of computations involved or to provide the results in substantially real time. For example, video may include many frames, each having millions of pixels, and specially programmed computer hardware needs to process the video data to provide the desired image processing task or application in a commercially reasonable amount of time.
The code modules or any type of data may be stored on any type of non-transitory computer readable medium, such as physical computer memory including hard drives, solid state memory, Random Access Memory (RAM), Read Only Memory (ROM), optical disks, volatile or non-volatile memory, combinations of the same, and/or the like. The methods and modules (or data) may also be transmitted as a generated data signal (e.g., as part of a carrier wave or other analog or digital propagated signal) over a variety of computer-readable transmission media, including wireless-based and wired/cable-based media, and may take many forms (e.g., as part of a single or multiplexed analog signal, or as multiple discrete digital packets or frames). The results of the disclosed processes or process steps may be stored persistently or otherwise in any type of non-transitory, tangible computer memory, or may be transmitted via a computer-readable transmission medium.
Any process, block, state, step, or function in the flowcharts described herein and/or depicted in the figures should be understood as potentially representing a code module, segment, or portion of code, which includes one or more executable instructions for implementing specific functions (e.g., logical or arithmetic) or steps. Processes, blocks, states, steps or functions may be combined, rearranged, added to, deleted, modified or otherwise altered with the illustrative examples provided herein. In some embodiments, additional or different computing systems or code modules may perform some or all of the functions described herein. The methods and processes described herein are also not limited to any particular order, and the blocks, steps, or states associated therewith may be performed in any other appropriate order, such as serially, in parallel, or in some other manner. Tasks or events can be added to or removed from the disclosed example embodiments. Moreover, the separation of various system components in the embodiments described herein is for illustrative purposes, and should not be understood as requiring such separation in all embodiments. It should be understood that the described program components, methods and systems can generally be integrated together in a single computer product or packaged into multiple computer products. Many implementation variations are possible.
The processes, methods, and systems may be implemented in a networked (or distributed) computing environment. Network environments include enterprise-wide computer networks, intranets, Local Area Networks (LANs), Wide Area Networks (WANs), Personal Area Networks (PANs), cloud computing networks, crowd-sourced computing networks, the internet, and the world wide web. The network may be a wired or wireless network or any other type of communication network.
The systems and methods of the present disclosure each have several inventive aspects, no single one of which is fully responsible for or required for the desirable attributes disclosed herein. The various features and processes described herein may be used independently of one another or may be combined in various ways. All possible combinations and sub-combinations are intended to fall within the scope of the present disclosure. Various modifications to the embodiments described in this disclosure may be readily apparent to those skilled in the art, and the generic principles defined herein may be applied to other embodiments without departing from the spirit or scope of the disclosure. Thus, the claims are not intended to be limited to the embodiments shown herein but are to be accorded the widest scope consistent with the present disclosure, the principles and novel features disclosed herein.
Certain features that are described in this specification in the context of separate implementations can also be implemented in combination in a single implementation. Conversely, various features that are described in the context of a single embodiment can also be implemented in multiple embodiments separately or in any suitable subcombination. Furthermore, although features may be described above as acting in certain combinations and even initially claimed as such, one or more features from a claimed combination can in some cases be excised from the combination, and the claimed combination may be directed to a subcombination or variation of a subcombination. No single feature or group of features is essential or essential to each embodiment.
Conditional language, such as "may," "might," "could," "e.g.," and the like, as used herein, is generally intended to convey that certain embodiments include, while other embodiments do not include, certain features, elements, and/or steps, unless specifically stated otherwise or understood otherwise in the context of use. Thus, such conditional language is not generally intended to imply that features, elements, and/or steps are in any way required for one or more embodiments or that one or more embodiments necessarily include logic for deciding, with or without author input or prompting, whether such features, elements, and/or steps are included in or are to be performed in any particular embodiment. The terms "comprising," "including," "having," and the like are synonymous and are used inclusively, in an open-ended fashion, and do not exclude additional elements, features, acts, operations, and the like. Furthermore, the term "or" is used in its inclusive sense (and not its exclusive sense), so that when used, for example, to connect a list of elements, the term "or" means one, some, or all of the elements in the list. In addition, the articles "a," "an," and "the" as used in this application and the appended claims should be construed to mean "one or more" or "at least one" unless specified otherwise.
As used herein, a phrase referring to "at least one of" a list of items refers to any combination of those items, including single members. By way of example, "at least one of A, B, or C" is intended to cover: A; B; C; A and B; A and C; B and C; and A, B, and C. Unless specifically stated otherwise, conjunctive language such as the phrase "at least one of X, Y, and Z" is otherwise understood with the context as used in general to convey that an item, term, etc. may be at least one of X, Y, or Z. Thus, such conjunctive language is not generally intended to imply that certain embodiments require at least one of X, at least one of Y, and at least one of Z to each be present.
Similarly, while operations may be shown in the drawings in a particular order, it should be understood that such operations need not be performed in the particular order shown or in sequential order, or that all illustrated operations need not be performed, to achieve desirable results. Further, the figures may schematically depict one or more example processes in the form of a flow diagram. However, other operations not shown may be incorporated into the exemplary methods and processes illustrated schematically. For example, one or more additional operations may be performed before, after, concurrently with, or between any of the illustrated operations. In addition, in other embodiments, the operations may be rearranged or reordered. In certain situations, multitasking and parallel processing may be advantageous. Moreover, the separation of various system components in the implementations described above should not be understood as requiring such separation in all implementations, and it should be understood that the described program components and systems can generally be integrated together in a single software product or packaged into multiple software products. Additionally, other implementations are within the scope of the following claims. In some cases, the actions recited in the claims can be performed in a different order and still achieve desirable results.

Claims (45)

1. A wearable display system, comprising:
an image capture device configured to capture an eye image from a wearer of the wearable display system;
a non-transitory memory configured to store the eye image;
a display comprising a plurality of eye pose regions; and
a processor in communication with the non-transitory memory and the display, the processor programmed to:
causing an avatar to be displayed on the display in a first eye pose region of the plurality of eye pose regions;
obtaining a first eye image of an eye from the image capture device, wherein the first eye image is associated with the first eye pose region;
determining that a first image quality metric for the first eye image passes a first image quality threshold indicative of a first image quality level required to generate an iris code using the first eye image;
causing the avatar to be displayed in a second eye pose region of the plurality of eye pose regions on the display;
obtaining a second eye image of the eye from the image capture device, wherein the second eye image is associated with the second eye pose region;
determining that a second image quality metric for the second eye image passes a second image quality threshold indicative of a second level of image quality required to generate an iris code using the second eye image;
generating the iris code of the eye based at least in part on the first eye image and the second eye image; and
using, by the wearable display system, the generated iris code for subsequent display of an image on the display or for biometric applications.
2. The wearable display system of claim 1, wherein the processor is further programmed to:
receiving a display mode of the avatar in the plurality of eye pose regions on the display.
3. The wearable display system of claim 1, wherein the processor is further programmed to:
receiving a display mode of the avatar in the first eye pose region on the display.
4. The wearable display system of any of claims 2-3, wherein the display mode comprises at least one of a random mode, a sequential mode, a flight mode, a blinking mode, a waving mode, a story mode, or a combination thereof.
5. The wearable display system of claim 1,
wherein the first image quality threshold corresponds to an image quality level of the first eye pose region, and
wherein the second image quality threshold corresponds to an image quality level of the second eye pose region.
6. The wearable display system of claim 1,
wherein the first image quality threshold corresponds to an image quality level of the first eye image, and
wherein the second image quality threshold corresponds to an image quality level of the second eye image.
7. The wearable display system of claim 1, wherein the processor is further programmed to:
causing the avatar to be displayed in a third eye pose region of the plurality of eye pose regions on the display;
obtaining a third eye image of the eye from the image capture device, wherein the third eye image is associated with the third eye pose region; and
determining that a third image quality metric for the third eye image fails a third image quality threshold.
8. The wearable display system of claim 7, wherein the processor is further programmed to:
obtaining a fourth eye image of the eye from the image capture device, wherein the fourth eye image is associated with the third eye pose region; and
determining that a fourth image quality metric for the fourth eye image passes the third image quality threshold.
9. The wearable display system of claim 8,
wherein, to determine the iris code of the eye, the processor is programmed to determine the iris code based at least in part on the first eye image, the second eye image, and the fourth eye image.
10. The wearable display system of claim 8,
wherein the processor is further programmed to combine the third eye image and the fourth eye image to obtain a mixed eye image for the third eye pose region, and
wherein, to determine the iris code of the eye, the processor is programmed to determine the iris code based at least in part on the first eye image, the second eye image, and the mixed eye image.
11. A head-mounted display system, comprising:
an image capture device configured to capture an eye image;
a display comprising a plurality of eye pose regions;
a processor in communication with the image capture device and the display, the processor programmed to:
causing a first graphic to be displayed in a first eye pose region of the plurality of eye pose regions on the display;
obtaining a first eye image in the first eye pose region from the image capture device;
determining that a first metric for the first eye image passes a first threshold, the first threshold indicating a first level of image quality required to generate an iris code using the first eye image;
causing a second graphic to be displayed in a second eye pose region of the plurality of eye pose regions on the display;
obtaining a second eye image in the second eye pose region from the image capture device; and
determining that a second metric for the second eye image passes a second threshold, the second threshold indicating a second level of image quality required to generate an iris code using the second eye image.
12. The head mounted display system of claim 11, wherein the processor is further programmed to:
determining an iris code based at least in part on the first eye image and the second eye image; and
using, by the head-mounted display system, the determined iris code for displaying an image on the display or for biometric applications.
13. The head mounted display system of claim 11, wherein the first eye image corresponds to a first eye pose of a user of the head mounted display system.
14. The head mounted display system of claim 11, wherein the display is configured to present the first graphic or the second graphic to a user of the head mounted display system on a plurality of depth planes.
15. The head mounted display system of claim 14, wherein the display comprises a plurality of stacked waveguides.
16. The head mounted display system of claim 11, wherein the display is configured to present a light field image of the first graphic or the second graphic to a user of the head mounted display system.
17. The head mounted display system of claim 11,
wherein, when the first graphic is displayed in the first eye pose region of the plurality of eye pose regions on the display, the first graphic directs the eye toward the first eye pose region of the plurality of eye pose regions on the display, and
wherein, when the second graphic is displayed in the second eye pose region of the plurality of eye pose regions on the display, the second graphic directs the eye toward the second eye pose region of the plurality of eye pose regions on the display.
18. The head mounted display system of claim 11,
wherein the second graphic is configured to change an eye pose of the eye when the second graphic is displayed in the second eye pose region of the plurality of eye pose regions on the display.
19. The head mounted display system of claim 11, wherein the first graphic or the second graphic comprises a graphical representation of a butterfly.
20. The head mounted display system of any one of claims 11-19, wherein the processor is further programmed to:
receiving a display mode of the graphic in the plurality of eye pose regions on the display, wherein the display mode comprises at least one of a random mode, a sequential mode, a flight mode, a blinking mode, a waving mode, a story mode, or a combination thereof.
21. The head mounted display system of claim 11, wherein the first eye pose region is defined by a minimum azimuthal deflection, a maximum azimuthal deflection, a minimum zenithal deflection, and a maximum zenithal deflection.
22. The head mounted display system of claim 11, wherein the first graphic is the same as the second graphic.
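Claim 21 characterizes an eye pose region by minimum and maximum azimuthal and zenithal deflections. A minimal sketch of such a region definition is given below; the 3x3 partition of a +/-30 degree field is an assumption made for the example and is not a value taken from the patent.

```python
# Illustrative angular region definition in the spirit of claim 21.
from dataclasses import dataclass
from typing import List, Optional


@dataclass
class AngularRegion:
    min_azimuth: float
    max_azimuth: float
    min_zenith: float
    max_zenith: float

    def contains(self, azimuth: float, zenith: float) -> bool:
        """True when the measured eye pose falls inside this region's bounds."""
        return (self.min_azimuth <= azimuth < self.max_azimuth
                and self.min_zenith <= zenith < self.max_zenith)


def make_grid(extent_deg: float = 30.0, n: int = 3) -> List[AngularRegion]:
    """Partition a +/-extent_deg angular field into an n x n grid of pose regions."""
    step = 2 * extent_deg / n
    regions = []
    for i in range(n):
        for j in range(n):
            regions.append(AngularRegion(
                min_azimuth=-extent_deg + i * step,
                max_azimuth=-extent_deg + (i + 1) * step,
                min_zenith=-extent_deg + j * step,
                max_zenith=-extent_deg + (j + 1) * step,
            ))
    return regions


def region_for_pose(regions: List[AngularRegion],
                    azimuth: float, zenith: float) -> Optional[int]:
    """Return the index of the region containing the measured eye pose, if any."""
    for idx, region in enumerate(regions):
        if region.contains(azimuth, zenith):
            return idx
    return None
```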
23. A method for generating an iris code, comprising:
under control of a processor:
displaying a graphic along a path connecting a plurality of eye pose regions;
obtaining eye images at a plurality of locations along the path; and
generating an iris code based at least in part on at least some of the obtained eye images.
24. The method of claim 23, wherein the eye images obtained at the plurality of locations along the path provide a reduction of fragile bits in the iris code.
25. The method of any of claims 23-24, wherein each of the at least some of the obtained eye images used to generate the iris code has a quality metric that passes a quality threshold.
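Claim 24 states that eye images obtained at multiple locations along the path provide a reduction of fragile bits in the iris code. One plausible, purely illustrative way to realize that effect is to majority-vote each bit across the per-image iris codes and to mask bits that flip between captures; the encode_iris helper below is a placeholder for an iris-encoding pipeline (segmentation, normalization, filtering) that is not shown here.

```python
# Minimal sketch of fragile-bit reduction by combining codes from several captures.
from typing import Callable, List, Tuple

import numpy as np


def combine_iris_codes(
    eye_images: List[np.ndarray],
    encode_iris: Callable[[np.ndarray], np.ndarray],
    stability_threshold: float = 0.75,
) -> Tuple[np.ndarray, np.ndarray]:
    """Return (consensus_code, stability_mask).

    consensus_code[i] is the majority bit value at position i across captures;
    stability_mask[i] is True only when at least `stability_threshold` of the
    captures agree, so fragile (frequently flipping) bits can be excluded.
    """
    codes = np.stack([encode_iris(img).astype(np.uint8) for img in eye_images])
    ones_fraction = codes.mean(axis=0)           # fraction of captures voting 1
    consensus = (ones_fraction >= 0.5).astype(np.uint8)
    agreement = np.maximum(ones_fraction, 1.0 - ones_fraction)
    stable = agreement >= stability_threshold    # bits that rarely flip
    return consensus, stable
```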
26. A head-mounted display system, comprising:
an image capture device configured to capture a plurality of eye images of an eye;
a processor programmed to:
for each eye image of the plurality of eye images:
receiving the eye image from the image capture device;
determining an image quality metric associated with the eye image; and
comparing the determined image quality metric to an image quality threshold to determine whether the eye image passes the image quality threshold, wherein the image quality threshold corresponds to a level of image quality required to generate an iris code;
selecting an eye image from the plurality of eye images that passes the image quality threshold; and
generating an iris code using the selected eye image.
27. The head mounted display system of claim 26, wherein to select the eye image that passes the image quality threshold, the processor is programmed to buffer the selected eye image into a buffer of the processor.
28. The head-mounted display system of claim 26, wherein to select the eye image that passes the image quality threshold, the processor is programmed to utilize a polar coordinate representation of the eye image.
29. The head mounted display system of claim 26, wherein the processor is further programmed to:
performing an image fusion operation and an iris fusion operation.
30. The head-mounted display system of claim 29, wherein the image fusion operation and the iris fusion operation are performed substantially simultaneously or sequentially to verify consistency of the generated iris codes.
31. The head mounted display system of claim 29, wherein the processor is further programmed to:
determining an eye pose of the selected eye image; and
determining biometric data of the eye using the eye pose of the selected eye image.
32. The head-mounted display system of claim 26, wherein to receive the eye image from the image capture device, the processor is programmed to receive the eye image from the image capture device during an eye image acquisition routine implemented by the processor.
33. The head mounted display system of claim 32, wherein the eye image acquisition routine utilizes graphics to obtain the eye image.
34. The head-mounted display system of claim 26, wherein to select the eye image that passes the image quality threshold, the processor is programmed to:
buffering a first eye image of the plurality of eye images into a buffer;
determining that an image quality metric of a second eye image of the plurality of eye images passes the image quality metric of the first eye image; and
replacing the first eye image with the second eye image in the buffer, wherein the second eye image corresponds to the selected eye image.
35. The head mounted display system of claim 26, wherein the image quality metric corresponds to a blurred image quality associated with a blur of the eye image, wherein the blur of the eye image corresponds to a degree of eye movement in the eye image relative to a reference eye image.
36. The head mounted display system of claim 26, wherein the image quality metric corresponds to an amount of unobstructed pixels in the eye image.
37. The head mounted display system of claim 26, wherein the image quality metric comprises metrics related to one or more of: blinking, glare, defocus, resolution, occluded pixels, non-occluded pixels, noise, artifacts, blurring, or a combination thereof.
38. The head mounted display system of claim 26, wherein the image quality metric comprises a weighted combination of a plurality of component quality metrics.
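Claims 37 and 38 describe an image quality metric built from component metrics (blur, occlusion, glare, and so on) combined with weights. The sketch below is one assumed realization; the specific component formulas and the example weights are illustrative choices, not values from the patent.

```python
# Hedged sketch of a weighted combination of component quality metrics (claims 37-38).
from typing import Dict

import numpy as np


def component_metrics(image: np.ndarray, unoccluded_mask: np.ndarray) -> Dict[str, float]:
    """Compute toy component metrics.

    image: grayscale eye image scaled to [0, 1]; unoccluded_mask: True where the
    iris is not occluded by eyelids or eyelashes (an assumed upstream segmentation).
    """
    gy, gx = np.gradient(image.astype(float))
    sharpness = float(np.mean(gx ** 2 + gy ** 2))     # inverse-blur proxy
    unoccluded = float(np.mean(unoccluded_mask))       # fraction of usable pixels
    glare = 1.0 - float(np.mean(image > 0.98))         # penalize saturated glare pixels
    return {"sharpness": sharpness, "unoccluded": unoccluded, "glare": glare}


def composite_quality(metrics: Dict[str, float],
                      weights: Dict[str, float]) -> float:
    """Weighted combination of component quality metrics."""
    return sum(weights[name] * value for name, value in metrics.items())


# Example usage with illustrative weights:
# q = composite_quality(component_metrics(img, mask),
#                       {"sharpness": 0.5, "unoccluded": 0.3, "glare": 0.2})
```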
39. A method of processing an image of an eye, the method comprising:
under control of a processor:
obtaining a plurality of eye images;
for each eye image of the plurality of eye images:
determining an image quality metric associated with each eye image; and
comparing each determined image quality metric to an image quality threshold to determine whether the eye image passes the image quality threshold, wherein the image quality threshold corresponds to a level of image quality required to generate an iris code;
selecting a non-empty set of eye images from the plurality of eye images; and
generating an iris code using the set of eye images.
40. The method of claim 39, wherein selecting the non-empty set of eye images comprises:
identifying, for selection, a percentage of the eye images of the plurality of eye images that pass the image quality threshold.
41. The method of claim 39, wherein selecting the non-empty set of eye images comprises:
identifying a total number of eye images to select; and
identifying the set of eye images, wherein the determined image quality metric for each eye image is greater than or equal to the determined image quality metric for eye images not in the set.
42. The method of claim 39, wherein each eye image in the non-empty set of eye images passes the image quality threshold.
43. The method of claim 39, further comprising:
buffering the non-empty set of eye images and the corresponding determined image quality metrics in a data medium.
44. The method of claim 43, further comprising:
obtaining an additional eye image that passes the image quality threshold;
selecting an eye image of the non-empty set of eye images buffered in the data medium;
determining that an image quality metric for the additional eye image passes the image quality metric for the selected eye image in the non-empty set of eye images buffered in the data medium; and
replacing, in the data medium, the selected eye image and the image quality metric for the selected eye image with the additional eye image and the image quality metric for the additional eye image, respectively.
45. The method of any of claims 39-44, wherein the image quality metric comprises a metric related to one or more of: blinking, glare, defocus, resolution, occluded pixels, non-occluded pixels, noise, artifacts, blurring, or a combination thereof.
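Claims 43 and 44 describe buffering a non-empty set of eye images together with their quality metrics and replacing a buffered image when a better-scoring additional image is obtained. A minimal sketch of that buffering-and-replacement behavior, under assumed class and method names, might look like the following.

```python
# Minimal sketch of a fixed-size "keep the best" eye image buffer (claims 43-44).
import heapq
from typing import List, Tuple

import numpy as np


class EyeImageBuffer:
    """Min-heap keyed on quality, so the worst buffered image sits at the root."""

    def __init__(self, capacity: int, quality_threshold: float):
        self.capacity = capacity
        self.quality_threshold = quality_threshold
        self._heap: List[Tuple[float, int, np.ndarray]] = []
        self._counter = 0  # tie-breaker so heapq never compares image arrays

    def offer(self, image: np.ndarray, quality: float) -> bool:
        """Try to add an image; return True if it was buffered."""
        if quality < self.quality_threshold:
            return False                          # fails the image quality threshold
        entry = (quality, self._counter, image)
        self._counter += 1
        if len(self._heap) < self.capacity:
            heapq.heappush(self._heap, entry)
            return True
        if quality > self._heap[0][0]:            # better than the worst buffered image
            heapq.heapreplace(self._heap, entry)  # replace image and its metric together
            return True
        return False

    def selected_images(self) -> List[np.ndarray]:
        """Buffered images, best first; their metrics are >= those of any image not kept."""
        return [img for _, _, img in sorted(self._heap, reverse=True)]
```

This also illustrates the selection criterion of claim 41, since every image retained in the buffer has a quality metric greater than or equal to that of any image it displaced.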
CN201780018474.0A 2016-01-19 2017-01-17 Eye image acquisition, selection and combination Active CN108780229B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202111528136.6A CN114205574A (en) 2016-01-19 2017-01-17 Eye image acquisition, selection and combination

Applications Claiming Priority (7)

Application Number Priority Date Filing Date Title
US201662280437P 2016-01-19 2016-01-19
US201662280515P 2016-01-19 2016-01-19
US201662280456P 2016-01-19 2016-01-19
US62/280,515 2016-01-19
US62/280,437 2016-01-19
US62/280,456 2016-01-19
PCT/US2017/013796 WO2017127366A1 (en) 2016-01-19 2017-01-17 Eye image collection, selection, and combination

Related Child Applications (1)

Application Number Title Priority Date Filing Date
CN202111528136.6A Division CN114205574A (en) 2016-01-19 2017-01-17 Eye image acquisition, selection and combination

Publications (2)

Publication Number Publication Date
CN108780229A CN108780229A (en) 2018-11-09
CN108780229B true CN108780229B (en) 2021-12-21

Family

ID=59313841

Family Applications (2)

Application Number Title Priority Date Filing Date
CN201780018474.0A Active CN108780229B (en) 2016-01-19 2017-01-17 Eye image acquisition, selection and combination
CN202111528136.6A Pending CN114205574A (en) 2016-01-19 2017-01-17 Eye image acquisition, selection and combination

Family Applications After (1)

Application Number Title Priority Date Filing Date
CN202111528136.6A Pending CN114205574A (en) 2016-01-19 2017-01-17 Eye image acquisition, selection and combination

Country Status (10)

Country Link
US (5) US10831264B2 (en)
EP (1) EP3405829A4 (en)
JP (3) JP6824285B2 (en)
KR (2) KR102567431B1 (en)
CN (2) CN108780229B (en)
AU (2) AU2017208994B2 (en)
CA (1) CA3011637A1 (en)
IL (2) IL272891B2 (en)
NZ (1) NZ744400A (en)
WO (1) WO2017127366A1 (en)

Families Citing this family (31)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR102255351B1 (en) * 2014-09-11 2021-05-24 삼성전자주식회사 Method and apparatus for iris recognition
CN104834852B (en) * 2015-05-04 2018-07-13 惠州Tcl移动通信有限公司 A kind of method and system that mobile terminal is unlocked based on high quality eyeprint image
US10043075B2 (en) * 2015-11-19 2018-08-07 Microsoft Technology Licensing, Llc Eye feature identification
KR102567431B1 (en) 2016-01-19 2023-08-14 매직 립, 인코포레이티드 Eye image collection, selection, and combination
JP7090601B2 (en) 2016-10-05 2022-06-24 マジック リープ, インコーポレイテッド Peripheral test for mixed reality calibration
EP4123425A1 (en) 2017-05-31 2023-01-25 Magic Leap, Inc. Eye tracking calibration techniques
US11280603B2 (en) 2017-08-01 2022-03-22 Otm Technologies Ltd. Optical systems and methods for measuring rotational movement
CN107577344B (en) * 2017-09-01 2020-04-28 广州励丰文化科技股份有限公司 Interactive mode switching control method and system of MR head display equipment
TWI658429B (en) * 2017-09-07 2019-05-01 宏碁股份有限公司 Image combination method and image combination system
WO2019060741A1 (en) 2017-09-21 2019-03-28 Magic Leap, Inc. Augmented reality display with waveguide configured to capture images of eye and/or environment
US10521662B2 (en) 2018-01-12 2019-12-31 Microsoft Technology Licensing, Llc Unguided passive biometric enrollment
JP7390297B2 (en) 2018-01-17 2023-12-01 マジック リープ, インコーポレイテッド Eye rotation center determination, depth plane selection, and rendering camera positioning within the display system
JP7291708B2 (en) 2018-01-17 2023-06-15 マジック リープ, インコーポレイテッド Display system and method for determining alignment between display and user's eye
CN110163460B (en) * 2018-03-30 2023-09-19 腾讯科技(深圳)有限公司 Method and equipment for determining application score
WO2019204765A1 (en) 2018-04-19 2019-10-24 Magic Leap, Inc. Systems and methods for operating a display system based on user perceptibility
EP3827426A4 (en) * 2018-07-24 2022-07-27 Magic Leap, Inc. Display systems and methods for determining registration between a display and eyes of a user
US10984586B2 (en) * 2018-07-27 2021-04-20 Microsoft Technology Licensing, Llc Spatial mapping fusion from diverse sensing sources
US10964111B2 (en) 2018-07-27 2021-03-30 Microsoft Technology Licensing, Llc Controlling content included in a spatial mapping
JP7444861B2 (en) 2018-09-26 2024-03-06 マジック リープ, インコーポレイテッド Diffractive optical element with refractive power
US11580874B1 (en) * 2018-11-08 2023-02-14 Duke University Methods, systems, and computer readable media for automated attention assessment
US11813054B1 (en) 2018-11-08 2023-11-14 Duke University Methods, systems, and computer readable media for conducting an automatic assessment of postural control of a subject
US11846778B2 (en) 2019-03-20 2023-12-19 Magic Leap, Inc. System for providing illumination of the eye
US11181973B2 (en) * 2019-05-09 2021-11-23 Apple Inc. Techniques related to configuring a display device
US11074676B2 (en) * 2019-08-22 2021-07-27 Adobe Inc. Correction of misaligned eyes in images
EP4073689A4 (en) 2019-12-09 2022-12-14 Magic Leap, Inc. Systems and methods for operating a head-mounted display system based on user identity
US11474358B2 (en) 2020-03-20 2022-10-18 Magic Leap, Inc. Systems and methods for retinal imaging and tracking
WO2021247435A1 (en) * 2020-06-05 2021-12-09 Magic Leap, Inc. Enhanced eye tracking techniques based on neural network analysis of images
CN111784806A (en) * 2020-07-03 2020-10-16 珠海金山网络游戏科技有限公司 Virtual character drawing method and device
TW202206030A (en) * 2020-07-14 2022-02-16 美商外科劇院股份有限公司 System and method for four-dimensional angiography
EP4217920A1 (en) * 2020-09-25 2023-08-02 Apple Inc. Automatic selection of biometric based on quality of acquired image
US11656688B2 (en) * 2020-12-03 2023-05-23 Dell Products L.P. System and method for gesture enablement and information provisioning

Citations (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US7486806B2 (en) * 2002-09-13 2009-02-03 Panasonic Corporation Iris encoding method, individual authentication method, iris code registration device, iris authentication device, and iris authentication program

Family Cites Families (99)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5291560A (en) 1991-07-15 1994-03-01 Iri Scan Incorporated Biometric personal identification system based on iris analysis
US6222525B1 (en) 1992-03-05 2001-04-24 Brad A. Armstrong Image controllers with sheet connected sensors
US5670988A (en) 1995-09-05 1997-09-23 Interlink Electronics, Inc. Trigger operated electronic device
KR100244764B1 (en) * 1997-05-23 2000-03-02 전주범 Apparatus for offering virtual reality service using the iris pattern of user and method thereof
JP3315648B2 (en) * 1998-07-17 2002-08-19 沖電気工業株式会社 Iris code generation device and iris recognition system
JP2001167252A (en) 1999-12-10 2001-06-22 Oki Electric Ind Co Ltd Ophthalmic image preparing method, method and device for identifying iris
JP3995181B2 (en) 1999-12-14 2007-10-24 沖電気工業株式会社 Individual identification device
KR100453943B1 (en) * 2001-12-03 2004-10-20 주식회사 세넥스테크놀로지 Iris image processing recognizing method and system for personal identification
CN2513168Y (en) * 2001-12-05 2002-09-25 南宫钟 Idnetify confirming device with iris identification function
EP1600898B1 (en) 2002-02-05 2018-10-17 Panasonic Intellectual Property Management Co., Ltd. Personal authentication method, personal authentication apparatus and image capturing device
JP2004086614A (en) 2002-08-27 2004-03-18 Matsushita Electric Ind Co Ltd Eye image pickup device and eye image verifying device
JP4162503B2 (en) 2003-01-31 2008-10-08 富士通株式会社 Eye state determination device, eye state determination method, and computer program
US20050021980A1 (en) * 2003-06-23 2005-01-27 Yoichi Kanai Access control decision system, access control enforcing system, and security policy
WO2005008590A1 (en) 2003-07-17 2005-01-27 Matsushita Electric Industrial Co.,Ltd. Iris code generation method, individual authentication method, iris code entry device, individual authentication device, and individual certification program
US7756301B2 (en) 2005-01-26 2010-07-13 Honeywell International Inc. Iris recognition system and method
KR20050025927A (en) 2003-09-08 2005-03-14 유웅덕 The pupil detection method and shape descriptor extraction method for a iris recognition, iris feature extraction apparatus and method, and iris recognition system and method using its
US7336806B2 (en) 2004-03-22 2008-02-26 Microsoft Corporation Iris-based biometric identification
USD514570S1 (en) 2004-06-24 2006-02-07 Microsoft Corporation Region of a fingerprint scanning device with an illuminated ring
KR100629550B1 (en) 2004-11-22 2006-09-27 아이리텍 잉크 Multiscale Variable Domain Decomposition Method and System for Iris Identification
US20070081123A1 (en) 2005-10-07 2007-04-12 Lewis Scott W Digital eyewear
US11428937B2 (en) 2005-10-07 2022-08-30 Percept Technologies Enhanced optical and perceptual digital eyewear
US8696113B2 (en) 2005-10-07 2014-04-15 Percept Technologies Inc. Enhanced optical and perceptual digital eyewear
KR100729813B1 (en) 2006-01-20 2007-06-18 (주)자이리스 Photographing appararus for iris authentication, photographing module for iris authentication and terminal having the same
GB0603411D0 (en) * 2006-02-21 2006-03-29 Xvista Ltd Method of processing an image of an eye
EP1991948B1 (en) * 2006-03-03 2010-06-09 Honeywell International Inc. An iris recognition system having image quality metrics
JP4752660B2 (en) 2006-07-28 2011-08-17 沖電気工業株式会社 Personal authentication method and personal authentication device
JP2008052510A (en) * 2006-08-24 2008-03-06 Oki Electric Ind Co Ltd Iris imaging apparatus, iris authentication apparatus, iris imaging method, iris authentication method
GB0625912D0 (en) 2006-12-22 2007-02-07 Bid Instr Ltd Method for visual field testing
US9117119B2 (en) 2007-09-01 2015-08-25 Eyelock, Inc. Mobile identity platform
JP5277365B2 (en) 2008-04-06 2013-08-28 国立大学法人九州工業大学 Personal authentication method and personal authentication device used therefor
US8411910B2 (en) * 2008-04-17 2013-04-02 Biometricore, Inc. Computationally efficient feature extraction and matching iris recognition
US8345936B2 (en) 2008-05-09 2013-01-01 Noblis, Inc. Multispectral iris fusion for enhancement and interoperability
KR101030652B1 (en) * 2008-12-16 2011-04-20 아이리텍 잉크 An Acquisition System and Method of High Quality Eye Images for Iris Recognition
US8472681B2 (en) 2009-06-15 2013-06-25 Honeywell International Inc. Iris and ocular recognition system using trace transforms
US8384774B2 (en) * 2010-02-15 2013-02-26 Eastman Kodak Company Glasses for viewing stereo images
KR101046459B1 (en) 2010-05-13 2011-07-04 아이리텍 잉크 An iris recognition apparatus and a method using multiple iris templates
US8948467B2 (en) * 2010-08-06 2015-02-03 Honeywell International Inc. Ocular and iris processing system and method
US9304319B2 (en) 2010-11-18 2016-04-05 Microsoft Technology Licensing, Llc Automatic focus improvement for augmented reality displays
JP6185844B2 (en) 2010-12-24 2017-08-30 マジック リープ, インコーポレイテッド Ergonomic head-mounted display device and optical system
US10156722B2 (en) 2010-12-24 2018-12-18 Magic Leap, Inc. Methods and systems for displaying stereoscopy with a freeform optical system with addressable focus for virtual and augmented reality
CN103635891B (en) 2011-05-06 2017-10-27 奇跃公司 The world is presented in a large amount of digital remotes simultaneously
KR20140059213A (en) * 2011-08-30 2014-05-15 마이크로소프트 코포레이션 Head mounted display with iris scan profiling
WO2013037050A1 (en) 2011-09-16 2013-03-21 Annidis Health Systems Corp. System and method for assessing retinal functionality and optical stimulator for use therein
EP2760363A4 (en) 2011-09-29 2015-06-24 Magic Leap Inc Tactile glove for human-computer interaction
US9286711B2 (en) * 2011-09-30 2016-03-15 Microsoft Technology Licensing, Llc Representing a location at a previous time period using an augmented reality display
US9345957B2 (en) 2011-09-30 2016-05-24 Microsoft Technology Licensing, Llc Enhancing a sport using an augmented reality display
BR112014010230A8 (en) 2011-10-28 2017-06-20 Magic Leap Inc system and method for augmented and virtual reality
US8929589B2 (en) 2011-11-07 2015-01-06 Eyefluence, Inc. Systems and methods for high-resolution gaze tracking
CA3024054C (en) 2011-11-23 2020-12-29 Magic Leap, Inc. Three dimensional virtual and augmented reality display system
US20130259322A1 (en) 2012-03-31 2013-10-03 Xiao Lin System And Method For Iris Image Analysis
WO2013152205A1 (en) 2012-04-05 2013-10-10 Augmented Vision Inc. Wide-field of view (fov) imaging devices with active foveation capability
US9456744B2 (en) 2012-05-11 2016-10-04 Digilens, Inc. Apparatus for eye tracking
CN104737061B (en) 2012-06-11 2018-01-16 奇跃公司 Use more depth plane three dimensional displays of the waveguided reflector arrays projector
US9671566B2 (en) * 2012-06-11 2017-06-06 Magic Leap, Inc. Planar waveguide apparatus with diffraction element(s) and system employing same
EP2895910A4 (en) 2012-09-11 2016-04-20 Magic Leap Inc Ergonomic head mounted display device and optical system
KR101417433B1 (en) 2012-11-27 2014-07-08 현대자동차주식회사 User identification apparatus using movement of pupil and method thereof
KR20150103723A (en) * 2013-01-03 2015-09-11 메타 컴퍼니 Extramissive spatial imaging digital eye glass for virtual or augmediated vision
US10151875B2 (en) 2013-01-15 2018-12-11 Magic Leap, Inc. Ultra-high resolution scanning fiber display
KR102387314B1 (en) 2013-03-11 2022-04-14 매직 립, 인코포레이티드 System and method for augmented and virtual reality
NZ751602A (en) 2013-03-15 2020-01-31 Magic Leap Inc Display system and method
PL2988655T3 (en) 2013-04-26 2023-08-14 Genentech, Inc. Contour integration perimetry vision test
US20140341441A1 (en) 2013-05-20 2014-11-20 Motorola Mobility Llc Wearable device user authentication
US8958608B2 (en) * 2013-06-04 2015-02-17 Ut-Battelle, Llc Frontal view reconstruction for iris recognition
US10262462B2 (en) 2014-04-18 2019-04-16 Magic Leap, Inc. Systems and methods for augmented and virtual reality
US9874749B2 (en) 2013-11-27 2018-01-23 Magic Leap, Inc. Virtual and augmented reality systems and methods
US9727129B2 (en) 2013-06-28 2017-08-08 Harman International Industries, Incorporated System and method for audio augmented reality
ITTV20130118A1 (en) 2013-07-29 2015-01-30 Sorridi Editore S R L COMPUTERIZED SYSTEM FOR THE DISTRIBUTION OF A MULTI-PLATFORM DIGITAL EDITORIAL PRODUCT AND RELATIVE METHOD.
CN105960193A (en) 2013-09-03 2016-09-21 托比股份公司 Portable eye tracking device
EP3058418B1 (en) 2013-10-16 2023-10-04 Magic Leap, Inc. Virtual or augmented reality headsets having adjustable interpupillary distance
TWI498769B (en) * 2013-11-18 2015-09-01 Quanta Comp Inc Head mounted display apparatus and login method thereof
KR102268462B1 (en) 2013-11-27 2021-06-22 매직 립, 인코포레이티드 Virtual and augmented reality systems and methods
US9857591B2 (en) 2014-05-30 2018-01-02 Magic Leap, Inc. Methods and system for creating focal planes in virtual and augmented reality
US9189686B2 (en) * 2013-12-23 2015-11-17 King Fahd University Of Petroleum And Minerals Apparatus and method for iris image analysis
EP3100098B8 (en) 2014-01-31 2022-10-05 Magic Leap, Inc. Multi-focal display system and method
EP4071537A1 (en) 2014-01-31 2022-10-12 Magic Leap, Inc. Multi-focal display system
US10203762B2 (en) 2014-03-11 2019-02-12 Magic Leap, Inc. Methods and systems for creating virtual and augmented reality
KR102227659B1 (en) * 2014-03-12 2021-03-15 삼성전자주식회사 System and method for displaying vertual image via head mounted device
US20150324568A1 (en) 2014-05-09 2015-11-12 Eyefluence, Inc. Systems and methods for using eye signals with secure mobile communications
USD759657S1 (en) 2014-05-19 2016-06-21 Microsoft Corporation Connector with illumination region
KR102193052B1 (en) 2014-05-30 2020-12-18 매직 립, 인코포레이티드 Methods and systems for generating virtual content display with a virtual or augmented reality apparatus
USD752529S1 (en) 2014-06-09 2016-03-29 Comcast Cable Communications, Llc Electronic housing with illuminated region
US20160034913A1 (en) 2014-07-30 2016-02-04 Hewlett-Packard Development Company, L.P. Selection of a frame for authentication
US9715781B2 (en) 2014-09-26 2017-07-25 Bally Gaming, Inc. System and method for automatic eye tracking calibration
CN104318209B (en) * 2014-10-13 2017-11-03 北京智谷睿拓技术服务有限公司 Iris image acquiring method and equipment
US9672341B2 (en) * 2014-10-30 2017-06-06 Delta ID Inc. Systems and methods for spoof detection in iris based biometric systems
US10019563B2 (en) 2014-12-05 2018-07-10 Sony Corporation Information processing apparatus and information processing method
EP3236338B1 (en) * 2014-12-17 2019-12-18 Sony Corporation Information processing apparatus, information processing method and program
WO2016133540A1 (en) * 2015-02-20 2016-08-25 Hewlett-Packard Development Company, L.P. Eye gaze authentication
US9495590B1 (en) * 2015-04-23 2016-11-15 Global Bionic Optics, Ltd. Extended depth-of-field biometric system
USD758367S1 (en) 2015-05-14 2016-06-07 Magic Leap, Inc. Virtual reality headset
US10474892B2 (en) * 2015-05-26 2019-11-12 Lg Electronics Inc. Mobile terminal and control method therefor
US9888843B2 (en) * 2015-06-03 2018-02-13 Microsoft Technology Licensing, Llc Capacitive sensors for determining eye gaze direction
US9984507B2 (en) 2015-11-19 2018-05-29 Oculus Vr, Llc Eye tracking for mitigating vergence and accommodation conflicts
CN107016270A (en) * 2015-12-01 2017-08-04 由田新技股份有限公司 Dynamic graphic eye movement authentication system and method combining face authentication or hand authentication
KR102567431B1 (en) 2016-01-19 2023-08-14 매직 립, 인코포레이티드 Eye image collection, selection, and combination
USD805734S1 (en) 2016-03-04 2017-12-26 Nike, Inc. Shirt
USD794288S1 (en) 2016-03-11 2017-08-15 Nike, Inc. Shoe with illuminable sole light sequence
US10002311B1 (en) * 2017-02-10 2018-06-19 International Business Machines Corporation Generating an enriched knowledge base from annotated images
US11211775B2 (en) 2019-08-14 2021-12-28 Subcom, Llc Redundancy improvement in semiconductor-based optical communication systems

Patent Citations (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US7486806B2 (en) * 2002-09-13 2009-02-03 Panasonic Corporation Iris encoding method, individual authentication method, iris code registration device, iris authentication device, and iris authentication program

Also Published As

Publication number Publication date
US20170206401A1 (en) 2017-07-20
US20220066553A1 (en) 2022-03-03
IL260603B (en) 2020-07-30
KR102567431B1 (en) 2023-08-14
US20170205875A1 (en) 2017-07-20
JP2021184289A (en) 2021-12-02
US11579694B2 (en) 2023-02-14
AU2022202543A1 (en) 2022-05-12
US11231775B2 (en) 2022-01-25
IL272891B1 (en) 2023-05-01
KR20230003464A (en) 2023-01-05
CN108780229A (en) 2018-11-09
NZ744400A (en) 2019-11-29
JP7405800B2 (en) 2023-12-26
AU2017208994B2 (en) 2022-01-20
KR20180104677A (en) 2018-09-21
US20200097080A1 (en) 2020-03-26
IL272891A (en) 2020-04-30
AU2017208994A1 (en) 2018-08-02
JP2020102259A (en) 2020-07-02
WO2017127366A1 (en) 2017-07-27
US20170206412A1 (en) 2017-07-20
IL272891B2 (en) 2023-09-01
KR102483345B1 (en) 2022-12-29
CA3011637A1 (en) 2017-07-27
US10466778B2 (en) 2019-11-05
JP6824285B2 (en) 2021-02-03
EP3405829A1 (en) 2018-11-28
EP3405829A4 (en) 2019-09-18
US10831264B2 (en) 2020-11-10
CN114205574A (en) 2022-03-18
JP2019502223A (en) 2019-01-24
JP6960494B2 (en) 2021-11-05
US11209898B2 (en) 2021-12-28

Similar Documents

Publication Publication Date Title
CN108780229B (en) Eye image acquisition, selection and combination
JP7252407B2 (en) Blue light regulation for biometric security
JP7436600B2 (en) Personalized neural network for eye tracking
JP7431157B2 (en) Improved pose determination for display devices
KR102433833B1 (en) Eye Imaging with Off-Axis Imager
US11067805B2 (en) Systems and methods for operating a display system based on user perceptibility
US20170237974A1 (en) Multi-depth plane display system with reduced switching between depth planes
CN115398894A (en) Virtual object motion velocity profiles for virtual and augmented reality display systems

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant