NZ736574B2 - Methods for biometric user recognition - Google Patents
Methods for biometric user recognition
- Publication number
- NZ736574B2 (application NZ736574A, NZ73657416A)
- Authority
- NZ
- New Zealand
- Prior art keywords
- user
- processor
- image
- shape
- analyzing
- Prior art date
Classifications
- G06F21/32 — User authentication using biometric data, e.g. fingerprints, iris scans or voiceprints
- G06F21/45 — Structures or tools for the administration of authentication
- G06K2009/00939
- G06K9/00597
- G06K9/00617
- G06K9/4628
- G06K9/6272
- G06T2207/20081 — Training; Learning
- G06T2207/20084 — Artificial neural networks [ANN]
- G06T7/62 — Analysis of geometric attributes of area, perimeter, diameter or volume
Abstract
The migration of important activities, such as financial and health related activities, from the physical world into connected electronic (“virtual”) spaces has the potential to improve human lives. Traditional transaction systems (financial or otherwise) typically require users to physically carry or mentally recall some form of monetary token and in some cases, identification and authentication to partake in business transactions. In the context of augmented reality devices, these steps are redundant and unnecessary. The augmented reality devices may be configured to allow users whose identities have been pre-identified or pre-authenticated to seamlessly perform many types of transactions without requiring the user to perform the onerous procedures described above. A method of identifying a user of an augmented reality (AR) system includes a camera of the AR system capturing an image. The method also includes a processor of the AR system analyzing the image. The method further includes the processor of the AR system identifying a shape based on analyzing the image. Moreover, the method includes the processor of the AR system analyzing the shape. In addition, the method includes the processor of the AR system identifying a general object category based on analyzing the shape. The method also includes the processor of the AR system identifying a narrow object category by comparing the shape with a characteristic based on the general object category. The method further includes the processor of the AR system generating a classification decision based on the narrow object category. The characteristic is from a known potentially confusing mismatched individual.
Description
METHODS FOR BIOMETRIC USER RECOGNITION
Background
The migration of important activities, such as financial and health related
activities, from the physical world into connected electronic (“virtual”) spaces has the
potential to improve human lives. However, this migration of important activities also
provides new opportunities for malfeasance through identity and information theft.
To elaborate, traditional transaction systems (financial or otherwise) typically
require users to physically carry or mentally recall some form of monetary token (e.g., cash,
check, credit card, etc.) and in some cases, identification (e.g., driver’s license, etc.) and
authentication (e.g., signature, pin code, etc.) to partake in business transactions. Consider a
user walking into a department store: to make any kind of purchase, the user typically picks
up the item(s), places the item in a cart, walks over to the register, waits in line for the
cashier, waits for the cashier to scan a number of items, retrieves a credit card, provides
identification, signs the credit card receipt, and stores the receipt for a future return of the
item(s). With traditional transaction systems, these steps, although necessary, are time-consuming
and inefficient. In some cases, these steps discourage or prohibit a user from
making a purchase (e.g., the user does not have the monetary token on their person or the
identification card on their person, etc.). However, in the context of augmented reality
(“AR”) devices, these steps are redundant and unnecessary. In one or more embodiments, the
AR devices may be configured to allow users whose identities have been pre-identified or
pre-authenticated to seamlessly perform many types of transactions (e.g., financial) without
requiring the user to perform the onerous procedures described above.
Accordingly, the devices, methods and systems for recognizing users using
biometric data described and claimed herein can facilitate important electronic transactions
while mitigating the risks (e.g., security) associated with those transactions.
Summary
In one embodiment directed to a user identification system, the system
includes an image recognition network to analyze image data and generate shape data based
on the image data. The system also includes a generalist network to analyze the shape data
and generate general category data based on the shape data. The system further includes a
specialist network to compare the general category data with a characteristic to generate
narrow category data. Moreover, the system includes a classifier layer including a plurality
of nodes to represent a classification decision based on the narrow category data.
In one or more embodiments, the system also includes a back propagation
neural network including a plurality of layers. The back propagation neural network may
also include error suppression and learning elevation.
In one or more embodiments, the system also includes an ASIC encoded with
the image recognition network. The specialist network may include a back propagation
network including a plurality of layers. The system may also include a tuning layer to
modify the general category data based on user eye movements.
In another embodiment directed to a method of identifying a user of a system,
the method includes analyzing image data and generating shape data based on the image data.
The method also includes analyzing the shape data and generating general category data
based on the shape data. The method further includes generating narrow category data by
comparing the general category data with a characteristic. Moreover, the method includes
generating a classification decision based on the narrow category data, wherein the
characteristic is from a known potentially confusing mismatched individual.
In one or more embodiments, the method also includes identifying an error in
a piece of data. The method may also include suppressing the piece of data in which the error
is identified. Analyzing the image data may include scanning a plurality of pixels of the
image data. The image data may correspond to an eye of the user.
In one or more embodiments, the characteristic is from a known potentially
confusing mismatched individual. The characteristic may be selected from the group
consisting of eyebrow shape and eye shape. The method may also include generating a
network of characteristics, where each respective characteristic of the network is associated
with a potentially confusing mismatched individual in a database. The network of
characteristics may be generated when the system is first calibrated for the user.
In one or more embodiments, the method also includes tracking the user’s eye
movements over time. The method may also include modifying the general category data
based on the eye movements of the user before comparing the general category data with the
characteristic. The method may also include modifying the general category data to conform to a
variance resulting from the eye movements of the user.
In still another embodiment directed to a computer program product embodied
in a non-transitory computer readable medium, the computer readable medium having stored
thereon a sequence of instructions which, when executed by a processor, causes the processor
to execute a method for identifying a user of a system, the method includes analyzing image
data and generating shape data based on the image data. The method also includes analyzing
the shape data and generating general category data based on the shape data. The method
further includes generating narrow category data by comparing the general category data with
a characteristic. Moreover, the method includes generating a classification decision based on
the narrow category data.
[0011A] In another embodiment there is provided a method of identifying a user of a
system, comprising: analyzing image data; generating shape data based on the image data;
analyzing the shape data; generating general category data based on the shape data;
generating narrow category data by comparing the general category data with a characteristic;
and generating a classification decision based on the narrow category data, wherein the
characteristic is selected from the group consisting of eyebrow shape and eye shape.
[0011B] In another embodiment there is provided a method of identifying a user of a
system, comprising: analyzing image data; generating shape data based on the image data;
analyzing the shape data; generating general category data based on the shape data;
generating narrow category data by comparing the general category data with a characteristic;
and generating a classification decision based on the narrow category data, further
comprising tracking the user's eye movements over time.
[0011C] In another embodiment there is provided a method of identifying a user of an
augmented reality (AR) system, comprising: a camera of the AR system capturing an image;
a processor of the AR system analyzing the image; the processor of the AR system
identifying a shape based on analyzing the image; the processor of the AR system analyzing
the shape; the processor of the AR system identifying a general object category based on
analyzing the shape; the processor of the AR system identifying a narrow object category by
comparing the shape with a characteristic based on the general object category; and the
processor of the AR system generating a classification decision based on the narrow object
category. The characteristic is from a known potentially confusing mismatched individual.
Brief Description of the Drawings
The drawings illustrate the design and utility of various embodiments of the
invention. It should be noted that the figures are not drawn to scale and that elements of
similar structures or functions are represented by like reference numerals throughout the
figures. In order to better appreciate how to obtain the above-recited and other advantages
and objects of various embodiments of the invention, a more detailed description of the
invention briefly described above will be rendered by reference to specific embodiments
thereof, which are illustrated in the accompanying drawings. Understanding that these
drawings depict only typical embodiments of the invention and are not therefore to be
considered limiting of its scope, the invention will be described and explained with additional
specificity and detail through the use of the accompanying drawings in which:
Figures 1A to 1D and 2A to 2D are schematic views of augmented reality/user
identification systems according to various embodiments;
Figure 3 is a detailed schematic view of an augmented reality/user
identification system according to another embodiment;
Figure 4 is a schematic view of a user wearing an augmented reality/user
identification system according to still another embodiment;
Figure 5 is a schematic view of a user’s eye, including an iris template
according to one embodiment;
Figure 6 is an exemplary image of a user’s retina according to another
embodiment;
Figures 7 and 8 are diagrams depicting neural networks according to two
embodiments;
Figure 9 is a diagram depicting a feature vector according to another
embodiment;
Figures 10 and 11 are flow charts depicting methods for identifying a user
according to two embodiments.
Detailed Description
Various embodiments of the invention are directed to methods, systems, and
articles of manufacture for implementing a biometric user identification system (e.g., for use
with augmented reality systems) in a single embodiment or in multiple embodiments. Other
aspects, features, and advantages of the invention are described in the detailed description,
figures, and claims.
Various embodiments will now be described in detail with reference to the
drawings, which are provided as illustrative examples of the invention so as to enable those
skilled in the art to practice the invention. Notably, the figures and the examples below are
not meant to limit the scope of the invention. Where certain elements of the invention may
be partially or fully implemented using known components (or methods or processes), only
those portions of such known components (or methods or processes) that are necessary for an
understanding of the invention will be described, and the detailed descriptions of other
portions of such known components (or methods or processes) will be omitted so as not to
obscure the invention. Further, various embodiments encompass present and future known
equivalents to the components referred to herein by way of illustration.
Augmented Reality and User Identification Systems
Various embodiments of augmented reality display systems are known. The
user recognition device may be implemented independently of AR systems, but many
embodiments below are described in relation to AR systems for illustrative purposes only.
Disclosed are devices, methods and systems for recognizing users of various
computer systems. In one embodiment, the computer system may be a head-mounted system
configured to facilitate user interaction with various other computer systems (e.g., financial
computer systems). In other embodiments, the computer system may be a stationary device
(e.g., a payment terminal or an ATM) configured to facilitate user financial transactions.
Various embodiments will be described below with respect to user recognition in the context
of user financial transactions utilizing an AR system (e.g., head-mounted), but it should be
appreciated that the embodiments disclosed herein may be used independently of any existing
and/or known AR or financial transaction systems.
For instance, when the user of an AR system attempts to complete a
commercial transaction using the AR system (e.g., purchase an item from an online retailer
using funds from an online checking account), the system must first establish the user’s
identity before proceeding with the commercial transaction. The input for this user identity
determination can be images of the user collected by the AR system over time. An iris
pattern can be used to identify the user. However, user identification is not limited to iris
patterns, and may include other unique attributes or characteristics of users.
The user identification devices and systems described herein utilize one or
more back propagation neural networks to facilitate analysis of user attributes to determine
the identity of a user/wearer. Machine learning methods can efficiently render identification
decisions (e.g., Sam or not Sam) using back propagation neural networks. The neural
networks described herein include additional layers to more accurately (i.e., closer to “the
truth”) and precisely (i.e., more repeatable) render identification decisions while minimizing
computing/processing requirements (e.g., processor cycles and time).
Referring now to Figures 1A-1D, some general AR system component options
are illustrated according to various embodiments. It should be appreciated that although the
embodiments of Figures 1A-1D illustrate head-mounted displays, the same components may
be incorporated in stationary computer systems as well, and Figures 1A-1D should not be
seen as limiting.
As shown in Figure 1A, a head-mounted device user 60 is depicted wearing a
frame 64 structure coupled to a display system 62 positioned in front of the eyes of the user
60. The frame 64 may be permanently or temporarily coupled to one or more user
identification specific sub systems depending on the required level of security. Some
embodiments may be built specifically for user identification applications, and other
embodiments may be general AR systems that are also capable of user identification. In
either case, the following describes possible components of the user identification system or
an AR system used for user identification.
A speaker 66 may be coupled to the frame 64 in the depicted configuration
and positioned adjacent the ear canal of the user 60. In an alternative embodiment, another
speaker (not shown) is positioned adjacent the other ear canal of the user 60 to provide for
stereo/shapeable sound control. In one or more embodiments, the user identification device
may have a display 62 that is operatively coupled, such as by a wired lead or wireless
connectivity, to a local processing and data module 70, which may be mounted in a variety of
configurations, such as fixedly attached to the frame 64, fixedly attached to a helmet or hat 80
as shown in the embodiment depicted in Figure 1B, embedded in headphones, removably
attached to the torso 82 of the user 60 in a backpack-style configuration as shown in the
embodiment of Figure 1C, or removably attached to the hip 84 of the user 60 in a beltcoupling
style configuration as shown in the embodiment of Figure 1D.
The local processing and data module 70 may comprise a power-efficient
processor or controller, as well as digital memory, such as flash memory, both of which may
be utilized to assist in the processing, caching, and storage of data. The data may be captured
from sensors which may be operatively coupled to the frame 64, such as image capture
devices (such as cameras), microphones, inertial measurement units, accelerometers,
compasses, GPS units, radio devices, and/or gyros. Alternatively or additionally, the data
may be acquired and/or processed using the remote processing module 72 and/or remote data
repository 74, possibly for passage to the display 62 after such processing or retrieval. The
local processing and data module 70 may be operatively coupled 76, 78, such as via a wired
or wireless communication links, to the remote processing module 72 and the remote data
repository 74 such that these remote modules 72, 74 are operatively coupled to each other and
available as resources to the local processing and data module 70.
In one embodiment, the remote processing module 72 may comprise one or
more relatively powerful processors or controllers configured to analyze and process data
and/or image information. In one embodiment, the remote data repository 74 may comprise a
relatively large-scale digital data storage facility, which may be available through the internet
or other networking configuration in a “cloud” resource configuration. In one embodiment,
all data is stored and all computation is performed in the local processing and data module,
allowing fully autonomous use from any remote modules.
More pertinent to the current disclosures, user identification devices (or AR
systems having user identification applications) similar to those described in Figures 1A-1D
provide unique access to a user’s eyes. Given that the user identification/AR device interacts
crucially with the user’s eye to allow the user to perceive 3-D virtual content, and in many
embodiments, tracks various biometrics related to the user’s eyes (e.g., iris patterns, eye
vergence, eye motion, patterns of cones and rods, patterns of eye movements, etc.), the
resultant tracked data may be advantageously used in user identification applications. Thus,
this unprecedented access to the user’s eyes naturally lends itself to various user
identification applications.
In one or more embodiments, the augmented reality display system may be
used as a user-worn user identification device or system. Such user identification devices and
systems capture images of a user’s eye and track a user’s eye movements to obtain data for
user identification. Traditionally, user identification devices require a user to remain
stationary because the devices to which the user is temporarily attached are stationary.
Typically, the user is attached to the user identification instrument or device (e.g., face on a
face resting component of the user identification device with head held still, and/or finger in a
fingerprint reading device, etc.) until the device has completed the data acquisition. Thus,
current user identification approaches have a number of limitations.
In addition to restricting user movement during the user identification data
acquisition, the traditional approaches may result in image capture errors, leading to user
identification errors. Further, existing image (e.g., iris or fingerprint) analysis algorithms can
result in user identification errors. For instance, most existing image analysis algorithms are
designed and/or calibrated to balance user identification accuracy and precision with
computer system requirements. Therefore, when a third party shares a sufficient amount of
user characteristics with a user, an existing image analysis algorithm may mistakenly identify
the third party as the user.
In one or more embodiments, a head-worn AR system including a user
identification device similar to the ones shown in Figures 1A-1D may be used to initially and
continuously identify a user before providing access to secure features of the AR system
(described below). In one or more embodiments, an AR display system may be used as a
head-worn user identification system. It should be appreciated that while a number of the
embodiments described below may be implemented in head-worn systems, other
embodiments may be implemented in stationary devices. For illustrative purposes, the
disclosure will mainly focus on head-worn user identification devices and particularly AR
devices, but it should be appreciated that the same principles may be applied to non-head-worn
and non-AR embodiments as well.
In one or more embodiments, the AR display device may be used as a user-worn
user identification device. The user-worn user identification device is typically fitted
for a particular user’s head, and the optical components are aligned to the user’s eyes. These
configuration steps may be used in order to ensure that the user is provided with an optimum
augmented reality experience without causing any physiological side-effects, such as
headaches, nausea, discomfort, etc. Thus, in one or more embodiments, the user-worn user
identification device is configured (both physically and digitally) for each individual user,
and a set of programs may be calibrated specifically for the user. In other scenarios, a loose
fitting AR device may be used comfortably by a variety of users. For example, in some
embodiments, the user worn user identification device knows a distance between the user’s
eyes, a distance between the head worn display and the user’s eyes, and a curvature of the
user’s forehead. All of these measurements may be used to provide a head-worn display
system customized to fit a given user. In other embodiments, such measurements may not be
necessary in order to perform the user identification functions.
For example, referring to Figures 2A-2D, the user identification device may be
customized for each user. The user’s head shape 402 may be taken into account when fitting
the head-mounted user-worn user identification system, in one or more embodiments, as
shown in Figure 2A. Similarly, the eye components 404 (e.g., optics, structure for the optics,
etc.) may be moved or adjusted for the user’s comfort both horizontally and vertically, or
rotated for the user’s comfort, as shown in Figure 2B. In one or more embodiments, as
shown in Figure 2C, a rotation point of the headset with respect to the user’s head may be
adjusted based on the structure of the user’s head. Similarly, the inter-pupillary distance
(IPD) (i.e., the distance between the user’s eyes) may be compensated for, as shown in Figure 2D.
In the context of user-worn user identification devices, the
customization of the head-worn devices for each user is advantageous because a customized
system already has access to a set of measurements about the user’s physical features (e.g.,
eye size, head size, distance between eyes, etc.), and other data that may be used in user
identification.
In addition to the various measurements and calibrations performed on the
user, the user-worn user identification device may be configured to track a set of biometric
data about the user. For example, the system may track eye movements, eye movement
patterns, blinking patterns, eye vergence, fatigue parameters, changes in eye color, changes in
focal distance, and many other parameters, which may be used in providing an optimal
augmented reality experience to the user. In the case of AR devices used for user
identification applications, it should be appreciated that some of the above-mentioned
embodiments may be part of generically-available AR devices, and other features (described
herein) may be incorporated for particular user identification applications.
Referring now to Figure 3, the various components of an example user-worn
user identification display device will be described. It should be appreciated that other
embodiments may have additional components depending on the application (e.g., a
particular user identification procedure) for which the system is used. Nevertheless, Figure 3
provides a basic idea of the various components, and the types of biometric data that may be
collected and stored through the user-worn user identification device or AR device. Figure 3
shows a simplified version of the head-mounted user identification device 62 in the block
diagram to the right for illustrative purposes.
Referring to Figure 3, one embodiment of a suitable user display device 62 is
shown, comprising a display lens 106 which may be mounted to a user’s head or eyes by a
housing or frame 108. The user display device 62 is an AR system that is configured to
perform a variety of functions, including identifying its wearer/user. The display lens 106 may
comprise one or more transparent mirrors positioned by the housing 84 in front of the user’s
eyes 20 and ured to bounce projected light 38 into the eyes 20 and facilitate beam
shaping, while also allowing for transmission of at least some light from the local
environment. In the depicted embodiment, two wide-field-of-view machine vision cameras
16 are coupled to the housing 108 to image the environment around the user; in one
embodiment these cameras 16 are dual capture visible light/infrared light cameras.
The depicted embodiment also comprises a pair of scanned-laser shaped-wavefront
(i.e., for depth) light projector modules 18 with display mirrors and optics
configured to project light 38 into the eyes 20 as shown. The depicted embodiment also
comprises two miniature infrared cameras 24 paired with infrared light sources 26 (such as
light emitting diodes or “LEDs”), which are configured to track the eyes 20 of the user to
support rendering and user input. These infrared cameras 24 are also configured to
continuously and dynamically capture images of the user’s eyes, especially the iris thereof,
which can be utilized in user identification.
The system 62 further features a sensor assembly 39, which may comprise X,
Y, and Z axis accelerometer capability as well as a magnetic compass and X, Y, and Z axis
gyro capability, preferably providing data at a relatively high frequency, such as 200 Hz. An
exemplary sensor assembly 39 is an inertial measurement unit (“IMU”). The depicted system
62 also comprises a head pose processor 36 (“image pose processor”), such as an ASIC
(application specific integrated circuit), FPGA (field programmable gate array), and/or ARM
processor (advanced reduced-instruction-set machine), which may be configured to calculate
real or near-real time user head pose from wide field of view image information output from
the capture devices 16.
Also shown is another processor 32 (“sensor pose processor”) configured to
execute digital and/or analog processing to derive pose from the gyro, compass, and/or
accelerometer data from the sensor assembly 39. The depicted embodiment also features a
GPS (global positioning system) subsystem 37 to assist with pose and positioning. In
addition, the GPS may further provide cloud-based information about the user’s location.
This information may be used for user identification purposes. For example, if the user
identification algorithm can narrow the detected user characteristics to two potential user
identities, a user’s current and historical location data may be used to eliminate one of the
potential user identities.
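As a concrete illustration of this tie-breaking step, the sketch below keeps only the candidate identities whose last known location is plausibly near the current GPS fix. This is an illustrative assumption rather than the patent's implementation; the identity records, coordinates, and the 50 km radius are all hypothetical.

```python
# A sketch of the location tie-breaker: when biometrics narrow the match to
# two identities, recent GPS history can eliminate one of them.
from math import radians, sin, cos, asin, sqrt

def haversine_km(a, b):
    """Great-circle distance between two (lat, lon) points in kilometers."""
    lat1, lon1, lat2, lon2 = map(radians, (*a, *b))
    h = sin((lat2 - lat1) / 2) ** 2 + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2
    return 2 * 6371 * asin(sqrt(h))

def eliminate_by_location(candidates, current_fix, max_km=50):
    """Keep only identities whose last known location is plausibly nearby."""
    return [c for c in candidates
            if haversine_km(c["last_fix"], current_fix) <= max_km]

candidates = [{"name": "Sam", "last_fix": (37.77, -122.42)},       # San Francisco
              {"name": "Samantha", "last_fix": (40.71, -74.01)}]   # New York
print(eliminate_by_location(candidates, (37.80, -122.27)))         # -> Sam only
```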
Finally, the depicted embodiment comprises a rendering engine 34 which may
feature hardware running a software program configured to provide rendering information
local to the user to facilitate operation of the scanners and imaging into the eyes of the user,
for the user’s view of the world. The rendering engine 34 is operatively coupled 94, 100,
102, 104, 105 (i.e., via wired or wireless connectivity) to the image pose processor 36, the
eye tracking cameras 24, the projecting subsystem 18, and the sensor pose processor 32 such
that rendered light is projected using a scanned laser arrangement 18 in a manner similar to a
retinal scanning display. The wavefront of the projected light beam 38 may be bent or
focused to coincide with a desired focal distance of the projected light.
The miniature infrared eye tracking cameras 24 may be utilized to track the
eyes to support rendering and user input (e.g., where the user is looking, at what depth he is
focusing, etc.). As discussed below, eye vergence may be utilized to estimate a depth of a user’s
focus. The GPS 37, and the gyros, compasses and accelerometers in the sensor assembly 39
may be utilized to provide coarse and/or fast pose estimates. The camera 16 images and
sensor pose information, in conjunction with data from an associated cloud computing
resource, may be utilized to map the local world and share user views with a virtual or
augmented reality community and/or user identification system.
While much of the hardware in the display system 62 featured in Figure 3 is
depicted directly coupled to the housing 108 which is adjacent the display 106 and eyes 20 of
the user, the hardware components depicted may be mounted to or housed within other
components, such as a belt-mounted component, as shown, for example, in Figure 1D.
In one embodiment, all of the components of the system 62 featured in Figure
3 are directly coupled to the display housing 108 except for the image pose processor 36,
sensor pose processor 32, and rendering engine 34, and communication between the latter
three and the remaining components of the system 62 may be by wireless communication,
such as ultra-wideband, or wired communication. The depicted housing 108 preferably is
head-mounted and wearable by the user. It may also feature speakers, such as those which
may be inserted into the ears of a user and utilized to provide sound to the user.
Regarding the projection of light 38 into the eyes 20 of the user, in one
embodiment the mini cameras 24 may be utilized to determine the point in space to which the
centers of a user’s eyes 20 are geometrically verged, which, in general, coincides with a
position of focus, or “depth of focus,” of the eyes 20. The focal distance of the projected
images may take on a finite number of depths, or may be infinitely varying to facilitate
projection of 3-D images for viewing by the user. The mini cameras 24 may be utilized for
eye tracking, and therefore may be configured to pick up not only vergence geometry but also
focus location cues to serve as user inputs.
Having described the general components of the AR/user identification
system, additional components and/or features pertinent to user identification will be
discussed below. It should be appreciated that some of the features described below will be
common to user identification devices or most AR systems used for user identification
purposes, while others will require additional components for user identification purposes.
User Identification
The subject augmented reality systems are ideally suited for assisting users
with various types of important transactions, financial and otherwise, because they are very
well suited to identifying, authenticating, localizing, and even determining a gaze of, a user.
Identifying a user from eye-tracking/eye-imaging
The subject AR system 62 generally needs to know where a user’s eyes are
gazing (or “looking”) and where the user’s eyes are focused. Thus in various embodiments, a
head mounted display (“HMD”) component features one or more cameras 24 that are
configured to capture image information pertinent to the user’s eyes 20. In the embodiment
depicted in Figure 4, each eye 20 of the user may have a camera 24 focused on it, along with
three or more LEDs (not shown) with known offset distances to the camera 24, to induce
glints upon the surfaces of the eyes. In one embodiment, the LEDs are directly below the
eyes 20.
The presence of three or more LEDs with known offsets to each camera 24
allows determination of the distance from the camera 24 to each glint point in 3-D space by
triangulation. Using at least 3 glint points and an approximately spherical model of the eye
20, the system 62 can deduce the curvature of the eye 20. With known 3-D offset and
orientation to the eye 20, the system 62 can form exact (e.g., images) or abstract (e.g.,
gradients or other features) templates of the iris or retina for use to identify the user. In other
embodiments, other characteristics of the eye 20, such as the pattern of veins in and over the
eye 20, may also be used (e.g., along with the iris or retinal templates) to identify the user.
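As a rough sketch of how curvature could be recovered once glint points are triangulated into 3-D space, the code below least-squares fits a sphere to a handful of points. The patent does not prescribe a fitting method; the glint coordinates, noise level, and the fitting approach itself are illustrative assumptions.

```python
# A minimal sketch (not the patented method): estimating eye curvature by
# least-squares fitting a sphere to glint points recovered in 3-D space.
import numpy as np

def fit_sphere(points):
    """Fit a sphere to N>=4 3-D points; returns (center, radius).

    Uses the linearization |p|^2 = 2 p.c + (r^2 - |c|^2), solved by
    linear least squares.
    """
    p = np.asarray(points, dtype=float)
    A = np.hstack([2.0 * p, np.ones((len(p), 1))])
    b = np.sum(p * p, axis=1)
    x, *_ = np.linalg.lstsq(A, b, rcond=None)
    center, d = x[:3], x[3]
    radius = np.sqrt(d + center @ center)
    return center, radius

# Hypothetical glint points lying near a sphere of radius ~7.8 mm (a typical
# corneal radius of curvature), perturbed by measurement noise.
rng = np.random.default_rng(0)
true_center = np.array([0.0, 0.0, 30.0])
dirs = rng.normal(size=(6, 3))
dirs /= np.linalg.norm(dirs, axis=1, keepdims=True)
glints = true_center + 7.8 * dirs + rng.normal(scale=0.05, size=(6, 3))

center, radius = fit_sphere(glints)
print(f"estimated corneal radius: {radius:.2f} mm")  # ~7.8 mm
```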
a. Iris image identification. In one embodiment, the pattern of muscle
fibers in the iris of an eye 20 forms a stable unique pattern for each person, including
freckles, furrows and rings. Various iris features may be more readily captured using infrared
or near-infrared imaging compared to visible light imaging. The system 62 can transform the
captured iris images into an identification code 68 in many different ways. The goal is to
extract a sufficiently rich texture from the eye 20. With sufficient degrees of freedom in the
collected data, the system 62 can theoretically identify a unique user among the seven billion
living humans. Since the system 62 includes cameras 24 directed at the eyes 20 of the user
from below or from the side, the system code 68 would not need to be rotationally invariant.
Figure 5 shows an example code 68 from an iris for reference.
For example, using the system camera 24 below the user eye 20 to capture
images and several LEDs to provide 3-D depth information, the system 62 forms a template
code 68, normalized for pupil diameter and its 3-D position. The system 62 can capture a
series of template codes 68 over time from several different views as the user is registering
with the device 62. This series of template codes 68 can be combined to form a single
template code 68 for analysis.
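A simplified, Daugman-style sketch of such an encoding pipeline follows: unwrap the iris annulus into a pupil-normalized polar strip, binarize local filter responses into a bit code, fuse several captures by majority vote, and compare codes by Hamming distance. This is not the patent's actual code 68; the synthetic image, radii, and grid sizes are illustrative assumptions.

```python
import numpy as np

def unwrap_iris(gray, center, r_pupil, r_iris, n_r=8, n_theta=128):
    """Sample the iris annulus on a polar grid normalized to pupil size."""
    cx, cy = center
    thetas = np.linspace(0, 2 * np.pi, n_theta, endpoint=False)
    radii = np.linspace(r_pupil + 1, r_iris - 1, n_r)
    strip = np.empty((n_r, n_theta))
    for i, r in enumerate(radii):
        xs = np.clip((cx + r * np.cos(thetas)).astype(int), 0, gray.shape[1] - 1)
        ys = np.clip((cy + r * np.sin(thetas)).astype(int), 0, gray.shape[0] - 1)
        strip[i] = gray[ys, xs]
    return strip

def iris_code(strip):
    """Binarize the sign of a circular horizontal difference (a crude texture code)."""
    return (np.diff(strip, axis=1, append=strip[:, :1]) > 0).astype(np.uint8)

def fuse_codes(codes):
    """Majority-vote fusion of codes captured over several frames."""
    return (np.mean(codes, axis=0) >= 0.5).astype(np.uint8)

def hamming(a, b):
    return np.mean(a != b)

# Hypothetical enrollment: several noisy captures of the same synthetic eye.
rng = np.random.default_rng(1)
eye = rng.random((240, 320))
codes = []
for _ in range(5):
    noisy = np.clip(eye + rng.normal(scale=0.02, size=eye.shape), 0, 1)
    codes.append(iris_code(unwrap_iris(noisy, (160, 120), 20, 60)))
template = fuse_codes(codes)
probe = iris_code(unwrap_iris(eye, (160, 120), 20, 60))
print(f"Hamming distance to template: {hamming(template, probe):.3f}")  # small
```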
b. Retinal image identification. In another embodiment, the HMD
comprises a projection display driven by a laser scanner steered by a steerable fiber optic
cable. This cable can also be utilized to visualize the interior of the eye and image the retina,
which has a unique pattern of visual receptors (rods and cones) and blood vessels. These also
form a pattern unique to each individual, and can be used to uniquely identify each person.
Figure 6 illustrates an image of the retina, which may be transformed into a
pattern by many conventional methods. For instance, the pattern of dark and light blood
vessels is unique and can be transformed into a “dark-light” code by standard techniques such
as applying gradient operators to the retinal image and counting high-low transitions in a
standardized grid centered at the center of the retina.
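A minimal sketch of that standard technique might look like the following, assuming a retinal image that is already centered and scaled to the standardized grid; the grid size and gradient threshold are illustrative assumptions.

```python
# A rough sketch of the "dark-light" coding: apply a gradient operator, then
# count high/low transitions inside each cell of a standardized grid.
import numpy as np

def dark_light_code(retina, grid=8, thresh=0.1):
    gy, gx = np.gradient(retina.astype(float))
    mag = np.hypot(gx, gy)
    binary = (mag > thresh).astype(np.int8)
    h, w = binary.shape
    code = np.empty((grid, grid), dtype=np.int32)
    for i in range(grid):
        for j in range(grid):
            cell = binary[i * h // grid:(i + 1) * h // grid,
                          j * w // grid:(j + 1) * w // grid]
            # Count 0->1 and 1->0 transitions along rows within the cell.
            code[i, j] = int(np.abs(np.diff(cell, axis=1)).sum())
    return code.ravel()

rng = np.random.default_rng(2)
retina = rng.random((256, 256))           # stand-in for a fundus image
print(dark_light_code(retina)[:8])        # first few code entries
```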
Thus the subject systems 62 may be utilized to identify the user with enhanced
accuracy and precision by comparing user characteristics captured or detected by the system
62 with known baseline user characteristics for an authorized user of the system 62. These
user characteristics may include iris and retinal images as described above.
The user characteristics may also include the curvature/size of the eye 20,
which assists in identifying the user because eyes of different people have similar, but not
exactly the same, curvature and/or size. Utilizing eye curvature and/or size also prevents
spoofing of iris and retinal images with flat duplicates. In one embodiment described above,
the curvature of the user’s eye 20 can be calculated from imaged glints.
The user characteristics may further include temporal information. Temporal
information can be collected while the user is subjected to stress (e.g., an announcement that
their identity is being challenged). Temporal information includes the heart rate, whether the
user’s eyes are producing a water film, whether the eyes verge and focus together, breathing
patterns, blink rate, pulse rate, etc.
Moreover, the user characteristics may include correlated information. For
example, the system 62 can correlate images of the environment with expected eye
movement patterns. The system 62 can also determine whether the user is seeing the same
expected scene that correlates to the location as derived from GPS, Wi-Fi signals and/or maps
of the environment. For example, if the user is supposedly at home (from GPS and Wi-Fi
signals), the system should detect expected, pose-correct scenes inside of the home.
In addition, the user characteristics may include hyperspectral and/or
skin/muscle conductance, which may be used to identify the user (by comparing with known
baseline characteristics). Hyperspectral and/or skin/muscle conductance can also be used to
determine that the user is a living person.
The user characteristics may also include eye movement patterns because the
subject augmented reality system configurations are designed to be worn persistently. Eye
movement patterns can be compared with known baseline characteristics to identify (or help
to identify) the user.
In other embodiments, the system can use a plurality of eye characteristics
(e.g., iris and retinal patterns, eye shape, eye brow shape, eye lash pattern, eye size and
curvature, etc.) to identify the user. By using a plurality of characteristics, such embodiments
can identify users from lower resolution images when compared to systems that identify users
using only a single eye characteristic (e.g., iris pattern).
The input to the user identification system (e.g., the deep biometric
identification neural networks described herein) may be an image of an eye (or another
portion of a user), or a plurality of images of the eye acquired over time (e.g., a video). In
some embodiments, the network acquires more information from a plurality of images of the
same eye compared to a single image. In some embodiments, some or all of the plurality of
images are pre-processed before being analyzed to increase the effective resolution of the
images using stabilized compositing of multiple images over time as is well known to those
versed in the art.
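One well-known way to realize such stabilized compositing (an assumption; the text does not name a registration method) is phase-correlation alignment of each frame to a reference, followed by averaging of the aligned stack:

```python
import numpy as np

def phase_shift(ref, frame):
    """Estimate the integer (dy, dx) translation of `frame` relative to `ref`."""
    F = np.fft.fft2(ref) * np.conj(np.fft.fft2(frame))
    corr = np.fft.ifft2(F / (np.abs(F) + 1e-9)).real
    dy, dx = np.unravel_index(np.argmax(corr), corr.shape)
    h, w = ref.shape
    return (dy if dy <= h // 2 else dy - h), (dx if dx <= w // 2 else dx - w)

def composite(frames):
    ref = frames[0]
    stack = [ref]
    for f in frames[1:]:
        dy, dx = phase_shift(ref, f)
        stack.append(np.roll(f, (dy, dx), axis=(0, 1)))  # undo the drift
    return np.mean(stack, axis=0)

# Synthetic demo: drifting, noisy captures of the same scene.
rng = np.random.default_rng(3)
clean = rng.random((64, 64))
frames = [np.roll(clean, (s, 2 * s), axis=(0, 1)) +
          rng.normal(scale=0.2, size=clean.shape) for s in range(5)]
fused = composite(frames)
print(f"per-frame error: {np.abs(frames[0] - clean).mean():.3f}, "
      f"fused error: {np.abs(fused - clean).mean():.3f}")  # fused is lower
```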
The AR/user identification system can also be used to periodically identify the
user and/or confirm that the system has not been removed from a user’s head.
The above-described AR/user identification system provides an extremely
secure form of user identification. In other words, the system may be utilized to determine
who the user is with relatively high degrees of accuracy and precision. Since the system can
be utilized to know who the user is with an unusually high degree of certainty, and on a
persistent basis (using periodic monitoring), it can be utilized to enable various secure
financial transactions without the need for separate logins.
Various computing paradigms can be utilized to compare captured or detected
user characteristics with known baseline user characteristics for an authorized user to
efficiently identify a user with accuracy and precision while minimizing
computing/processing requirements.
Neural Networks
Figure 7 illustrates a back propagation neural network 200 according to one
embodiment. The network 200 includes a plurality of nodes 202 connected by a plurality of
connectors 204 that represent the output of one node 202, which forms the input for another
node 202. Because the network 200 is a back propagation neural network, the connectors
204 are bidirectional, in that each node 202 can provide input to the nodes 202 in the layers
on top of and below that node 202.
The network 200 includes six layers starting with first layer 206a and passing
through (“rising up to”) sixth (“classifier”) layer 206f. The network 200 is configured to
derive a classification (e.g., Sam/not Sam) decision based on detected user characteristics. In
some embodiments, the classification decision is an identification decision. The first layer 206a is
configured to scan the pixels of the captured image 212 (e.g., the image of the user’s eye and
particularly the user’s iris). The information from the first layer 206a is processed by the
nodes 202 therein and passed onto the nodes 202 in the second layer 206b.
The nodes 202 in the second layer 206b process the information from the first
layer 206a, including error checking. If the second layer 206b detects errors in the
information from first layer 206a, the erroneous information is suppressed in the second layer
206b. If the second layer 206b confirms the information from the first layer 206a, the
confirmed information is elevated/strengthened (e.g., weighted more heavily for the next
layer). This error suppressing/information elevating process is repeated between the second
and third layers 206b, 206c. The first three layers 206a, 206b, 206c form an image
processing subnetwork 208, which is configured to recognize/identify basic shapes found in
the world (e.g., a triangle, an edge, a flat surface, etc.). In some embodiments, the image
processing subnetwork 208 is fixed code that can be burned onto an application-specific
integrated circuit (“ASIC”).
The network 200 also includes fourth and fifth layers 206d, 206e, which are
configured to receive information from the first three layers 206a, 206b, 206c and from each
other. The fourth and fifth layers 206d, 206e form a generalist subnetwork 210, which is
configured to identify objects in the world (e.g., a flower, a face, an apple, etc.) The error
suppressing/information elevating process described above with respect to the image
processing subnetwork 208 is repeated within the generalist subnetwork 210 and between the
image processing and generalist subnetworks 208, 210.
The image processing and generalist subnetworks 208, 210 together form a
nonlinear, logistic regression network with error suppression/learning elevation and back
propagation that is configured to scan pixels of captured user images 212 and output at the
classifier layer 206f a classification decision. The classifier layer 206f includes two nodes:
(1) a positive/identified node 202a (e.g., Sam); and (2) a negative/unidentified node 202b
(e.g., not Sam).
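As a toy stand-in for the Figure 7 topology (not the patented network), the sketch below stacks a shape-level “image processing” block, a “generalist” block, and a two-node classifier, trained by back propagation. The layer widths, synthetic training data, and the omission of the error suppression/elevation mechanism are all simplifying assumptions.

```python
import torch
import torch.nn as nn

class IdentityNet(nn.Module):
    def __init__(self, n_pixels=64 * 64):
        super().__init__()
        # Layers 206a-206c: fixed, shape-level features (could be burned to an ASIC).
        self.image_processing = nn.Sequential(
            nn.Linear(n_pixels, 256), nn.ReLU(),
            nn.Linear(256, 128), nn.ReLU(),
            nn.Linear(128, 64), nn.ReLU(),
        )
        # Layers 206d-206e: generalist, object-level features.
        self.generalist = nn.Sequential(
            nn.Linear(64, 32), nn.ReLU(),
            nn.Linear(32, 16), nn.ReLU(),
        )
        # Layer 206f: two output nodes, e.g. "Sam" vs. "not Sam".
        self.classifier = nn.Linear(16, 2)

    def forward(self, x):
        return self.classifier(self.generalist(self.image_processing(x)))

net = IdentityNet()
optimizer = torch.optim.SGD(net.parameters(), lr=0.1)
loss_fn = nn.CrossEntropyLoss()

# One hypothetical training step on synthetic "iris images".
images = torch.rand(8, 64 * 64)
labels = torch.randint(0, 2, (8,))           # 1 = authorized user, 0 = other
loss = loss_fn(net(images), labels)
optimizer.zero_grad()
loss.backward()                              # back propagation
optimizer.step()
```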
Figure 8 depicts a neural network 200 according to another embodiment. The
neural network 200 depicted in Figure 8 is similar to the one depicted in Figure 7, except that
two additional layers are added between the generalist subnetwork 210 and the classifier
layer 206f. In the network 200 depicted in Figure 8, the information from the fifth layer 206e
is passed onto a sixth (“tuning”) layer 206g. The tuning layer 206g is configured to modify
the image 212 data to take into account the variance caused by the user’s distinctive eye
movements. The tuning layer 206g tracks the user’s eye movement over time and modifies
the image 212 data to remove artifacts caused by those movements.
Figure 8 also depicts a seventh (“specialist”) layer 206h disposed between the
tuning layer 206g and the classifier layer 206f. The specialist layer 206h may be a small back
propagation specialist network comprising several layers. The specialist layer 206h is
configured to compare the user’s image 212 data against data derived from other similar
images from a database of images (for instance, located on a cloud). The specialist layer
206h is further configured to identify all known images that the image recognition and
generalist networks 208, 210, and the tuning layer 206g may confuse with the image 212 data
from the user. In the case of iris recognition, for example, there may be 20,000 irises out of
the 7 billion people in the world that may be confused with the iris of any particular user.
The specialist layer 206h includes a node 202 for each potentially confusing
image that is configured to distinguish the user image 212 data from the respective potentially
confusing image. For instance, the specialist layer 206h may include a node 202c configured
to distinguish Sam’s iris from Tom’s iris, and a node 202d configured to distinguish Sam’s
iris from Anne’s iris. The specialist layer 206h may utilize other characteristics, such as
eyebrow shape and eye shape, to distinguish the user from the potentially confusing other
images. Each node 202 in the specialist layer 206h may include only around 10 extra
operations due to the highly specialized nature of the function performed by each node 202.
The output from the specialist layer or network 206h is passed on to the classifier layer 206f.
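A hedged sketch of this specialist idea follows: one tiny logistic node per known look-alike, each trained only to separate the enrolled user from that single individual, with the “Sam” verdict surviving only if every node agrees. The feature vectors, names, and training scheme here are hypothetical.

```python
import numpy as np

class SpecialistLayer:
    def __init__(self):
        self.nodes = {}   # name of confusable individual -> (weights, bias)

    def add_node(self, name, user_feats, confuser_feats, lr=0.1, epochs=200):
        """Train a logistic unit separating the user (1) from one look-alike (0)."""
        X = np.vstack([user_feats, confuser_feats])
        y = np.array([1.0] * len(user_feats) + [0.0] * len(confuser_feats))
        w, b = np.zeros(X.shape[1]), 0.0
        for _ in range(epochs):
            p = 1.0 / (1.0 + np.exp(-(X @ w + b)))
            w -= lr * (X.T @ (p - y)) / len(y)
            b -= lr * np.mean(p - y)
        self.nodes[name] = (w, b)

    def is_user(self, feats):
        """All per-confuser nodes must agree the sample is the enrolled user."""
        return all(1.0 / (1.0 + np.exp(-(feats @ w + b))) > 0.5
                   for w, b in self.nodes.values())

rng = np.random.default_rng(4)
sam = rng.normal(0.0, 0.3, size=(20, 8))           # enrolled user's features
layer = SpecialistLayer()
for name, offset in [("Tom", 1.5), ("Anne", -1.5)]:
    layer.add_node(name, sam, rng.normal(offset, 0.3, size=(20, 8)))
print(layer.is_user(sam[0]))                       # True for Sam-like input
```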
Figure 9 depicts a single feature vector, which may be thousands of nodes
long. In some embodiments, every node 202 in a neural network 200, for instance those
depicted in Figures 7 and 8, may report to a node 202 in the feature vector.
While the networks 200 illustrated in Figures 7, 8 and 9 depict information
flowing only between adjacent layers 206, most networks 200 include communication
between all layers (these communications have been omitted from Figures 7, 8 and 9 for
clarity). The networks 200 depicted in Figures 7, 8 and 9 form deep belief or convolutional
neural networks with nodes 202 having deep connectivity to different layers 206. Using back
propagation, weaker nodes are set to a zero value and learned connectivity patterns are passed
up in the network 200. While the networks 200 illustrated in Figures 7, 8 and 9 have specific
numbers of layers 206 and nodes 202, networks 200 according to other embodiments include
different (fewer or more) numbers of layers 206 and nodes 202.
Having described several embodiments of neural networks 200, a method 300
of making a classification decision (Sam/not Sam) using iris image information and the
above-described neural networks 200 will now be discussed. As shown in Figure 10, the
classification method 300 begins at step 302 with the image recognition subnetwork 208
analyzing the user’s iris image 212 data to determine which basic shapes are in that image 212
data. At step 304, the generalist subnetwork 210 analyzes the shape data from the image
recognition subnetwork 208 to determine a category for the iris image 212 data. In some
embodiments, the “category” can be “Sam” or “not Sam.” In such embodiments, this
categorization may sufficiently identify the user.
In other embodiments, an example of which is depicted in Figure 11, the
“category” can be a plurality of potential user identities including “Sam.” Steps 302 and 304
in Figure 11 are identical to those in Figure 10. At step 306, the tuning layer 206g modifies
the image shape and category data to remove artifacts caused by the user’s eye movements.
Processing the data with the tuning layer 206g renders the data resilient to imperfect images
212 of a user’s eye, for instance distortions caused by extreme angles. At step 308, the
specialist layer/subnetwork 206h optionally builds itself by adding nodes 202 configured to
distinguish the user’s iris from every known potentially confusing iris in one or more
databases, with a unique node for each unique potentially confusing iris. In some
embodiments, step 308 may be performed when the AR/user identification system is first
calibrated for its authorized user and after the user’s identity is established using other (e.g.,
more traditional) methods. At step 310, the specialist layer/subnetwork 206h runs the
“category” data from the generalist subnetwork 210 and the tuning layer 206g through each
node 202 in the specialist layer/subnetwork 206h to reduce the population in the “category”
until only “Sam” or “not Sam” remain.
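The flow of steps 302 through 310 can be summarized schematically as follows; every stage here is a trivial stand-in meant only to show how the stages chain and how the specialist nodes whittle the candidate set down to “Sam” or “not Sam”.

```python
def image_recognition(image):
    return {"shapes": image}                 # step 302: basic shape data

def generalist(shape_data):
    return {"Sam", "Tom", "Anne"}            # step 304: candidate identities

def tuning(candidates, image):
    return candidates                        # step 306: artifact removal (no-op here)

def make_specialist_node(confuser, votes_for_user):
    # step 308: one node per potentially confusing identity
    def node(candidates):
        return candidates - {confuser} if votes_for_user else candidates - {"Sam"}
    return node

def identify_user(image, specialist_nodes):
    candidates = tuning(generalist(image_recognition(image)), image)
    for node in specialist_nodes:            # step 310: pairwise elimination
        candidates = node(candidates)
    return "Sam" if candidates == {"Sam"} else "not Sam"

nodes = [make_specialist_node("Tom", True), make_specialist_node("Anne", True)]
print(identify_user("iris-image", nodes))    # -> Sam
```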
The above-described neural networks 200 and user identification methods 300
provide more accurate and precise user identification from user characteristics while
minimizing computing/processing requirements.
Secure Financial Transactions
As discussed above, passwords or sign up/login/authentication codes may be
eliminated from individual secure transactions using the AR/user identification systems and
methods described above. The subject system can pre-identify/pre-authenticate a user with a
very high degree of certainty. Further, the system can maintain the identification of the user
over time using periodic monitoring. Therefore, the identified user can have instant access to
any site after a notice (that can be displayed as an overlaid user interface item to the user)
about the terms of that site. In one embodiment the system may create a set of standard terms
predetermined by the user, so that the user instantly knows the conditions on that site. If a
site does not adhere to this set of conditions (e.g., the standard terms), then the subject system
may not automatically allow access or transactions therein.
For example, the above-described AR/user identification systems can be used
to facilitate “micro-transactions.” Micro-transactions generate very small debits and
credits to the user’s financial account, typically on the order of a few cents or less than a cent.
On a given site, the subject system may be configured to see that the user not only viewed or
used some content but for how long (a quick browse might be free, but over a certain amount
would be a charge). In various embodiments, a news article may cost 1/3 of a cent; a book
may be charged at a penny a page; music at 10 cents a listen, and so on. In another
embodiment, an advertiser may pay a user half a cent for selecting a banner ad or taking a
survey. The system may be configured to apportion a small percentage of the transaction fee
to the service provider.
In one embodiment, the system may be utilized to create a micro-transaction
account, controllable by the user, in which funds related to micro-transactions are
aggregated and distributed in predetermined meaningful amounts to/from the user’s more
traditional financial account (e.g., an online banking account). The transaction account
may be cleared or funded at regular intervals (e.g., quarterly) or in response to certain triggers
(e.g., when the user exceeds several dollars spent at a particular site).
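A minimal sketch of such a trigger-settled micro-transaction account follows, with an assumed settlement threshold standing in for the "several dollars" trigger:

```python
# Illustrative trigger-settled micro-transaction account. Micro-debits and
# credits accumulate locally; crossing the assumed threshold triggers a
# settlement with the user's traditional account.
class MicroTransactionAccount:
    def __init__(self, settle_threshold_cents: float = 300.0):
        self.balance_cents = 0.0
        self.settle_threshold_cents = settle_threshold_cents

    def record(self, amount_cents: float) -> None:
        # Positive amounts are credits to the user; negative are micro-debits.
        self.balance_cents += amount_cents
        if abs(self.balance_cents) >= self.settle_threshold_cents:
            self.settle()

    def settle(self) -> None:
        # Placeholder for a transfer to/from the user's traditional account;
        # the same method could also run on a schedule (e.g., quarterly).
        print(f"settling {self.balance_cents:.2f} cents to linked account")
        self.balance_cents = 0.0
```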
Since the subject system and functionality may be provided by a company
focused on augmented reality, and since the user’s identity is known with a high degree of
certainty and security, the user may be provided with instant access to their accounts, a 3-D
view of amounts, spending, rate of spending, and a graphical and/or geographical map of that spending. Such
users may be allowed to instantly adjust spending access, including turning spending (e.g.,
micro-transactions) off and on.
In another embodiment, parents may have similar access to their children’s
accounts. Parents can set policies to allow no more than a certain amount of spending, or a certain
percentage for a certain category, and the like.
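As a rough illustration, a parental spending policy of this kind could be modeled as an overall cap plus per-category share limits; the field names and default values below are assumptions, not part of the disclosure.

```python
# Illustrative parental spending policy: an overall cap plus per-category
# share limits. Field names and default values are assumptions.
from dataclasses import dataclass, field

@dataclass
class ParentalPolicy:
    cap_cents: float = 2000.0  # no more than this total spending
    category_share_limits: dict = field(default_factory=lambda: {"games": 0.25})

    def allows(self, spend_cents: float, category: str, spent: dict) -> bool:
        total_after = sum(spent.values()) + spend_cents
        if total_after > self.cap_cents:
            return False  # exceeds the overall allowance
        limit = self.category_share_limits.get(category)
        if limit is None or total_after == 0:
            return True
        category_after = spent.get(category, 0.0) + spend_cents
        return category_after / total_after <= limit
```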
For macro-spending (e.g., amounts in dollars, not pennies or fractions of
cents), various embodiments may be facilitated with the subject system configurations.
The user may use the system to order perishable goods for delivery to their
tracked location or to a user-selected map location. The system can also notify the user when
deliveries arrive (e.g., by displaying video of a delivery being made in the AR system). With
AR telepresence, a user can be physically located in an office away from their house, but
admit a delivery person into their house, appear to the delivery person by avatar telepresence,
watch the delivery person as they deliver the product, then make sure the delivery person
leaves, and lock the door to their house by avatar telepresence.
Optionally, the system may store user product preferences and alert the user to
sales or other promotions related to the user’s preferred products. For these macro-spending
embodiments, the user can see their account summary, all the statistics of their account and
buying patterns, thereby facilitating comparison shopping before placing their order.
Since the system may be utilized to track the eye, it can also enable “one
glance” shopping. For instance, a user may look at an object (say a robe in a hotel) and say,
“I want that, when my account goes back over $3,000.” The system would execute the
purchase when specific conditions (e.g., account balance greater than $3,000) are achieved.
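A minimal sketch of such condition-deferred purchasing appears below; the pending-order queue and the balance predicate are illustrative assumptions about how a "one glance" order might be held and re-evaluated.

```python
# Illustrative condition-deferred "one glance" purchase: a gazed-at item is
# queued with a predicate over the account state and bought only when the
# predicate becomes true. The queue and predicate form are assumptions.
pending_orders = []

def queue_glance_purchase(item: str, price: float, condition) -> None:
    # condition: a predicate over the account balance, e.g. balance > 3000.
    pending_orders.append((item, price, condition))

def on_account_update(balance: float) -> None:
    # Re-evaluate queued purchases whenever the account state changes.
    still_pending = []
    for item, price, condition in pending_orders:
        if condition(balance):
            print(f"executing purchase of {item} for ${price:.2f}")
        else:
            still_pending.append((item, price, condition))
    pending_orders[:] = still_pending

queue_glance_purchase("hotel robe", 89.00, lambda balance: balance > 3000)
on_account_update(2500.00)  # condition not met; the order stays queued
on_account_update(3100.00)  # account back over $3,000; purchase executes
```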
The system/service provider can offer alternatives to established currency systems,
similar to BITCOIN or an equivalent alternative currency system, indexed to the very reliable
identification of each person using the subject technology. Accurate and precise
identification of users reduces the opportunities for crime related to alternative currency systems.
Secure Communications
In one embodiment, iris and/or retinal signature data may be used to secure
communications. In such an embodiment, the subject system may be configured to allow
text, image, and other content to be transmittable selectively to and displayable only on
trusted secure hardware devices, which allow access only when the user can be authenticated
based on one or more dynamically measured iris and/or retinal signatures. Since the AR
system display device projects directly onto the user’s retina, only the intended recipient
(identified by iris and/or retinal signature) may be able to view the protected content; and
further, because the viewing device actively monitors the user’s eye, the dynamically read iris
and/or retinal signatures may be recorded as proof that the content was in fact presented to
the user’s eyes (e.g., as a form of digital receipt, possibly accompanied by a verification
action such as executing a requested sequence of eye movements).
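The gating logic described above might be sketched as follows, assuming an enrolled signature to compare against; the matching function, receipt fields, and `render_to_retina` placeholder are illustrative stand-ins for the dynamic iris/retinal measurement.

```python
# Illustrative gating of protected content on a live biometric match, with a
# presentation receipt. The matching rule and receipt fields are assumptions.
import hashlib
import hmac
import time
from typing import Optional

def signatures_match(live_signature: bytes, enrolled_signature: bytes) -> bool:
    # Stand-in for a real dynamic iris/retinal comparison (constant-time).
    return hmac.compare_digest(live_signature, enrolled_signature)

def render_to_retina(content: bytes) -> None:
    # Placeholder for the AR display projecting directly onto the retina.
    print(f"projecting {len(content)} bytes onto the user's retina")

def present_protected_content(content: bytes, live_sig: bytes,
                              enrolled_sig: bytes) -> Optional[dict]:
    if not signatures_match(live_sig, enrolled_sig):
        return None  # never rendered for anyone but the intended recipient
    render_to_retina(content)
    # Record proof that the content was in fact presented to the user's eyes.
    return {"receipt": hashlib.sha256(content + live_sig).hexdigest(),
            "timestamp": time.time()}
```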
Spoof detection may rule out attempts to use previous recordings of retinal
images, static or 2D retinal images, generated images, etc., based on models of the natural
variation expected. A unique fiducial/watermark may be generated and projected onto the
retinas to generate a unique retinal signature for auditing.
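A minimal sketch of such a per-session fiducial follows, assuming a hash-based derivation that the disclosure does not actually specify:

```python
# Illustrative per-session fiducial/watermark folded into the retinal
# signature for auditing. The hash-based derivation is an assumption; the
# disclosure does not specify how the watermark is generated or combined.
import hashlib
import os

def new_fiducial() -> bytes:
    return os.urandom(16)  # unique per presentation

def watermarked_signature(retinal_signature: bytes, fiducial: bytes) -> str:
    # A pre-recorded or static retinal image cannot reproduce this value,
    # since it never observed the session's projected fiducial.
    return hashlib.sha256(retinal_signature + fiducial).hexdigest()
```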
The described financial and communication systems are presented as
examples of various common systems that can benefit from more accurate and precise user
identification. Accordingly, use of the AR/user identification systems described herein is not
limited to the disclosed financial and communication systems, but rather is applicable to any
system that requires user identification.
Various exemplary embodiments of the invention are described herein.
Reference is made to these examples in a non-limiting sense. They are provided to illustrate
more broadly applicable aspects of the invention. Various changes may be made to the
invention described and equivalents may be substituted without departing from the true spirit
and scope of the invention. In addition, many modifications may be made to adapt a
particular situation, material, composition of matter, process, process act(s) or step(s) to the
objective(s), spirit or scope of the invention. Further, as will be appreciated by those with
skill in the art, each of the individual variations described and illustrated herein has
discrete components and features which may be readily separated from or combined with the
features of any of the other several embodiments without departing from the scope or spirit of
the invention. All such modifications are intended to be within the scope of claims associated
with this disclosure.
The invention includes methods that may be performed using the subject
devices. The methods may comprise the act of providing such a suitable device. Such
provision may be performed by the end user. In other words, the “providing” act merely
requires the end user obtain, access, approach, position, set-up, activate, power-up or
otherwise act to provide the requisite device in the subject method. Methods recited herein
may be carried out in any order of the recited events which is logically possible, as well as in
the recited order of events.
Exemplary embodiments of the invention, together with details regarding
material selection and manufacture, have been set forth above. As for other aspects of the
invention, these may be appreciated in connection with the above-referenced patents and
publications as well as generally known or appreciated by those with skill in the art. The
same may hold true with respect to method-based embodiments of the invention in terms of
additional acts as commonly or logically employed.
In addition, though the invention has been described in reference to several
examples optionally incorporating various features, the invention is not to be limited to that
which is described or indicated as contemplated with respect to each variation of the
invention. Various changes may be made to the invention described and equivalents
(whether recited herein or not included for the sake of some brevity) may be substituted
without departing from the true spirit and scope of the invention. In addition, where a range
of values is provided, it is understood that every intervening value, between the upper and
lower limit of that range and any other stated or intervening value in that stated range, is
encompassed within the invention.
Also, it is contemplated that any optional feature of the inventive variations
described may be set forth and claimed independently, or in combination with any one or
more of the features described herein. Reference to a singular item includes the possibility
that there are plural of the same items present. More specifically, as used herein and in
claims associated hereto, the singular forms “a,” “an,” “said,” and “the” include plural
referents unless specifically stated otherwise. In other words, use of the articles allows for
“at least one” of the subject item in the description above as well as claims associated with
this disclosure. It is further noted that such claims may be drafted to exclude any optional
element. As such, this statement is intended to serve as antecedent basis for use of such
exclusive terminology as “solely,” “only” and the like in connection with the recitation of
claim elements, or use of a “negative” limitation.
Without the use of such exclusive terminology, the term “comprising” in
claims associated with this disclosure shall allow for the inclusion of any additional element--
irrespective of whether a given number of elements are enumerated in such claims, or the
addition of a feature could be regarded as transforming the nature of an element set forth in
such claims. Except as specifically defined herein, all technical and scientific terms used
herein are to be given as broad a commonly understood meaning as possible while
maintaining claim validity.
The breadth of the invention is not to be limited to the examples provided
and/or the subject specification, but rather only by the scope of claim language associated
with this disclosure.
In the foregoing specification, the invention has been described with reference
to specific embodiments thereof. It will, however, be evident that various modifications and
changes may be made thereto without departing from the broader spirit and scope of the
invention. For example, the above-described process flows are described with reference to a
particular ordering of process actions. However, the ordering of many of the described
process actions may be changed without affecting the scope or operation of the invention.
The specification and drawings are, accordingly, to be regarded in an illustrative rather than
restrictive sense.
Claims (11)
1. A method of identifying a user of an augmented reality (AR) system, comprising: a camera of the AR system capturing an image; a processor of the AR system analyzing the image; the processor of the AR system identifying a shape based on analyzing the image; the processor of the AR system analyzing the shape; the processor of the AR system identifying a general object category based on analyzing the shape; the processor of the AR system identifying a narrow object category by comparing the shape with a characteristic based on the general object category; and the processor of the AR system generating a classification decision based on the narrow object category, wherein the characteristic is from a known potentially confusing mismatched individual.
2. The method of claim 1, further comprising the processor of the AR system identifying an error in a piece of data.
3. The method of claim 2, further comprising the processor of the AR system suppressing the piece of data in which the error is identified.
4. The method of any one of claims 1 to 3, wherein analyzing the image comprises scanning a plurality of pixels of the image.
5. The method of any one of claims 1 to 4, wherein the image corresponds to an eye of the user.
6. The method of any one of claims 1 to 5, further comprising the processor of the AR system generating a network of characteristics, wherein each respective characteristic of the network is associated with a potentially confusing mismatched individual in a database.
7. The method of claim 6, wherein the network of characteristics is generated when the system is first calibrated for the user.
8. The method of claim 1, wherein the characteristic is selected from the group consisting of eyebrow shape and eye shape.
9. The method of claim 1, further comprising an eye tracking camera of the AR system tracking the user's eye movements over time.
10. The method of claim 9, further comprising the processor of the AR system modifying the general object category based on the eye movements of the user.
11. The method of claim 9, further comprising the processor of the AR system modifying the general object category to conform to a variance resulting from the eye movements of the user.
Applications Claiming Priority (3)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US201562159593P | 2015-05-11 | 2015-05-11 | |
US62/159,593 | 2015-05-11 | ||
PCT/US2016/031499 WO2016183020A1 (en) | 2015-05-11 | 2016-05-09 | Devices, methods and systems for biometric user recognition utilizing neural networks |
Publications (2)
Publication Number | Publication Date |
---|---|
NZ736574A NZ736574A (en) | 2021-02-26 |
NZ736574B2 true NZ736574B2 (en) | 2021-05-27 |