US20200285875A1 - Process for updating templates used in facial recognition - Google Patents
Process for updating templates used in facial recognition
- Publication number
- US20200285875A1 (Application No. US16/708,770)
- Authority
- US
- United States
- Prior art keywords
- user
- memory
- template
- templates
- feature vector
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Abandoned
Classifications
- G06V40/166—Human faces: Detection; Localisation; Normalisation using acquisition arrangements
- G06V40/172—Human faces: Classification, e.g. identification
- G06F21/32—User authentication using biometric data, e.g. fingerprints, iris scans or voiceprints
- G06V10/145—Illumination specially adapted for pattern recognition, e.g. using gratings
- G06V40/171—Local features and components; Facial parts; Occluding parts, e.g. glasses; Geometrical relationships
- G06V40/50—Maintenance of biometric data or enrolment thereof
- Legacy G06K codes: G06K9/00926, G06K9/00255, G06K9/00281, G06K9/00288, G06K9/2036
Definitions
- Embodiments described herein relate to methods and systems for face detection and recognition in images captured by a camera on a device.
- Biometric authentication processes are being used more frequently to allow users to more readily access their devices without the need for passcode or password authentication.
- One example of a biometric authentication process is fingerprint authentication using a fingerprint sensor.
- Facial recognition is another biometric process that may be used for authentication of an authorized user of a device. Facial recognition processes are generally used to identify individuals in an image and/or compare individuals in images to a database of individuals to match the faces of individuals.
- For authentication using facial recognition, the facial recognition system generally needs to adapt to changes in the authorized user's facial features over time so that the user may continue to access the device using facial recognition even as those facial features change and create differences in images of the user. For example, the user's facial features may change over time due to facial hair changes, haircuts, gaining/losing weight, and/or aging.
- At the same time, the facial recognition system needs to remain secure. Balancing the need to adapt to such changes with the need to ensure that differences in images are recognized as changes in the user, and not as differences between the user and another person, is challenging: the system must adapt without permitting unwanted access to the device.
- Templates for facial recognition may be generated from enrollment images of the user obtained by a camera associated with a device. Images to be used for enrollment may be selected from the images captured during an enrollment process based on having certain acceptable criteria (e.g., pose is proper, not much occlusion of user, user in field of view, eyes are not closed, etc.). The selected enrollment images may be encoded to generate templates, where the templates include feature vectors to describe the facial features of the user.
- a captured image of the user may be encoded to generate feature vectors for an “unlock” image (e.g., an image captured to unlock the device).
- the feature vectors for the unlock image may be compared to the templates to determine if the unlock image matches the user's image represented by the templates. For example, a matching score may be assessed by comparing the feature vectors generated from the unlock image to the feature vectors in the templates. The matching score may be higher the less distance there is between the feature vectors for the unlock image and the feature vectors in the templates in the feature space (e.g., the matching score is higher when the feature vectors are more similar). If the matching score is above a threshold value, the device is unlocked.
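The distance-based comparison described above can be sketched as follows. This is a minimal illustration only: the threshold value, the function names, and the distance-to-score mapping are assumptions for clarity, not taken from the patent.

```python
import numpy as np

# Hedged sketch of the matching-score comparison: the score is higher when
# the unlock feature vector is closer to a template feature vector in the
# feature space. UNLOCK_THRESHOLD and the score mapping are assumptions.
UNLOCK_THRESHOLD = 0.8

def match_score(unlock_vec, template_vecs):
    """Higher score for smaller distance between the unlock feature vector
    and the closest template feature vector."""
    min_dist = min(np.linalg.norm(unlock_vec - t) for t in template_vecs)
    return 1.0 / (1.0 + min_dist)  # distance 0 -> score 1.0

def should_unlock(unlock_vec, template_vecs):
    """Unlock the device only if the matching score clears the threshold."""
    return match_score(unlock_vec, template_vecs) > UNLOCK_THRESHOLD
```

Any monotonically decreasing mapping from distance to score would serve the same role; the reciprocal form here is chosen only so a zero distance yields the maximum score of 1.0.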
- the feature vectors of the unlock image are added as a temporary template in an additional (e.g., backup) storage space of the device.
- the temporary template is only added if the matching score for the unlock image is above a second threshold value that is above the unlock threshold (e.g., adding the temporary template requires a closer match than unlocking the device).
- the temporary template may be compared to additional unlock images obtained during attempted unlocking of the device. If the temporary template continues to match the additional unlock attempt images for a certain number or percentage of unlock attempts, the temporary template may be added to the templates (e.g., the template space) created from the enrollment images based on the confidence in the temporary template.
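A minimal sketch of the temporary-template lifecycle just described, assuming an illustrative second threshold and promotion count (none of these constants, or the class and method names, come from the patent):

```python
# Sketch of the temporary-template flow: an unlock vector whose score clears
# a stricter second threshold is held in backup storage, then promoted to the
# template space after it keeps matching later unlock attempts.
TEMP_ADD_THRESHOLD = 0.9   # stricter than the unlock threshold (assumed)
PROMOTION_MATCHES = 5      # matches required before promotion (assumed)

class TemplateStore:
    def __init__(self, enrollment_templates):
        self.templates = list(enrollment_templates)  # template space
        self.temporary = None                        # backup storage
        self.temp_matches = 0

    def consider_temporary(self, unlock_vec, score):
        """Hold the unlock vector as a temporary template only if its
        matching score clears the stricter second threshold."""
        if self.temporary is None and score > TEMP_ADD_THRESHOLD:
            self.temporary = unlock_vec

    def record_attempt(self, matched_temporary):
        """Track how often the temporary template matches additional unlock
        attempts; promote it once confidence is high enough."""
        if self.temporary is None:
            return
        if matched_temporary:
            self.temp_matches += 1
            if self.temp_matches >= PROMOTION_MATCHES:
                self.templates.append(self.temporary)
                self.temporary = None
                self.temp_matches = 0
```

A fixed match count is used here for simplicity; the text also allows the confidence test to be a percentage of unlock attempts rather than an absolute count.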
- FIG. 1 depicts a representation of an embodiment of a device including a camera.
- FIG. 2 depicts a representation of an embodiment of a camera.
- FIG. 3 depicts a representation of an embodiment of a processor on a device.
- FIG. 4 depicts a flowchart of an embodiment of an image enrollment process for an authorized user of a device.
- FIG. 5 depicts a representation of an embodiment of a feature space with feature vectors after an enrollment process.
- FIG. 6 depicts a representation of an embodiment of a template space of a memory.
- FIG. 7 depicts a flowchart of an embodiment of a facial recognition authentication process.
- FIG. 8 depicts a flowchart of an embodiment of a template update process.
- FIG. 9 depicts a representation of an embodiment of a template space represented as a feature space.
- FIG. 10 depicts a flowchart of an embodiment of a template update sub-process.
- FIG. 11 depicts a flowchart of an additional embodiment of a template update process.
- FIG. 12 depicts a representation of an additional embodiment of a template space represented as a feature space.
- FIG. 13 depicts a block diagram of one embodiment of an exemplary computer system.
- FIG. 14 depicts a block diagram of one embodiment of a computer accessible storage medium.
- circuits, or other components may be described as “configured to” perform a task or tasks.
- “configured to” is a broad recitation of structure generally meaning “having circuitry that” performs the task or tasks during operation.
- the unit/circuit/component can be configured to perform the task even when the unit/circuit/component is not currently on.
- the circuitry that forms the structure corresponding to “configured to” may include hardware circuits and/or memory storing program instructions executable to implement the operation.
- the memory can include volatile memory such as static or dynamic random access memory and/or nonvolatile memory such as optical or magnetic disk storage, flash memory, programmable read-only memories, etc.
- the hardware circuits may include any combination of combinatorial logic circuitry, clocked storage devices such as flops, registers, latches, etc., finite state machines, memory such as static random access memory or embedded dynamic random access memory, custom designed circuitry, programmable logic arrays, etc.
- various units/circuits/components may be described as performing a task or tasks, for convenience in the description. Such descriptions should be interpreted as including the phrase “configured to.” Reciting a unit/circuit/component that is configured to perform one or more tasks is expressly intended not to invoke 35 U.S.C. § 112(f) interpretation for that unit/circuit/component.
- hardware circuits in accordance with this disclosure may be implemented by coding the description of the circuit in a hardware description language (HDL) such as Verilog or VHDL.
- the HDL description may be synthesized against a library of cells designed for a given integrated circuit fabrication technology, and may be modified for timing, power, and other reasons to result in a final design database that may be transmitted to a foundry to generate masks and ultimately produce the integrated circuit.
- Some hardware circuits or portions thereof may also be custom-designed in a schematic editor and captured into the integrated circuit design along with synthesized circuitry.
- the integrated circuits may include transistors and may further include other circuit elements (e.g. passive elements such as capacitors, resistors, inductors, etc.) and interconnect between the transistors and circuit elements.
- Some embodiments may implement multiple integrated circuits coupled together to implement the hardware circuits, and/or discrete elements may be used in some embodiments.
- the present disclosure further contemplates that the entities responsible for the collection, analysis, disclosure, transfer, storage, or other use of such personal information data will comply with well-established privacy policies and/or privacy practices.
- such entities should implement and consistently use privacy policies and practices that are generally recognized as meeting or exceeding industry or governmental requirements for maintaining personal information data private and secure.
- personal information from users should be collected for legitimate and reasonable uses of the entity and not shared or sold outside of those legitimate uses. Further, such collection should occur only after receiving the informed consent of the users.
- such entities would take any needed steps for safeguarding and securing access to such personal information data and ensuring that others with access to the personal information data adhere to their privacy policies and procedures. Further, such entities can subject themselves to evaluation by third parties to certify their adherence to widely accepted privacy policies and practices.
- the present disclosure also contemplates embodiments in which users selectively block the use of, or access to, personal information data. That is, the present disclosure contemplates that hardware and/or software elements can be provided to prevent or block access to such personal information data.
- the present technology can be configured to allow users to select to “opt in” or “opt out” of participation in the collection of personal information data during registration for services.
- FIG. 1 depicts a representation of an embodiment of a device including a camera.
- device 100 includes camera 102 , processor 104 , memory 106 , and display 108 .
- Device 100 may be a small computing device, which may be, in some cases, small enough to be handheld (and hence also commonly known as a handheld computer or simply a handheld).
- device 100 is any of various types of computer systems devices which are mobile or portable and which perform wireless communications using WLAN communication (e.g., a “mobile device”). Examples of mobile devices include mobile telephones or smart phones, and tablet computers.
- device 100 includes any device used by a user with processor 104 , memory 106 , and display 108 .
- Display 108 may be, for example, an LCD screen or touchscreen.
- display 108 includes a user input interface for device 100 (e.g., the display allows interactive input for the user).
- Camera 102 may be used to capture images of the external environment of device 100 .
- camera 102 is positioned to capture images in front of display 108 .
- Camera 102 may be positioned to capture images of the user (e.g., the user's face) while the user interacts with display 108 .
- FIG. 2 depicts a representation of an embodiment of camera 102 .
- camera 102 includes one or more lenses and one or more image sensors 103 for capturing digital images.
- Digital images captured by camera 102 may include, for example, still images, video images, and/or frame-by-frame images.
- camera 102 includes image sensor 103 .
- Image sensor 103 may be, for example, an array of sensors. Sensors in the sensor array may include, but not be limited to, charge coupled device (CCD) and/or complementary metal oxide semiconductor (CMOS) sensor elements to capture infrared (IR) images or other non-visible electromagnetic radiation.
- camera 102 includes more than one image sensor to capture multiple types of images.
- camera 102 may include both IR sensors and RGB (red, green, and blue) sensors.
- camera 102 includes illuminators 105 for illuminating surfaces (or subjects) with the different types of light detected by image sensor 103 .
- camera 102 may include an illuminator for visible light (e.g., a “flash” illuminator) and/or illuminators for infrared light (e.g., a flood IR source and a speckle pattern projector).
- in some embodiments, the flood IR source and speckle pattern projector may use other wavelengths of light (e.g., not infrared).
- illuminators 105 include an array of light sources such as, but not limited to, VCSELs (vertical-cavity surface-emitting lasers).
- image sensors 103 and illuminators 105 are included in a single chip package. In some embodiments, image sensors 103 and illuminators 105 are located on separate chip packages.
- image sensor 103 is an IR image sensor used to capture infrared images used for face detection and/or depth detection.
- illuminator 105 A may provide flood IR illumination to flood the subject with IR illumination (e.g., an IR flashlight) and image sensor 103 may capture images of the flood IR illuminated subject.
- Flood IR illumination images may be, for example, two-dimensional images of the subject illuminated by IR light.
- illuminator 105 B may provide IR illumination with a speckle pattern.
- the speckle pattern may be a pattern of light spots (e.g., a pattern of dots) with a known, and controllable, configuration and pattern projected onto a subject.
- Illuminator 105 B may include a VCSEL array configured to form the speckle pattern or a light source and patterned transparency configured to form the speckle pattern.
- the configuration and pattern of the speckle pattern provided by illuminator 105 B may be selected, for example, based on a desired speckle pattern density (e.g., dot density) at the subject.
- Image sensor 103 may capture images of the subject illuminated by the speckle pattern.
- the captured image of the speckle pattern on the subject may be assessed (e.g., analyzed and/or processed) by an imaging and processing system (e.g., an image signal processor (ISP) as described herein) to produce or estimate a three-dimensional map of the subject (e.g., a depth map or depth map image of the subject).
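The depth computation itself is performed in proprietary imaging hardware and ISP firmware; purely for intuition, the standard structured-light triangulation relation underlying this class of sensor (not stated in the patent, with all parameter values below being illustrative assumptions) looks like:

```python
# Classic triangulation used in structured-light depth sensing, shown only
# for intuition: the shift (disparity) of a projected speckle dot between
# the observed image and a reference pattern encodes its depth.
def depth_from_disparity(baseline_m, focal_length_px, disparity_px):
    """Depth of a speckle dot: depth = baseline * focal_length / disparity."""
    if disparity_px <= 0:
        raise ValueError("disparity must be positive")
    return baseline_m * focal_length_px / disparity_px
```

Applying this relation per detected dot yields a sparse depth map, which an ISP can then densify into the depth map image described above.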
- Examples of depth map imaging are described in U.S. Pat. No. 8,150,142 to Freedman et al., U.S. Pat. No. 8,749,796 to Pesach et al., and U.S. Pat. No. 8,384,997 to Shpunt et al., which are incorporated by reference as if fully set forth herein, and in U.S. Patent Application Publication No. 2016/0178915 to Mor et al., which is incorporated by reference as if fully set forth herein.
- images captured by camera 102 include images with the user's face (e.g., the user's face is included in the images).
- An image with the user's face may include any digital image with the user's face shown within the frame of the image. Such an image may include just the user's face or may include the user's face in a smaller part or portion of the image.
- the user's face may be captured with sufficient resolution in the image to allow image processing of one or more features of the user's face in the image.
- FIG. 3 depicts a representation of an embodiment of processor 104 included in device 100 .
- Processor 104 may include circuitry configured to execute instructions defined in an instruction set architecture implemented by the processor.
- Processor 104 may execute the main control software of device 100 , such as an operating system.
- software executed by processor 104 during use may control the other components of device 100 to realize the desired functionality of the device.
- the processors may also execute other software. These applications may provide user functionality, and may rely on the operating system for lower-level device control, scheduling, memory management, etc.
- processor 104 includes image signal processor (ISP) 110 .
- ISP 110 may include circuitry suitable for processing images (e.g., image signal processing circuitry) received from camera 102 .
- ISP 110 may include any hardware and/or software (e.g., program instructions) capable of processing or analyzing images captured by camera 102 .
- processor 104 includes secure enclave processor (SEP) 112 .
- SEP 112 is involved in a facial recognition authentication process involving images captured by camera 102 and processed by ISP 110 .
- SEP 112 may be a secure circuit configured to authenticate an active user (e.g., the user that is currently using device 100 ) as authorized to use device 100 .
- a “secure circuit” may be a circuit that protects an isolated, internal resource from being directly accessed by an external circuit.
- the internal resource may be memory (e.g., memory 106 ) that stores sensitive data such as personal information (e.g., biometric information, credit card information, etc.), encryption keys, random number generator seeds, etc.
- the internal resource may also be circuitry that performs services/operations associated with sensitive data.
- SEP 112 may include any hardware and/or software (e.g., program instructions) capable of authenticating a user using the facial recognition authentication process.
- the facial recognition authentication process may authenticate a user by capturing images of the user with camera 102 and comparing the captured images to previously collected images of an authorized user for device 100 .
- the functions of ISP 110 and SEP 112 may be performed by a single processor (e.g., either ISP 110 or SEP 112 may perform both functionalities and the other processor may be omitted).
- processor 104 performs an enrollment process (e.g., image enrollment process 200 , as shown in FIG. 4 , or a registration process) to capture images (e.g., the previously collected images) for an authorized user of device 100 .
- camera module 102 may capture (e.g., collect) images and/or image data from an authorized user in order to permit SEP 112 (or another security process) to subsequently authenticate the user using the facial recognition authentication process.
- the images and/or image data (e.g., feature data from the images) from the enrollment process are used to generate a template in device 100 .
- the template may be stored, for example, in a template space in memory 106 of device 100 .
- the template space may be updated by the addition and/or subtraction of images from the template.
- a template update process (e.g., first template update process 300 and/or second template update process 400 described herein) may add templates to and/or subtract templates from the template space.
- the template space may be updated with additional images to adapt to changes in the authorized user's appearance and/or changes in hardware performance over time. Images may be subtracted from the template space to compensate for the addition of images when the template space for storing template images is full.
- camera module 102 captures multiple pairs of images for a facial recognition session. Each pair may include an image captured using a two-dimensional capture mode (e.g., a flood IR image) and an image captured using a three-dimensional capture mode (e.g., a depth map image).
- ISP 110 and/or SEP 112 process the flood IR images and depth map images independently of each other before a final authentication decision is made for the user. For example, ISP 110 may process the images independently to determine characteristics of each image separately.
- SEP 112 may then compare the separate image characteristics with stored template images for each type of image to generate an authentication score (e.g., a matching score or other ranking of matching between the user in the captured image and in the stored template images) for each separate image.
- the authentication scores for the separate images (e.g., the flood IR and depth map images) may be combined to make a final authentication decision for the user.
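Combining the scores for the separate flood IR and depth map images might be sketched as below; the weighted-average rule, the weights, and the final threshold are illustrative assumptions, as the patent does not specify the fusion function.

```python
# Hedged sketch of fusing the per-modality matching scores into a single
# authentication decision. Weights and threshold are assumed values.
FINAL_THRESHOLD = 0.8

def combine_scores(flood_ir_score, depth_map_score,
                   w_flood=0.5, w_depth=0.5):
    """Weighted average of the independently computed matching scores."""
    return w_flood * flood_ir_score + w_depth * depth_map_score

def authenticate(flood_ir_score, depth_map_score):
    """Final decision: combined score must clear the final threshold."""
    return combine_scores(flood_ir_score, depth_map_score) >= FINAL_THRESHOLD
```

Other fusion rules (e.g., requiring each modality to clear its own threshold) would fit the same description of independent processing followed by a combined decision.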
- ISP 110 and/or SEP 112 combine the images in each pair to provide a composite image that is used for facial recognition.
- ISP 110 processes the composite image to determine characteristics of the image, which SEP 112 may compare with the stored template images to make a decision on the identity of the user and, if authenticated, allow the user to use device 100 .
- the combination of flood IR image data and depth map image data may allow for SEP 112 to compare faces in a three-dimensional space.
- camera module 102 communicates image data to SEP 112 via a secure channel.
- the secure channel may be, for example, either a dedicated path for communicating data (i.e., a path shared by only the intended participants) or a dedicated path for communicating encrypted data using cryptographic keys known only to the intended participants.
- camera module 102 and/or ISP 110 may perform various processing operations on image data before supplying the image data to SEP 112 in order to facilitate the comparison performed by the SEP.
- processor 104 operates one or more machine learning models.
- Machine learning models may be operated using any combination of hardware and/or software (e.g., program instructions) located in processor 104 and/or on device 100 .
- one or more neural network modules 114 are used to operate the machine learning models on device 100 . Neural network modules 114 may be located in ISP 110 and/or SEP 112 .
- Neural network module 114 may include any combination of hardware and/or software (e.g., program instructions) located in processor 104 and/or on device 100 .
- neural network module 114 is a multi-scale neural network or another neural network where the scale of kernels used in the network can vary.
- neural network module 114 is a recurrent neural network (RNN) such as, but not limited to, a gated recurrent unit (GRU) recurrent neural network or a long short-term memory (LSTM) recurrent neural network.
- Neural network module 114 may include neural network circuitry installed or configured with operating parameters that have been learned by the neural network module or a similar neural network module (e.g., a neural network module operating on a different processor or device). For example, a neural network module may be trained using training images (e.g., reference images) and/or other training data to generate operating parameters for the neural network circuitry. The operating parameters generated from the training may then be provided to neural network module 114 installed on device 100 . Providing the operating parameters generated from training to neural network module 114 on device 100 allows the neural network module to operate using training information programmed into the neural network module (e.g., the training-generated operating parameters may be used by the neural network module to operate on and assess images captured by the device).
- FIG. 4 depicts a flowchart of an embodiment of image enrollment process 200 for an authorized user of device 100 .
- Process 200 may be used to create one or more templates of images (e.g., an enrollment profile) for an authorized user of device 100 that are stored in the device (e.g., in a memory coupled to SEP 112 ) and then used in a facial recognition process to allow the user to use the device (e.g., unlock the device).
- the enrollment profile (e.g., the template of images) created by image enrollment process 200 may be associated with that particular image enrollment process (and the images used to enroll during the process).
- an authorized user may create a first enrollment profile associated with the user that includes the user's face with glasses.
- the authorized user may also create a second enrollment profile associated with the user that includes the user's face without glasses.
- Each of the first and second enrollment profiles may then be used in the facial recognition process to allow the user to use the device (e.g., unlock the device).
- process 200 is used when device 100 is used for the first time by the authorized user and/or when the user opts to enroll in a facial recognition process.
- process 200 may be initiated when device 100 is first obtained by the authorized user (e.g., purchased by the authorized user) and turned on for the first time by the authorized user.
- process 200 may be initiated by the authorized user when the user desires to enroll in a facial recognition process, update security settings for device 100 , and/or re-enroll.
- process 200 begins with authenticating the user in 202 .
- the user may be authenticated on device 100 using a non-facial authentication process.
- the user may be authenticated as an authorized user by entering a passcode, entering a password, or using another user authentication protocol other than facial recognition.
- one or more enrollment (e.g., reference or registration) images of the user are captured in 204 .
- the enrollment images may include images of the user illuminated by flood illuminator 105 A (e.g., flood IR images) and/or images of the user illuminated by speckle illuminator 105 B (e.g., depth map images).
- flood IR images and depth map images may be used independently and/or in combination in facial recognition processes on device 100 (e.g. the images may independently be used to provide an authentication decision and the decisions may be combined to determine a final decision on user authentication).
- the enrollment images may be captured using camera 102 as the user interacts with device 100 .
- the enrollment images may be captured as the user follows prompts on display 108 of device 100 .
- the prompts may include instructions for the user to make different motions and/or poses while the enrollment images are being captured.
- camera 102 may capture multiple images for each motion and/or pose performed by the user. Capturing images for different motions and/or different poses of the user where the images still have a relatively clear depiction of the user may be useful in providing a better variety of enrollment images that enable the user to be authenticated without having to be in a limited or restricted position relative to camera 102 on device 100 .
- selection of enrollment images for further image processing may be made in 206 .
- Selection of enrollment images 206 , and further processing of the images, may be performed by ISP 110 and/or SEP 112 .
- Selection of enrollment images for further processing may include selecting images that are suitable for use as template images.
- the selection of images that are suitable for use as template images in 206 may include assessing one or more selected criteria for the images and selecting images that meet the selected criteria. The selected images may be used as template images for the user.
- Selected criteria may include, but not be limited to: the face of the user being in the field of view of the camera; a pose of the user being proper (e.g., the user's face is not turned too far in any direction from the camera (i.e., the pitch, yaw, and/or roll of the face are not above certain levels)); the face of the user being within a certain distance of the camera; the face of the user having occlusion below a minimum value (e.g., the user's face is not occluded (blocked) more than a minimum amount by another object); the user paying attention to the camera (e.g., eyes of the user looking at the camera); eyes of the user not being closed; and proper lighting (illumination) in the image.
- the enrollment image is rejected and not used (e.g., not selected) for further processing.
- Selection of images suitable for further processing may be rule-based, with images selected when they meet a certain number of the selected criteria or all of the selected criteria.
- occlusion maps and/or landmark feature maps are used in identifying features of the user (e.g., facial features such as eyes, nose, and mouth) in the images and assessing the selected criteria in the images.
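The rule-based selection over the criteria listed above might look like the following; the field names and limit values (pose angle, occlusion fraction) are hypothetical illustrations, not values given in the patent, and this version requires all criteria to hold.

```python
from dataclasses import dataclass

# Hedged sketch of rule-based enrollment image selection: each candidate
# image's assessed attributes are checked against the selected criteria.
# All field names and limit values are illustrative assumptions.

@dataclass
class ImageMetadata:
    face_in_view: bool       # face is in the camera's field of view
    max_pose_angle: float    # largest of |pitch|, |yaw|, |roll|, in degrees
    occlusion: float         # fraction of the face occluded, 0.0-1.0
    eyes_open: bool          # eyes of the user are not closed
    attention: bool          # user is looking at the camera

MAX_POSE_ANGLE = 30.0  # assumed pose limit
MAX_OCCLUSION = 0.1    # assumed occlusion limit

def is_suitable_enrollment_image(meta: ImageMetadata) -> bool:
    """Select the image for use as a template image only if it meets
    all of the selected criteria."""
    return (meta.face_in_view
            and meta.max_pose_angle <= MAX_POSE_ANGLE
            and meta.occlusion <= MAX_OCCLUSION
            and meta.eyes_open
            and meta.attention)
```

Per the text, the conjunction could be relaxed so that meeting a certain number of criteria, rather than all of them, suffices.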
- features of the user in the selected (template) images may be encoded in 208 .
- Encoding of the selected images may include encoding features (e.g., facial features) of the user to define the features in the images as one or more feature vectors in a feature space.
- Feature vectors 210 may be the output of the encoding in 208 .
- a feature space may be an n-dimensional feature space.
- a feature vector may be an n-dimensional vector of numerical values that define features from the image in the feature space (e.g., the feature vector may be a vector of numerical values that define facial features of the user in the image).
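The real encoder is a trained neural network; as a shape-only sketch (the dimensionality and the stub mapping are assumptions), the output is an n-dimensional vector of numerical values:

```python
import numpy as np

# Stub standing in for the neural-network encoder described above. It only
# demonstrates the output contract -- an n-dimensional feature vector of
# numerical values -- not any real face encoding. FEATURE_DIM is assumed.
FEATURE_DIM = 128

def encode_features(image):
    """Map an image to a deterministic unit-length feature vector."""
    seed = int(np.asarray(image).sum()) % (2**32)   # deterministic stub seed
    rng = np.random.default_rng(seed)
    vec = rng.standard_normal(FEATURE_DIM)
    return vec / np.linalg.norm(vec)                # point in the feature space
```

Vectors produced by a real encoder for images of the same person would cluster in this feature space, which is what the template-matching distance comparisons rely on.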
- FIG. 5 depicts a representation of an embodiment of feature space 212 with feature vectors 210 .
- Each feature vector 210 may define facial features for the user from either a single image, from a composite image (e.g., an image that is a composite of several images), or from multiple images.
- the feature vectors may be similar to one another because the feature vectors are associated with the same person and may have some “clustering”, as shown by circle 211 in FIG. 5 .
- Feature vectors 256 A and 256 B are feature vectors obtained from facial recognition process 250 , described below.
- process 200 may include, in 214 , storing feature vectors 210 in a memory of device 100 (e.g., a memory protected by SEP 112 ).
- feature vectors 210 are stored as static templates 216 (e.g., enrollment templates or reference templates) in a template space of the memory.
- static templates 216 include separate templates for feature vectors from the enrollment flood IR images and for feature vectors from the enrollment depth map images. It is to be understood that the separate templates for flood IR images and depth map images may be used independently and/or in combination during additional processes described herein.
- static templates 216 are described generically and it should be understood that static templates 216 (and the use of the templates) may refer to either templates for flood IR images or templates for depth map images. In some embodiments, a combination of the flood IR images and depth map images may be used. For example, pairs of flood IR images and depth map images may be stored in static templates 216 to be used in one or more facial recognition processes on device 100 .
- FIG. 6 depicts a representation of an embodiment of template space 220 of the memory.
- template space 220 includes static portion 222 and dynamic portion 224 .
- Static templates 216 may be, for example, added to static portion 222 of template space 220 (e.g., the templates are permanently added to the memory and are not deleted or changed unless the device is reset and another enrollment process takes place).
- static portion 222 includes a certain number of static templates 216 . For example, for the embodiment of template space 220 depicted in FIG. 6 , six templates may be allowed in static portion 222 .
- additional dynamic templates 226 may be added to dynamic portion 224 of template space 220 (e.g., a portion from which templates may be added and deleted without a device reset being needed).
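- The split between the static and dynamic portions described above can be sketched as follows (the class name, the dynamic-portion cap, and the template representation are assumptions for illustration, not the patent's implementation):

```python
class TemplateSpace:
    """Sketch of template space 220: a fixed static portion (enrollment
    templates) plus a bounded dynamic portion."""

    def __init__(self, static_templates, max_dynamic=4):
        # Static portion: fixed after enrollment; changing it requires a
        # device reset and a new enrollment process.
        self.static = tuple(static_templates)
        self.max_dynamic = max_dynamic
        # Dynamic portion: templates may be added/deleted without a reset.
        self.dynamic = []

    def add_dynamic(self, template):
        if len(self.dynamic) >= self.max_dynamic:
            return False  # full; a caller would replace an outlier instead
        self.dynamic.append(template)
        return True

    def all_templates(self):
        # Both portions are used together during facial recognition.
        return list(self.static) + self.dynamic
```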
- Static templates 216 may thus be enrollment templates (or reference templates) generated by enrollment process 200 .
- a selected number of static templates 216 are stored in static portion 222 of template space 220 .
- the number of static templates 216 stored in static portion 222 after enrollment process 200 may vary depending on, for example, the number of different feature vectors obtained during the enrollment process, which may be based on the number of images selected to be suitable for use as template images, or a desired number of templates for the device.
- static templates 216 include feature vectors 210 (e.g., the enrollment or reference feature vectors) that can be used for facial recognition of the authorized user. Template space 220 may then be used in a facial recognition authentication process.
- FIG. 7 depicts a flowchart of an embodiment of facial recognition authentication process 250 .
- Process 250 may be used to authenticate a user as an authorized user of device 100 using facial recognition of the user. Authentication of the authorized user may allow the user to access and use device 100 (e.g., unlock the device) and/or have access to a selected functionality of the device (e.g., unlocking a function of an application running on the device, payment systems (i.e., making a payment), access to personal data, expanded view of notifications, etc.).
- process 250 is used as a primary biometric authentication process for device 100 (after enrollment of the authorized user).
- process 250 is used as an authentication process in addition to another authentication process (e.g., fingerprint authentication, another biometric authentication, passcode entry, password entry, and/or pattern entry).
- camera 102 captures an image of the face of the user attempting to be authenticated for access to device 100 (e.g., the camera captures an “unlock attempt” image of the user).
- the unlock attempt image may be a single image of the face of the user (e.g., a single flood IR image or single depth map image) or the unlock attempt image may be a series of several images of the face of the user taken over a short period of time (e.g., one second or less).
- the series of several images of the face of the user includes pairs of flood IR images and depth map images (e.g., pairs of consecutive flood IR and depth map images).
- the unlock attempt image may be a composite of several images of the user illuminated by the flood illuminator and the speckle pattern illuminator.
- Camera 102 may capture the unlock attempt image in response to a prompt by the user.
- the unlock attempt image may be captured when the user attempts to access device 100 by pressing a button (e.g., a home button or virtual button) on device 100 , by moving the device into a selected position relative to the user's face (e.g., the user moves the device such that the camera is pointed directly at the user's face), and/or by making a specific gesture or movement with respect to the device.
- unlock attempt images may include either flood IR images or depth map images, or a combination thereof.
- the unlock attempt images may be processed in association with their corresponding template (e.g., flood IR images with a template for flood IR enrollment images) independently or in combination as needed.
- the unlock attempt image is encoded to define the facial features of the user as one or more feature vectors in the feature space.
- one feature vector is defined for the unlock attempt image.
- more than one feature vector is defined for the unlock attempt image.
- Unlock feature vector(s) 256 may be the output of the encoding of the unlock attempt image in 254 .
- unlock feature vector(s) 256 are compared to feature vectors in the templates of template space 220 to get matching score 260 for the unlock attempt image.
- Matching score 260 may be a score of the differences between feature vector(s) 256 and feature vectors in template space 220 (e.g., feature vectors in static templates 216 and/or other dynamic templates 226 added to the template space as described herein). The closer (e.g., the less distance or less differences) that feature vector(s) 256 and the feature vectors in template space 220 are, the higher matching score 260 may be. For example, as shown in FIG.
- feature vector 256 A (open diamond) is closer to feature vectors 210 than feature vector 256 B (open diamond) (e.g., feature vector 256 B is more of an outlier than feature vector 256 A).
- feature vector 256 A would thus have a higher matching score than feature vector 256 B.
- the lower matching score for feature vector 256 B means less confidence that the face in the unlock attempt image associated with feature vector 256 B is the face of the authorized user from enrollment process 200 .
- comparing feature vector(s) 256 and templates from template space 220 to get matching score 260 includes using one or more classifiers or a classification-enabled network to classify and evaluate the differences between feature vector(s) 256 and templates from template space 220 .
- classifiers include, but are not limited to, linear, piecewise linear, nonlinear classifiers, support vector machines, and neural network classifiers.
- matching score 260 is assessed using distance scores between feature vector(s) 256 and templates from template space 220 .
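- One simple way to derive a matching score from distance scores (an illustrative sketch only; the `1 / (1 + distance)` mapping is an assumption, not the patent's classifier-based evaluation) is to take the similarity of the unlock vector to its nearest template:

```python
import math

def euclidean(v1, v2):
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(v1, v2)))

def matching_score(unlock_vector, templates):
    # Score rises as the unlock vector gets closer to its nearest template,
    # mirroring "the less distance, the higher the matching score."
    best_distance = min(euclidean(unlock_vector, t) for t in templates)
    return 1.0 / (1.0 + best_distance)

templates = [[0.10, 0.80], [0.12, 0.79]]
close_score = matching_score([0.11, 0.80], templates)  # near the cluster
far_score = matching_score([0.90, 0.10], templates)    # an outlier
```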
- unlock threshold 264 may represent the maximum difference (e.g., distance in the feature space) in features (as defined by feature vectors) between the face of the authorized user and the face of the user in the unlock attempt image that device 100 allows in order to unlock the device (or unlock a feature on the device).
- unlock threshold 264 may be a threshold value that determines whether the unlock feature vectors (e.g., feature vectors 256 ) are similar enough (e.g., close enough) to the templates associated with the authorized user's face (e.g., static templates 216 in template space 220 ).
- unlock threshold 264 may be represented by circle 265 in feature space 212 , depicted in FIG. 5 .
- feature vector 256 A is inside circle 265 and thus feature vector 256 A would have matching score 260 above unlock threshold 264 .
- Feature vector 256 B is outside circle 265 and thus feature vector 256 B would have matching score 260 below unlock threshold 264 .
- unlock threshold 264 is set during manufacturing and/or by the firmware of device 100 .
- unlock threshold 264 is updated (e.g., adjusted) by device 100 during operation of the device as described herein.
- If matching score 260 is above unlock threshold 264 (i.e., the user's face in the unlock attempt image substantially matches the face of the authorized user), then the user in the unlock attempt image is authenticated as the authorized user of device 100 and the device is unlocked in 266 .
- unlock feature vectors 256 and matching score 260 are provided to first template update process 300 , shown in FIG. 8 , which may add or replace templates in template space 220 .
- If matching score 260 is below unlock threshold 264 (e.g., not equal to or above the unlock threshold), then device 100 is not unlocked in 268 .
- device 100 may be either locked or unlocked if matching score 260 is equal to unlock threshold 264 depending on a desired setting for the unlock threshold (e.g., tighter or looser restrictions). Additionally, either option for an equal matching score comparison may be also applied as desired for other embodiments described herein.
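- The unlock comparison, including the configurable handling of an exactly-equal matching score, can be sketched as (the default for the equal case is an assumption):

```python
def unlock_decision(matching_score, unlock_threshold, unlock_on_equal=True):
    # Whether a score exactly equal to the threshold unlocks the device is
    # a device setting (tighter vs. looser restrictions).
    if matching_score > unlock_threshold:
        return True
    if matching_score == unlock_threshold:
        return unlock_on_equal
    return False
```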
- a number of unlock attempts is counted (e.g., the number of attempts to unlock device 100 with a different unlock attempt image captured in 252 ). If the number of unlock attempts in 270 is below a selected value (e.g., a threshold), then process 250 may be run again with another unlock attempt image (e.g., a new image of the user is captured (e.g., a flood IR image or a depth map image)).
- device 100 automatically captures the new image of the user's face without prompting the user.
- the user attempting to unlock device 100 may have additional image(s) of his/her face captured by camera 102 .
- device 100 is locked from further attempts to use facial authentication in 272 .
- an error message may be displayed (e.g., on display 108 ) indicating that facial recognition authentication process 250 has failed and/or the desired operation of device 100 is restricted or prevented from being performed.
- Device 100 may be locked from further attempts to use facial authentication in 272 for a specified period of time and/or until another authentication protocol is used to unlock the device.
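- The attempt counting and lockout flow above can be sketched as follows (the attempt limit is an assumption, and the time-based lockout variant is omitted for brevity):

```python
class FacialAuthAttempts:
    """Sketch: after max_attempts failed facial recognition attempts,
    facial authentication is locked until another authentication
    protocol (e.g., passcode) succeeds."""

    def __init__(self, max_attempts=5):
        self.max_attempts = max_attempts
        self.failed = 0
        self.locked = False

    def record_failure(self):
        self.failed += 1
        if self.failed >= self.max_attempts:
            self.locked = True  # no further facial auth attempts allowed

    def unlock_with_passcode(self):
        # A successful alternate authentication resets the counter.
        self.failed = 0
        self.locked = False
```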
- passcode unlock 274 may be used to unlock device 100 .
- Passcode unlock 274 may include using a passcode, a password, pattern entry, a different form of biometric authentication, or another authentication protocol to unlock device 100 .
- passcode unlock 274 includes providing a “use passcode/password/pattern” affordance that, when selected, causes display of a passcode/password/pattern entry user interface; or a passcode/password/pattern entry user interface; or a “use fingerprint” prompt that, when displayed, prompts the user to place a finger on a fingerprint sensor for the device.
- unlock feature vectors 256 and matching score 260 are provided to second template update process 400 , shown in FIG. 11 .
- FIG. 8 depicts a flowchart of an embodiment of first template update process 300 .
- Process 300 may be used to update template space 220 (shown in FIG. 6 ) with one or more additional dynamic templates 226 based on feature vector(s) 256 from process 250 .
- Process 300 may be used to update template space 220 for gradual changes in the appearance of the authorized user. For example, process 300 may update template space 220 for gradual changes in hair (e.g., hair color, hair length, and/or hair style), weight gain, weight loss, changes in glasses worn, or small disfigurement changes (e.g., black eyes, scars, etc.).
- Updating template space 220 for gradual changes allows the authorized user to continue to access device 100 using facial recognition authentication process 250 .
- Process 300 may begin by assessing 302 if matching score 260 is above threshold 304 .
- Threshold 304 may be a threshold score for determining if feature vector(s) 256 are similar (e.g., close) enough to feature vectors 210 (from static templates 216 ) that feature vector(s) 256 may potentially be used as another template (e.g., the threshold score may determine if feature vectors 256 are within a certain distance of feature vectors 210 ).
- threshold 304 is greater than unlock threshold 264 (e.g., threshold 304 requires a higher matching score than unlock threshold 264 ).
- the threshold for feature vector(s) 256 becoming a template may be stricter than the threshold for unlocking the device.
- Threshold 304 may be set during manufacturing and/or by the firmware of device 100 .
- Threshold 304 may be updated (e.g., adjusted) by device 100 during operation of the device as described herein.
- In certain embodiments, if matching score 260 is below threshold 304 , then process 300 is stopped and feature vector(s) 256 are deleted from device 100 . In some embodiments, if matching score 260 is below threshold 304 , then process 300 continues with template update sub-process 300 A, shown in FIG. 10 . If matching score 260 is above threshold 304 , then process 300 is continued. In some embodiments, after assessing 302 , one or more qualities in the unlock attempt image are assessed in 306 . For example, pose (e.g., pitch, yaw, and roll of the face), occlusion, attention, field of view, and/or distance in the unlock attempt image may be assessed in 306 .
- Pose and/or occlusion in the unlock attempt image may be assessed using the landmark and/or occlusion maps described herein.
- In 308 , if suitable qualifications are not met, then process 300 may be stopped.
- meeting suitable qualifications includes meeting selected criteria in the images for one or more of the assessed qualities described above.
- selected criteria may include, but not be limited to: the face of the user being in the field of view of the camera; a pose of the user being proper (e.g., the user's face is not turned too far in any direction from the camera (i.e., the pitch, yaw, and/or roll of the face are not above certain levels)); a distance to the face of the user being within a certain range; the face of the user having occlusion below a minimum value (e.g., the user's face is not occluded (blocked) more than a minimum amount by another object); the user paying attention to the camera (e.g., eyes of the user looking at the camera); eyes of the user not being closed; and proper lighting (illumination) in the image.
- assessing qualities in 306 and 308 may occur in a different location within process 300 .
- assessing qualities in 306 and 308 may occur after comparing matching score 324 to threshold 326 or after comparing confidence score 332 to confidence score 334 in 336 , described below.
- process 300 continues, in 310 , with storing feature vector(s) 256 in a backup space in the memory of device 100 .
- the backup space in the memory may be, for example, a second space or temporary space in the memory that includes readable/writable memory and/or short term memory.
- Feature vector(s) 256 may be stored in the memory as temporary template 312 .
- process 300 continues by comparing the temporary template to feature vectors for additional unlock attempt images captured by device 100 for the authorized user.
- additional unlock attempt images are captured of the user (or users, if unauthorized access is attempted) during additional (future) unlock attempts of device 100 .
- the features of the face of the user in the additional unlock attempt images are encoded in 316 to generate feature vectors 318 .
- feature vectors 318 are compared to temporary template 312 to get matching score 322 .
- Matching score 322 may then be compared in 324 to threshold 326 .
- threshold 326 is unlock threshold 264 .
- threshold 326 is threshold 304 . If matching score 322 is above threshold 326 in 324 , then a successful attempt is counted in 328 . If matching score 322 is below threshold 326 in 324 , then an unsuccessful attempt is counted in 330 . Counts 328 and 330 may be continued until a desired number of unlock attempts are made (e.g., a desired number of comparisons of matching score 322 and threshold 326 ).
- the number of successful attempts in 328 out of the total number of unlock attempts may be used to assess confidence score 332 for temporary template 312 .
- confidence score 332 may be used to assess whether or not template 312 is added as dynamic template 226 to template space 220 , shown in FIG. 6 .
- In addition to the enrollment templates (e.g., static templates 216 , shown in FIG. 6 ) generated during enrollment process 200 , process 300 may be used to add additional templates to template space 220 .
- Additional templates may be added to dynamic portion 224 as dynamic templates 226 (e.g., a portion from which templates may be added and deleted without a device reset being needed).
- Dynamic templates 226 may be used in combination with static templates 216 in template space 220 for facial recognition authentication process 250 , as shown FIG. 7 .
- temporary templates 312 generated by process 300 are added to dynamic portion 224 as dynamic templates 226 , shown in FIG. 6 , when confidence score 332 for temporary template 312 is higher than a lowest confidence score of static templates 216 in static portion 222 .
- Confidence score 334 may be equal to a lowest confidence score for static templates 216 in static portion 222 assessed during the same unlock attempts used to assess confidence score 332 for temporary template 312 (e.g., the confidence score for the template with the lowest number of successful unlock attempts during the same unlock attempts using temporary template 312 ).
- Confidence score 334 may be assessed using the same threshold used for confidence score 332 (e.g., threshold 326 ).
- temporary template 312 is added, in 338 , as dynamic template 226 in dynamic portion 224 . For example, if temporary template 312 has 45 successful unlock attempts out of 50 total unlock attempts while one static template 216 only has 40 successful unlock attempts out of the same 50 total unlock attempts, then temporary template 312 may be added to dynamic portion 224 as one of dynamic templates 226 . If, in 336 , confidence score 332 is less than confidence score 334 , then temporary template 312 is ignored or deleted in 340 . Temporary templates 312 may be added until a maximum number of allowed dynamic templates 226 are stored in dynamic portion 224 .
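- The confidence comparison above, using the 45-of-50 versus 40-of-50 example from the text, can be sketched as:

```python
def confidence_score(successes, total_attempts):
    # Fraction of unlock attempts in which the template matched successfully.
    return successes / total_attempts

def should_promote(temp_successes, lowest_static_successes, total_attempts):
    # Promote the temporary template only if it outperformed the weakest
    # static template over the same set of unlock attempts.
    return (confidence_score(temp_successes, total_attempts) >
            confidence_score(lowest_static_successes, total_attempts))

# 45/50 for the temporary template vs. 40/50 for the weakest static template.
promote = should_promote(45, 40, 50)  # → True
```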
- FIG. 9 depicts a representation of an embodiment of template space 220 represented as a feature space.
- static templates 216 are represented by circles
- dynamic templates 226 are represented by diamonds
- temporary template 312 is represented by a star.
- static templates 216 are not allowed to be replaced by temporary template 312 .
- temporary template 312 may replace one of dynamic templates 226 if temporary template 312 is less of an outlier than one of dynamic templates 226 .
- Statistical analysis of the feature vectors in the feature space correlating to template space 220 may generate a circle (e.g., circle 342 ) that most closely defines a maximum number of the feature vectors.
- circle 342 defines the feature vector for dynamic template 226 ′ as an outlier of the circle.
- the feature vector for dynamic template 226 ′ is more of an outlier than the feature vector for temporary template 312 .
- temporary template 312 may replace dynamic template 226 ′ in template space 220 . If temporary template 312 had been more of an outlier than each of dynamic templates 226 , then the temporary template may not have replaced any one of dynamic templates 226 .
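- The outlier-based replacement can be sketched as follows; using distance-to-the-mean as the outlier measure is an assumption standing in for the patent's statistical analysis (which fits a bounding circle around the feature vectors):

```python
import math

def euclidean(a, b):
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def mean_vector(vectors):
    n = len(vectors)
    return [sum(v[i] for v in vectors) / n for i in range(len(vectors[0]))]

def maybe_replace_outlier(dynamic_templates, temporary):
    # Replace the worst (most outlying) dynamic template only if the
    # temporary template is less of an outlier than it is.
    center = mean_vector(dynamic_templates + [temporary])
    worst = max(range(len(dynamic_templates)),
                key=lambda i: euclidean(dynamic_templates[i], center))
    if euclidean(temporary, center) < euclidean(dynamic_templates[worst], center):
        dynamic_templates[worst] = temporary
        return True
    return False  # temporary is more of an outlier than every dynamic template
```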
- one or more thresholds for device 100 may be recalculated. As temporary template 312 is less of an outlier than dynamic template 226 ′, recalculation of the threshold(s) may further restrict the thresholds (e.g., raise the threshold for matching scores to require closer matching).
- For example, the unlock threshold (e.g., unlock threshold 264 , shown in FIG. 7 ) and/or a template update threshold (e.g., threshold 304 , shown in FIG. 8 ) may be recalculated.
- FIG. 10 depicts a flowchart of an embodiment of template update sub-process 300 A.
- sub-process 300 A may proceed if matching score 260 is below threshold 304 but above unlock threshold 264 . Images with matching scores 260 in such a range (above unlock threshold 264 and below threshold 304 ) may have more uncertainty in matching than images that are above threshold 304 (while still being able to unlock device 100 ). Thus, these more uncertain images may be processed using sub-process 300 A.
- one or more qualities in the unlock attempt image are assessed in 350 .
- Assessing qualities of the unlock attempt image in 350 may be substantially similar to assessing qualities in 306 and 308 , as shown in FIG. 8 .
- a determination may be made in 352 if there is space (e.g., room) in the backup space used for temporary templates 312 to store another temporary template (e.g., a determination if a maximum number of temporary templates 312 are stored in the backup space).
- If there is not space in the backup space, the unlock attempt image (and its corresponding feature vectors) may be subject to delete policy 354 , as shown in FIG. 10 .
- Under delete policy 354 , the feature vector(s) in the backup space (e.g., space for temporary templates 312 ) that have selected redundancy (e.g., are most redundant) to the existing features may be replaced in the backup space.
- If there is space, the feature vectors for the unlock attempt image are added to the backup space as a temporary template (e.g., temporary template 312 ) in 356 .
- the temporary template may be processed substantially as temporary template 312 (e.g., compared to additional unlock attempt images as shown in FIG. 8 ).
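- The delete policy can be sketched as follows; measuring redundancy as the distance to a stored template's nearest neighbor (smaller distance = more redundant) is an assumption, not the patent's stated measure:

```python
import math

def euclidean(a, b):
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def apply_delete_policy(backup_space, new_vector):
    # The backup space is full: replace the stored temporary template that
    # is most redundant with respect to the other stored templates.
    def redundancy(i):
        # Distance to the nearest other stored template; near-duplicates
        # score low and are therefore the most redundant.
        return min(euclidean(backup_space[i], backup_space[j])
                   for j in range(len(backup_space)) if j != i)
    most_redundant = min(range(len(backup_space)), key=redundancy)
    backup_space[most_redundant] = new_vector
```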
- the temporary template from sub-process 300 A is used as a template (e.g., temporary template 312 and/or dynamic template 226 ) for a selected amount of time.
- the amount of time allowed for use of the temporary template from sub-process 300 A may be limited (e.g., the temporary template has a limited lifetime).
- the selected amount of time is a maximum amount of successful unlock attempts using the temporary template from sub-process 300 A.
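- The limited lifetime of a sub-process 300 A temporary template can be sketched as follows (the class name and both expiry limits are assumptions for illustration):

```python
import time

class TemporaryTemplate:
    """Sketch: a temporary template from sub-process 300A expires after a
    fixed time window or after a maximum number of successful unlock
    attempts, whichever comes first."""

    def __init__(self, vector, lifetime_seconds=7 * 24 * 3600,
                 max_successful_unlocks=10):
        self.vector = vector
        self.created_at = time.time()
        self.lifetime_seconds = lifetime_seconds
        self.max_successful_unlocks = max_successful_unlocks
        self.successful_unlocks = 0

    def record_successful_unlock(self):
        self.successful_unlocks += 1

    def expired(self, now=None):
        now = time.time() if now is None else now
        return (now - self.created_at > self.lifetime_seconds or
                self.successful_unlocks >= self.max_successful_unlocks)
```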
- first template update process 300 may be used to update a user's enrollment profile (e.g., templates in the template space) when device 100 is unlocked or accessed using facial authentication recognition process 250 .
- First template update process 300 may be used, for example, to update a user's enrollment profile in response to gradual changes in the user's appearance (e.g., weight gain/loss).
- facial features of an authorized user may have changed drastically, or at least to a large enough extent, that the user may encounter difficulty unlocking or accessing features (e.g., operations) on device 100 using facial authentication recognition process 250 , depicted in FIG. 7 .
- Drastic or large extent changes in the user's facial appearance may include, for example, shaving of a beard or mustache, getting a large scar or other disfigurement to the face, making drastic changes in makeup, or making drastic hair changes.
- the user may also encounter difficulty in unlocking/accessing device 100 using facial authentication recognition process 250 if there was an error during the enrollment process and/or there are large differences between the user's environment during the unlock attempt and the time of enrollment.
- the user may then proceed with unlocking/accessing device 100 using the selected option and following additional audible and/or visual prompts as needed.
- the user's initial request for unlocking/accessing device 100 may be granted.
- device 100 may, at least temporarily, update the user's enrollment profile (e.g., using second template update process 400 described below) to allow the user to be able to unlock/access the device in future unlock attempts using facial authentication recognition process 250 despite the changes in the user's facial appearance that previously prevented the user from using the facial authentication recognition process to unlock/access the device.
- the user by successfully completing authentication using the selected option, may automatically be able to access device 100 using facial authentication recognition process 250 in future unlock attempts for at least a short period of time.
- FIG. 11 depicts a flowchart of an embodiment of second template update process 400 .
- Process 400 may be used when facial recognition authentication process 250 is unable to unlock device 100 but the device is unlocked using a passcode or other authentication protocol, as shown in FIG. 7 .
- process 400 may be used when device 100 is unlocked using the passcode immediately after the unlock attempt fails or within a specified time frame after the unlock attempt fails (e.g., in temporal proximity to the unlock attempt).
- process 400 is used to update template space 220 when facial features of the authorized user have changed to an extent that prevents feature vectors generated from an unlock attempt image (e.g., feature vectors 256 ) from being close enough (e.g., within the unlock threshold distance) to static templates 216 and/or dynamic templates 226 to allow device 100 to be unlocked using facial recognition authentication process 250 , shown in FIG. 7 .
- process 400 may be used for feature vector 256 B, which is depicted outside circle 265 (the unlock threshold circle) in FIG. 5 .
- Possible causes for the user not being able to unlock device 100 using facial recognition authentication process 250 include, but are not limited to, the authorized user shaving a beard or mustache, getting a large scar or other disfigurement to the face, making large changes in makeup, making a drastic hair change, or having another severe change in a facial feature. These changes may be immediate changes or “step changes” in the facial features of the authorized user that do not allow first template update process 300 to update template space 220 gradually over time.
- Second template update process 400 may begin by assessing 402 if matching score 260 is above threshold 404 .
- Threshold 404 may be a threshold score for determining if feature vector(s) 256 are similar (e.g., close) enough to feature vectors 210 (from static templates 216 ) that feature vector(s) 256 may potentially be used as another template.
- threshold 404 for process 400 is below unlock threshold 264 .
- Threshold 404 may be below unlock threshold 264 (e.g., more distance allowed between feature vectors and the templates) because the passcode (or other authentication) has been entered prior to beginning process 400 .
- the threshold for feature vector(s) 256 becoming a template in process 400 may be less strict than the threshold for unlocking the device and the threshold for process 300 , shown in FIG. 8 .
- Threshold 404 may, however, be set at a value that sets a maximum allowable distance between feature vectors 256 for the unlock attempt image and feature vectors for template space 220 . Setting the maximum allowable distance may be used to prevent a user that is not the authorized user but has the passcode for device 100 from being enabled for facial recognition authentication on the device.
- Threshold 404 may be set during manufacturing and/or by the firmware of device 100 .
- Threshold 404 may be updated (e.g., adjusted) by device 100 during operation of the device as described herein (e.g., after templates are added or replaced in template space 220 ).
- Process 400 may be stopped and feature vector(s) 256 deleted from device 100 if matching score 260 is below threshold 404 . If matching score 260 is above threshold 404 , then process 400 is continued. In some embodiments, after assessing 402 , one or more qualities in the unlock attempt image are assessed in 406 . For example, pose (e.g., pitch, yaw, and roll of the face), occlusion, attention, field of view, and/or distance in the unlock attempt image may be assessed in 406 . In some embodiments, pose and/or occlusion in the unlock attempt image are assessed using the landmark and/or occlusion maps described herein. In 408 , if suitable qualifications (as described above) are not met, then process 400 may be stopped.
- process 400 continues in 410 , with storing feature vector(s) 256 in a backup space in the memory of device 100 .
- the backup space in the memory for process 400 may be a different backup space than used for process 300 .
- the backup space in the memory for process 400 may be a temporary space in the memory that includes readable/writable memory partitioned from backup space used for process 300 .
- Feature vector(s) 256 may be stored in the memory as temporary template 412 .
- temporary template 412 may be compared to feature vectors for additional images from failed facial recognition authentication unlock attempts of device 100 . For example, in process 400 additional unlock failed attempt images may be captured in 414 . If the correct passcode is entered in 416 , then feature vectors for the images captured in 414 may be encoded in 418 to generate feature vectors 420 .
- feature vectors 420 are compared to the feature vector(s) for temporary template 412 . Comparison of feature vectors 420 and the feature vector(s) for temporary template 412 may provide matching score 424 . Matching score 424 may be compared in 426 to threshold 428 . Threshold 428 may be, for example, a similarity threshold or a threshold that defines at least a minimum level of matching between the feature vector(s) for temporary template 412 and feature vectors 420 obtained from the additional images from failed facial recognition authentication attempts that are followed by entering of the passcode for device 100 . Thus, threshold 428 may be set at a value that ensures at least a minimum amount of probability that the change in the user's features that caused the failed unlock attempt and generated temporary template 412 is still present in the images from additional failed unlock attempts using facial recognition authentication.
- Counts 430 and 432 may be continued until a desired number of failed unlock attempts are made using facial recognition authentication (e.g., a desired number of comparisons of matching score 424 and threshold 428 ). Once the desired number of attempts is made, the number of successful matches in 430 out of the total number of failed unlock attempts (e.g., the sum of counts 430 and 432 ) may be used to assess confidence score 434 for temporary template 412 .
- Confidence score 434 may be used to assess whether or not template 412 is added as dynamic template 226 to template space 220 , shown in FIG. 6 .
- the step change may remain for a number of successive unlock attempts using facial recognition authentication. For example, if the user shaved a beard, then the step change should remain for at least some length of time (e.g., at least a week). In such embodiments, if a successful unlock attempt (or a desired number of successful unlock attempts) using facial recognition authentication occurs before a selected number of successive unlock attempts is reached (e.g., 10 or 15 unlock attempts), then temporary template 412 may be deleted from the backup space in the memory. In some embodiments, the assumption that the step change may remain for a number of successive unlock attempts may not apply (e.g., if the user's step change was due to temporary application of makeup).
- confidence score 434 is compared against threshold 438 to assess if the confidence score is greater than the threshold.
- Threshold 438 may be a threshold selected to ensure a minimum number of successful comparisons of matching score 424 and threshold 428 are reached before allowing template 412 to be added to template space 220 .
- temporary template 412 may be added to template space 220 or temporary template 412 may replace a template in the template space 220 (e.g., replace one of dynamic templates 226 ). If confidence score 434 is less than threshold 438 , then temporary template 412 may be ignored or deleted in 442 .
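The counting and confidence logic of this second template update process can be sketched as follows. This is a minimal illustration, not the patent's implementation: the cosine-similarity score, the threshold values, and the function names are assumed stand-ins for matching score 424, threshold 428, confidence score 434, and threshold 438.

```python
import numpy as np

# Assumed stand-ins for the patent's threshold 428, threshold 438, and the
# "desired number" of failed unlock attempts; the real values are not given.
MATCH_THRESHOLD = 0.85
CONFIDENCE_THRESHOLD = 0.6
REQUIRED_ATTEMPTS = 10

def cosine_score(a: np.ndarray, b: np.ndarray) -> float:
    """Cosine similarity, used here as a stand-in for matching score 424."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def evaluate_temporary_template(temp_template: np.ndarray,
                                failed_attempt_vectors: list) -> bool:
    """Decide whether the temporary template earns enough confidence to be
    promoted into the template space (the add/ignore decision in 440/442)."""
    if len(failed_attempt_vectors) < REQUIRED_ATTEMPTS:
        return False  # keep counting until enough failed attempts are seen
    # Count the attempts whose score clears the match threshold (count 430).
    successes = sum(
        cosine_score(temp_template, v) > MATCH_THRESHOLD
        for v in failed_attempt_vectors
    )
    # Confidence is the fraction of successes out of all failed attempts.
    confidence = successes / len(failed_attempt_vectors)
    return confidence > CONFIDENCE_THRESHOLD
```

In this sketch the confidence score is simply the fraction of failed-unlock feature vectors that still match the temporary template, mirroring the ratio of count 430 to the sum of counts 430 and 432 described above.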
- temporary template 412 generated by process 400 may be added to dynamic portion 224 of template space 220 as one of dynamic templates 226 , shown in FIG. 6 .
- temporary template 412 is added to template space 220 in 440 without a need for comparison to dynamic templates 226 already in dynamic portion 224 . If the maximum number of allowed dynamic templates 226 in dynamic portion 224 has not been reached, then temporary template 412 is added to the dynamic portion as one of dynamic templates 226 .
- temporary template 412 may replace one of dynamic templates 226 in the dynamic portion.
- the temporary template may replace one of dynamic templates 226 in dynamic portion 224 even if the temporary template is more of an outlier than each of dynamic templates 226 .
- temporary template 412 replaces the largest outlier of dynamic templates 226 regardless of how much of an outlier the temporary template itself is.
- temporary template 412 may replace a dynamic template that is redundant (e.g., most redundant) to the existing dynamic templates even if the temporary template is more of an outlier than each of the dynamic templates.
- FIG. 12 depicts a representation of an embodiment of template space 220 represented as a feature space with a feature vector for temporary template 412 .
- static templates 216 are represented by circles
- dynamic templates 226 are represented by diamonds
- temporary template 412 is represented by a star.
- static templates 216 may not be replaced by temporary template 412 .
- temporary template 412 may replace one of dynamic templates 226 .
- Statistical analysis of the feature vectors in the feature space correlating to template space 220 may generate a circle (e.g., circle 444 ) that most closely defines a maximum number of the feature vectors.
- the feature vector for dynamic template 226 ′ is the largest outlier among the feature vectors for dynamic templates 226 .
- temporary template 412 may replace dynamic template 226 ′ in template space 220 regardless of the position of the feature vector for the temporary template.
- the addition of the feature vector for temporary template 412 shifts circle 444 towards the feature vector for temporary template 412 and may cause the feature vector for dynamic template 226 ′ to become the largest outlier of the circle.
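The outlier replacement described above might be sketched as follows, under the simplifying assumptions that the statistically fitted "circle" is approximated by the mean of the pooled feature vectors and that distance is Euclidean; the function name is illustrative.

```python
import numpy as np

def replace_largest_outlier(dynamic_templates: np.ndarray,
                            temp_vector: np.ndarray) -> np.ndarray:
    """Replace the dynamic template that lies farthest from the pooled mean
    with the temporary template's feature vector.

    Simplifying assumptions: the fitted "circle" is approximated by the mean
    of the pooled vectors, and distance is Euclidean.
    dynamic_templates has shape (n_templates, n_features).
    """
    # Pool the candidate with the existing templates so the center shifts
    # toward the new vector, as described above for circle 444.
    pooled = np.vstack([dynamic_templates, temp_vector])
    center = pooled.mean(axis=0)
    # Find the existing dynamic template farthest from the shifted center.
    dists = np.linalg.norm(dynamic_templates - center, axis=1)
    outlier = int(np.argmax(dists))
    updated = dynamic_templates.copy()
    updated[outlier] = temp_vector
    return updated
```

Static templates 216 are deliberately absent from this sketch, since only dynamic templates 226 are candidates for replacement.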
- one or more thresholds for device 100 may be recalculated.
- a temporary template (e.g., either temporary template 312 or temporary template 412 ) may be used to unlock device 100 for a selected period of time while the temporary template is in the backup space of the memory (e.g., before the temporary template is added to template space 220 ).
- the temporary template may be used to unlock device 100 after the passcode (or other user authentication protocol) is used in combination with the temporary template.
- the passcode has been entered to unlock device 100 before temporary template 412 is generated and stored in the backup space of the device memory.
- Temporary template 412 may then be used to allow unlocking of device 100 using facial recognition authentication for a selected time period (e.g., a few days or a week). After the selected time period expires, if temporary template 412 has not been added to template space 220 , the user may be prompted for the passcode if facial recognition authentication of the user fails.
- multiple enrollment profiles are generated on device 100 .
- Multiple enrollment profiles may be generated, for example, to enroll multiple users on device 100 and/or to enroll multiple looks for a single user.
- Multiple looks for a single user may include looks that are substantially different and cannot be recognized using a single enrollment profile (e.g., user wears lots of makeup or has other drastic changes at different times of day/week).
- a single user can execute the enrollment process a first time to create a first enrollment profile while wearing glasses and execute the enrollment process a second time to create a second enrollment profile while not wearing glasses.
- image enrollment process 200 may be used to generate each enrollment profile as a separate profile on device 100 .
- process 200 may be used to create separate templates of enrollment images for each enrollment profile.
- the separate templates may be stored in different portions of the memory of device 100 (e.g., partitioned portions of the memory space used for storing the templates).
- facial recognition authentication process 250 may compare features in unlock attempt images to each of the different profiles (e.g., all the templates stored in memory). In certain embodiments, if a match is determined for any one of the enrollment profiles (e.g., the matching score is above the unlock threshold), then device 100 is unlocked. In some embodiments, if multiple enrollment profiles are stored on device 100 , the unlock threshold is increased (e.g., the requirement for matching is made more strict).
- the amount the unlock threshold is increased is based on the distance in feature space between the feature vectors associated with the templates for the new enrollment profile and the feature vectors associated with templates in existing enrollment profile(s) (e.g., the more distance there is between feature vectors in the template for the new enrollment profile and feature vectors in existing enrollment profiles, the more the unlock threshold is increased).
- the new unlock threshold may also be adjusted based on a match history of the existing enrollment profiles (e.g., the more matches in the history of the existing profiles, the more strict the threshold may be).
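One way the distance-based threshold adjustment could look (leaving out the match-history adjustment) is sketched below; the baseline threshold, scale factor, and function name are hypothetical, and the mean nearest-neighbor distance is only one plausible measure of separation between profiles.

```python
import numpy as np

# Hypothetical values; the patent does not give a baseline or scale factor.
BASE_UNLOCK_THRESHOLD = 0.80
DISTANCE_SCALE = 0.05

def adjusted_unlock_threshold(new_profile, existing_profiles) -> float:
    """Raise the unlock threshold in proportion to the feature-space
    separation between a new enrollment profile and the existing ones."""
    if not existing_profiles:
        return BASE_UNLOCK_THRESHOLD
    # For each template vector in the new profile, find the distance to its
    # nearest vector in any existing profile, then average.
    dists = []
    for vec in new_profile:
        nearest = min(
            float(np.linalg.norm(vec - old_vec))
            for old in existing_profiles for old_vec in old
        )
        dists.append(nearest)
    separation = float(np.mean(dists))
    # More separation between profiles -> stricter matching requirement.
    return min(1.0, BASE_UNLOCK_THRESHOLD + DISTANCE_SCALE * separation)
```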
- each enrollment profile may be associated with its own template update processes (e.g., each enrollment profile operates with its own first template update process 300 and second template update process 400 ).
- the enrollment profile that is matched with the unlock attempt image in process 250 may be processed (e.g., updated) using its corresponding first template update process 300 .
- each of the matching enrollment profiles may be processed (e.g., updated) using its respective first template update process 300 .
- the enrollment profile that has feature vectors closest (e.g., least distance) to the feature vectors of the unlock attempt image may be processed (e.g., updated) using its corresponding second template update process 400 .
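Selecting the enrollment profile with the least feature-space distance to the unlock attempt, as described above, might look like the following sketch; the dictionary layout and function name are assumptions for illustration.

```python
import numpy as np

def closest_profile(unlock_vec, profiles: dict) -> str:
    """Return the name of the enrollment profile whose templates lie nearest
    (least feature-space distance) to the unlock-attempt feature vector.

    `profiles` maps a profile name to a list of template feature vectors;
    the layout and names are assumptions for illustration.
    """
    def min_dist(templates):
        return min(float(np.linalg.norm(unlock_vec - t)) for t in templates)
    return min(profiles, key=lambda name: min_dist(profiles[name]))
```

The selected profile would then be the one whose second template update process 400 is run for the unlock attempt.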
- one or more process steps described herein may be performed by one or more processors (e.g., a computer processor) executing instructions stored on a non-transitory computer-readable medium.
- process 200 , process 250 , process 300 , and process 400 shown in FIGS. 4, 7, 8, and 11 , may have one or more steps performed by one or more processors executing instructions stored as program instructions in a computer readable storage medium (e.g., a non-transitory computer readable storage medium).
- FIG. 13 depicts a block diagram of one embodiment of exemplary computer system 510 .
- Exemplary computer system 510 may be used to implement one or more embodiments described herein.
- computer system 510 is operable by a user to implement one or more embodiments described herein such as process 200 , process 250 , process 300 , and process 400 , shown in FIGS. 4, 7, 8, and 11 .
- computer system 510 includes processor 512 , memory 514 , and various peripheral devices 516 .
- Processor 512 is coupled to memory 514 and peripheral devices 516 .
- Processor 512 is configured to execute instructions, including the instructions for process 200 , process 250 , process 300 , and/or process 400 , which may be in software.
- processor 512 may implement any desired instruction set (e.g. Intel Architecture-32 (IA-32, also known as x86), IA-32 with 64 bit extensions, x86-64, PowerPC, Sparc, MIPS, ARM, IA-64, etc.).
- computer system 510 may include more than one processor.
- processor 512 may include one or more processors or one or more processor cores.
- Processor 512 may be coupled to memory 514 and peripheral devices 516 in any desired fashion.
- processor 512 may be coupled to memory 514 and/or peripheral devices 516 via various interconnects.
- one or more bridge chips may be used to couple processor 512 , memory 514 , and peripheral devices 516 .
- Memory 514 may comprise any type of memory system.
- memory 514 may comprise DRAM, and more particularly double data rate (DDR) SDRAM, RDRAM, etc.
- a memory controller may be included to interface to memory 514 , and/or processor 512 may include a memory controller.
- Memory 514 may store the instructions to be executed by processor 512 during use, data to be operated upon by the processor during use, etc.
- Peripheral devices 516 may represent any sort of hardware devices that may be included in computer system 510 or coupled thereto (e.g., storage devices, optionally including computer accessible storage medium 600 , shown in FIG. 14 , other input/output (I/O) devices such as video hardware, audio hardware, user interface devices, networking hardware, etc.).
- FIG. 14 depicts a block diagram of one embodiment of computer accessible storage medium 600 including one or more data structures representative of device 100 (depicted in FIG. 1 ) included in an integrated circuit design and one or more code sequences representative of process 200 , process 250 , process 300 , and/or process 400 (shown in FIGS. 4, 7, 8, and 11 ).
- Each code sequence may include one or more instructions, which when executed by a processor in a computer, implement the operations described for the corresponding code sequence.
- a computer accessible storage medium may include any storage media accessible by a computer during use to provide instructions and/or data to the computer.
- a computer accessible storage medium may include non-transitory storage media such as magnetic or optical media, e.g., disk (fixed or removable), tape, CD-ROM, DVD-ROM, CD-R, CD-RW, DVD-R, DVD-RW, or Blu-Ray.
- Storage media may further include volatile or non-volatile memory media such as RAM (e.g. synchronous dynamic RAM (SDRAM), Rambus DRAM (RDRAM), static RAM (SRAM), etc.), ROM, or Flash memory.
- the storage media may be physically included within the computer to which the storage media provides instructions/data.
- the storage media may be connected to the computer.
- the storage media may be connected to the computer over a network or wireless link, such as network attached storage.
- the storage media may be connected through a peripheral interface such as the Universal Serial Bus (USB).
- computer accessible storage medium 600 may store data in a non-transitory manner, where non-transitory in this context may refer to not transmitting the instructions/data on a signal.
- non-transitory storage may be volatile (and may lose the stored instructions/data in response to a power down) or non-volatile.
Abstract
Templates used for a facial recognition process for authentication of a user to use a device may be updated by the device as features of the user change over time. Features of the user may gradually change over time due to changes such as facial hair changes, haircuts, gaining/losing weight, and/or aging. Updating the templates used for the facial recognition process may allow the user to continue being authenticated as features of the user change without the need for additional enrollments of the user.
Description
- This patent claims priority to U.S. patent application Ser. No. 15/881,261, filed Jan. 26, 2018, now U.S. Pat. No. 10,503,992, which claims priority to U.S. Provisional Patent Application No. 62/539,739 to Mostafa et al., entitled “ONLINE LEARNING TO UPDATE TEMPLATES USED IN FACIAL RECOGNITION FOR CHANGES IN THE USER”, filed Aug. 1, 2017 and to U.S. Provisional Patent Application No. 62/556,850 to Mostafa et al., entitled “PROCESS FOR UPDATING TEMPLATES USED IN FACIAL RECOGNITION”, filed Sep. 11, 2017, both of which are incorporated by reference in their entirety.
- Embodiments described herein relate to methods and systems for face detection and recognition in images captured by a camera on a device.
- Biometric authentication processes are being used more frequently to allow users to more readily access their devices without the need for passcode or password authentication. One example of a biometric authentication process is fingerprint authentication using a fingerprint sensor. Facial recognition is another biometric process that may be used for authentication of an authorized user of a device. Facial recognition processes are generally used to identify individuals in an image and/or compare individuals in images to a database of individuals to match the faces of individuals.
- For authentication using facial recognition, the facial recognition system generally struggles to adapt to changes in the authorized user's facial features over time so that the user may continue to access the device using facial recognition even as facial features of the user change and create differences in images of the user. For example, the user's facial features may change over time due to facial hair changes, haircuts, gaining/losing weight, and/or aging. The facial recognition system, however, needs to remain secure. Thus, the system faces competing demands: it must adapt to changes in the user while also ensuring that the differences are recognized as changes in the user, and not as differences between the user and another person, to inhibit unwanted access to the device. Balancing these demands generally precludes adaptation.
- Templates for facial recognition may be generated from enrollment images of the user obtained by a camera associated with a device. Images to be used for enrollment may be selected from the images captured during an enrollment process based on having certain acceptable criteria (e.g., pose is proper, not much occlusion of user, user in field of view, eyes are not closed, etc.). The selected enrollment images may be encoded to generate templates, where the templates include feature vectors to describe the facial features of the user.
- When the user attempts to gain access to the device using facial recognition authentication, a captured image of the user may be encoded to generate feature vectors for an “unlock” image (e.g., an image captured to unlock the device). The feature vectors for the unlock image may be compared to the templates to determine if the unlock image matches the user's image represented by the templates. For example, a matching score may be assessed by comparing the feature vectors generated from the unlock image to the feature vectors in the templates. The matching score may be higher the less distance there is between the feature vectors for the unlock image and the feature vectors in the templates in the feature space (e.g., the matching score is higher when the feature vectors are more similar). If the matching score is above a threshold value, the device is unlocked.
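A minimal sketch of this distance-based matching is given below; the inverse-distance mapping and the unlock threshold value are illustrative assumptions, since the patent does not specify how distance is converted to a matching score.

```python
import numpy as np

UNLOCK_THRESHOLD = 0.9  # illustrative value, not from the patent

def matching_score(unlock_vec, template_vecs) -> float:
    """Score is higher the smaller the feature-space distance between the
    unlock vector and its closest template; identical vectors score 1.0.
    The inverse-distance mapping is one simple realization, not the
    patent's actual scoring function."""
    dists = [float(np.linalg.norm(unlock_vec - t)) for t in template_vecs]
    return 1.0 / (1.0 + min(dists))

def try_unlock(unlock_vec, template_vecs) -> bool:
    """Unlock the device when the matching score clears the threshold."""
    return matching_score(unlock_vec, template_vecs) > UNLOCK_THRESHOLD
```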
- In certain embodiments, when the device is unlocked, the feature vectors of the unlock image are added as a temporary template in an additional (e.g., backup) storage space of the device. In some embodiments, the temporary template is only added if the matching score for the unlock image is above a second threshold value that is above the unlock threshold (e.g., adding the temporary template requires a closer match than unlocking the device).
- Once the temporary template is stored in the storage space, it may be compared to additional unlock images obtained during attempted unlocking of the device. If the temporary template continues to match the additional unlock attempt images for a certain number or percentage of unlock attempts, the temporary template may be added to the templates (e.g., the template space) created from the enrollment images based on the confidence in the temporary template.
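The two-threshold gating described in the preceding two paragraphs — unlock on a good match, but store a temporary template only on a closer match — can be sketched as follows; the threshold values are illustrative, since the patent only says the storage threshold is stricter than the unlock threshold.

```python
# Illustrative thresholds; the patent only requires the storage threshold
# to be stricter than the unlock threshold.
UNLOCK_THRESHOLD = 0.80
TEMP_STORE_THRESHOLD = 0.90

def handle_unlock_attempt(score: float, backup_space: list, feature_vec) -> bool:
    """Gate a successful unlock and, on a closer match, store the unlock
    image's feature vector as a temporary template candidate."""
    if score <= UNLOCK_THRESHOLD:
        return False  # no unlock, nothing stored
    if score > TEMP_STORE_THRESHOLD:
        # Candidate for later promotion into the template space.
        backup_space.append(feature_vec)
    return True  # device unlocks either way
```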
- Features and advantages of the methods and apparatus of the embodiments described in this disclosure will be more fully appreciated by reference to the following detailed description of presently preferred but nonetheless illustrative embodiments in accordance with the embodiments described in this disclosure when taken in conjunction with the accompanying drawings in which:
- FIG. 1 depicts a representation of an embodiment of a device including a camera.
- FIG. 2 depicts a representation of an embodiment of a camera.
- FIG. 3 depicts a representation of an embodiment of a processor on a device.
- FIG. 4 depicts a flowchart of an embodiment of an image enrollment process for an authorized user of a device.
- FIG. 5 depicts a representation of an embodiment of a feature space with feature vectors after an enrollment process.
- FIG. 6 depicts a representation of an embodiment of a template space of a memory.
- FIG. 7 depicts a flowchart of an embodiment of a facial recognition authentication process.
- FIG. 8 depicts a flowchart of an embodiment of a template update process.
- FIG. 9 depicts a representation of an embodiment of a template space represented as a feature space.
- FIG. 10 depicts a flowchart of an embodiment of a template update sub-process.
- FIG. 11 depicts a flowchart of an additional embodiment of a template update process.
- FIG. 12 depicts a representation of an additional embodiment of a template space represented as a feature space.
- FIG. 13 depicts a block diagram of one embodiment of an exemplary computer system.
- FIG. 14 depicts a block diagram of one embodiment of a computer accessible storage medium.
- While embodiments described in this disclosure may be susceptible to various modifications and alternative forms, specific embodiments thereof are shown by way of example in the drawings and will herein be described in detail. It should be understood, however, that the drawings and detailed description thereto are not intended to limit the embodiments to the particular form disclosed, but on the contrary, the intention is to cover all modifications, equivalents and alternatives falling within the spirit and scope of the appended claims. The headings used herein are for organizational purposes only and are not meant to be used to limit the scope of the description. As used throughout this application, the word “may” is used in a permissive sense (i.e., meaning having the potential to), rather than the mandatory sense (i.e., meaning must). Similarly, the words “include”, “including”, and “includes” mean including, but not limited to.
- Various units, circuits, or other components may be described as “configured to” perform a task or tasks. In such contexts, “configured to” is a broad recitation of structure generally meaning “having circuitry that” performs the task or tasks during operation. As such, the unit/circuit/component can be configured to perform the task even when the unit/circuit/component is not currently on. In general, the circuitry that forms the structure corresponding to “configured to” may include hardware circuits and/or memory storing program instructions executable to implement the operation. The memory can include volatile memory such as static or dynamic random access memory and/or nonvolatile memory such as optical or magnetic disk storage, flash memory, programmable read-only memories, etc. The hardware circuits may include any combination of combinatorial logic circuitry, clocked storage devices such as flops, registers, latches, etc., finite state machines, memory such as static random access memory or embedded dynamic random access memory, custom designed circuitry, programmable logic arrays, etc. Similarly, various units/circuits/components may be described as performing a task or tasks, for convenience in the description. Such descriptions should be interpreted as including the phrase “configured to.” Reciting a unit/circuit/component that is configured to perform one or more tasks is expressly intended not to invoke 35 U.S.C. § 112(f) interpretation for that unit/circuit/component.
- In an embodiment, hardware circuits in accordance with this disclosure may be implemented by coding the description of the circuit in a hardware description language (HDL) such as Verilog or VHDL. The HDL description may be synthesized against a library of cells designed for a given integrated circuit fabrication technology, and may be modified for timing, power, and other reasons to result in a final design database that may be transmitted to a foundry to generate masks and ultimately produce the integrated circuit. Some hardware circuits or portions thereof may also be custom-designed in a schematic editor and captured into the integrated circuit design along with synthesized circuitry. The integrated circuits may include transistors and may further include other circuit elements (e.g. passive elements such as capacitors, resistors, inductors, etc.) and interconnect between the transistors and circuit elements. Some embodiments may implement multiple integrated circuits coupled together to implement the hardware circuits, and/or discrete elements may be used in some embodiments.
- The scope of the present disclosure includes any feature or combination of features disclosed herein (either explicitly or implicitly), or any generalization thereof, whether or not it mitigates any or all of the problems addressed herein. Accordingly, new claims may be formulated during prosecution of this application (or an application claiming priority thereto) to any such combination of features. In particular, with reference to the appended claims, features from dependent claims may be combined with those of the independent claims and features from respective independent claims may be combined in any appropriate manner and not merely in the specific combinations enumerated in the appended claims.
- This specification includes references to “one embodiment” or “an embodiment.” The appearances of the phrases “in one embodiment” or “in an embodiment” do not necessarily refer to the same embodiment, although embodiments that include any combination of the features are generally contemplated, unless expressly disclaimed herein. Particular features, structures, or characteristics may be combined in any suitable manner consistent with this disclosure.
- The present disclosure further contemplates that the entities responsible for the collection, analysis, disclosure, transfer, storage, or other use of such personal information data will comply with well-established privacy policies and/or privacy practices. In particular, such entities should implement and consistently use privacy policies and practices that are generally recognized as meeting or exceeding industry or governmental requirements for maintaining personal information data private and secure. For example, in the case of unlocking and/or authorizing devices using facial recognition, personal information from users should be collected for legitimate and reasonable uses of the entity and not shared or sold outside of those legitimate uses. Further, such collection should occur only after receiving the informed consent of the users. Additionally, such entities would take any needed steps for safeguarding and securing access to such personal information data and ensuring that others with access to the personal information data adhere to their privacy policies and procedures. Further, such entities can subject themselves to evaluation by third parties to certify their adherence to widely accepted privacy policies and practices.
- Despite the foregoing, the present disclosure also contemplates embodiments in which users selectively block the use of, or access to, personal information data. That is, the present disclosure contemplates that hardware and/or software elements can be provided to prevent or block access to such personal information data. For example, the present technology can be configured to allow users to select to “opt in” or “opt out” of participation in the collection of personal information data during registration for services.
-
FIG. 1 depicts a representation of an embodiment of a device including a camera. In certain embodiments,device 100 includescamera 102,processor 104,memory 106, anddisplay 108.Device 100 may be a small computing device, which may be, in some cases, small enough to be handheld (and hence also commonly known as a handheld computer or simply a handheld). In certain embodiments,device 100 is any of various types of computer systems devices which are mobile or portable and which perform wireless communications using WLAN communication (e.g., a “mobile device”). Examples of mobile devices include mobile telephones or smart phones, and tablet computers. Various other types of devices may fall into this category if they include wireless or RF communication capabilities (e.g., Wi-Fi, cellular, and/or Bluetooth), such as laptop computers, portable gaming devices, portable Internet devices, and other handheld devices, as well as wearable devices such as smart watches, smart glasses, headphones, pendants, earpieces, etc. In general, the term “mobile device” can be broadly defined to encompass any electronic, computing, and/or telecommunications device (or combination of devices) which is easily transported by a user and capable of wireless communication using, for example, WLAN, Wi-Fi, cellular, and/or Bluetooth. In certain embodiments,device 100 includes any device used by a user withprocessor 104,memory 106, anddisplay 108.Display 108 may be, for example, an LCD screen or touchscreen. In some embodiments,display 108 includes a user input interface for device 100 (e.g., the display allows interactive input for the user). -
Camera 102 may be used to capture images of the external environment ofdevice 100. In certain embodiments,camera 102 is positioned to capture images in front ofdisplay 108.Camera 102 may be positioned to capture images of the user (e.g., the user's face) while the user interacts withdisplay 108.FIG. 2 depicts a representation of an embodiment ofcamera 102. In certain embodiments,camera 102 includes one or more lenses and one ormore image sensors 103 for capturing digital images. Digital images captured bycamera 102 may include, for example, still images, video images, and/or frame-by-frame images. - In certain embodiments,
camera 102 includesimage sensor 103.Image sensor 103 may be, for example, an array of sensors. Sensors in the sensor array may include, but not be limited to, charge coupled device (CCD) and/or complementary metal oxide semiconductor (CMOS) sensor elements to capture infrared images (IR) or other non-visible electromagnetic radiation. In some embodiments,camera 102 includes more than one image sensor to capture multiple types of images. For example,camera 102 may include both IR sensors and RGB (red, green, and blue) sensors. In certain embodiments,camera 102 includes illuminators 105 for illuminating surfaces (or subjects) with the different types of light detected byimage sensor 103. For example,camera 102 may include an illuminator for visible light (e.g., a “flash illuminator) and/or illuminators for infrared light (e.g., a flood IR source and a speckle pattern projector). In some embodiments, the flood IR source and speckle pattern projector are other wavelengths of light (e.g., not infrared). In certain embodiments, illuminators 105 include an array of light sources such as, but not limited to, VCSELs (vertical-cavity surface-emitting lasers). In some embodiments,image sensors 103 and illuminators 105 are included in a single chip package. In some embodiments,image sensors 103 and illuminators 105 are located on separate chip packages. - In certain embodiments,
image sensor 103 is an IR image sensor used to capture infrared images used for face detection and/or depth detection. For face detection,illuminator 105A may provide flood IR illumination to flood the subject with IR illumination (e.g., an IR flashlight) andimage sensor 103 may capture images of the flood IR illuminated subject. Flood IR illumination images may be, for example, two-dimensional images of the subject illuminated by IR light. For depth detection or generating a depth map image,illuminator 105B may provide IR illumination with a speckle pattern. The speckle pattern may be a pattern of light spots (e.g., a pattern of dots) with a known, and controllable, configuration and pattern projected onto a subject.Illuminator 105B may include a VCSEL array configured to form the speckle pattern or a light source and patterned transparency configured to form the speckle pattern. The configuration and pattern of the speckle pattern provided byilluminator 105B may be selected, for example, based on a desired speckle pattern density (e.g., dot density) at the subject.Image sensor 103 may capture images of the subject illuminated by the speckle pattern. The captured image of the speckle pattern on the subject may be assessed (e.g., analyzed and/or processed) by an imaging and processing system (e.g., an image signal processor (ISP) as described herein) to produce or estimate a three-dimensional map of the subject (e.g., a depth map or depth map image of the subject). Examples of depth map imaging are described in U.S. Pat. No. 8,150,142 to Freedman et al., U.S. Pat. No. 8,749,796 to Pesach et al., and U.S. Pat. No. 8,384,997 to Shpunt et al., which are incorporated by reference as if fully set forth herein, and in U.S. Patent Application Publication No. 2016/0178915 to Mor et al., which is incorporated by reference as if fully set forth herein. - In certain embodiments, images captured by
camera 102 include images with the user's face (e.g., the user's face is included in the images). An image with the user's face may include any digital image with the user's face shown within the frame of the image. Such an image may include just the user's face or may include the user's face in a smaller part or portion of the image. The user's face may be captured with sufficient resolution in the image to allow image processing of one or more features of the user's face in the image. - Images captured by
camera 102 may be processed by processor 104. FIG. 3 depicts a representation of an embodiment of processor 104 included in device 100. Processor 104 may include circuitry configured to execute instructions defined in an instruction set architecture implemented by the processor. Processor 104 may execute the main control software of device 100, such as an operating system. Generally, software executed by processor 104 during use may control the other components of device 100 to realize the desired functionality of the device. Processor 104 may also execute other software, such as application programs. These applications may provide user functionality, and may rely on the operating system for lower-level device control, scheduling, memory management, etc. - In certain embodiments,
processor 104 includes image signal processor (ISP) 110. ISP 110 may include circuitry suitable for processing images (e.g., image signal processing circuitry) received from camera 102. ISP 110 may include any hardware and/or software (e.g., program instructions) capable of processing or analyzing images captured by camera 102. - In certain embodiments,
processor 104 includes secure enclave processor (SEP) 112. In some embodiments, SEP 112 is involved in a facial recognition authentication process involving images captured by camera 102 and processed by ISP 110. SEP 112 may be a secure circuit configured to authenticate an active user (e.g., the user that is currently using device 100) as authorized to use device 100. A “secure circuit” may be a circuit that protects an isolated, internal resource from being directly accessed by an external circuit. The internal resource may be memory (e.g., memory 106) that stores sensitive data such as personal information (e.g., biometric information, credit card information, etc.), encryption keys, random number generator seeds, etc. The internal resource may also be circuitry that performs services/operations associated with sensitive data. As described herein, SEP 112 may include any hardware and/or software (e.g., program instructions) capable of authenticating a user using the facial recognition authentication process. The facial recognition authentication process may authenticate a user by capturing images of the user with camera 102 and comparing the captured images to previously collected images of an authorized user for device 100. In some embodiments, the functions of ISP 110 and SEP 112 may be performed by a single processor (e.g., either ISP 110 or SEP 112 may perform both functionalities and the other processor may be omitted). - In certain embodiments,
processor 104 performs an enrollment process (e.g., image enrollment process 200, as shown in FIG. 4, or a registration process) to capture images (e.g., the previously collected images) for an authorized user of device 100. During the enrollment process, camera module 102 may capture (e.g., collect) images and/or image data from an authorized user in order to permit SEP 112 (or another security process) to subsequently authenticate the user using the facial recognition authentication process. In some embodiments, the images and/or image data (e.g., feature data from the images) from the enrollment process are used to generate a template in device 100. The template may be stored, for example, in a template space in memory 106 of device 100. In some embodiments, the template space may be updated by the addition and/or subtraction of images from the template. A template update process (e.g., first template update process 300 and/or second template update process 400 described herein) may be performed by processor 104 to add and/or subtract template images from the template space. For example, the template space may be updated with additional images to adapt to changes in the authorized user's appearance and/or changes in hardware performance over time. Images may be subtracted from the template space to compensate for the addition of images when the template space for storing template images is full. - In some embodiments,
camera module 102 captures multiple pairs of images for a facial recognition session. Each pair may include an image captured using a two-dimensional capture mode (e.g., a flood IR image) and an image captured using a three-dimensional capture mode (e.g., a depth map image). In certain embodiments, ISP 110 and/or SEP 112 process the flood IR images and depth map images independently of each other before a final authentication decision is made for the user. For example, ISP 110 may process the images independently to determine characteristics of each image separately. SEP 112 may then compare the separate image characteristics with stored template images for each type of image to generate an authentication score (e.g., a matching score or other ranking of matching between the user in the captured image and in the stored template images) for each separate image. The authentication scores for the separate images (e.g., the flood IR and depth map images) may be combined to make a decision on the identity of the user and, if authenticated, allow the user to use device 100 (e.g., unlock the device). - In some embodiments,
ISP 110 and/or SEP 112 combine the images in each pair to provide a composite image that is used for facial recognition. In some embodiments, ISP 110 processes the composite image to determine characteristics of the image, which SEP 112 may compare with the stored template images to make a decision on the identity of the user and, if authenticated, allow the user to use device 100. - In some embodiments, the combination of flood IR image data and depth map image data may allow
SEP 112 to compare faces in a three-dimensional space. In some embodiments, camera module 102 communicates image data to SEP 112 via a secure channel. The secure channel may be, for example, either a dedicated path for communicating data (i.e., a path shared by only the intended participants) or a path for communicating encrypted data using cryptographic keys known only to the intended participants. In some embodiments, camera module 102 and/or ISP 110 may perform various processing operations on image data before supplying the image data to SEP 112 in order to facilitate the comparison performed by the SEP. - In certain embodiments,
processor 104 operates one or more machine learning models. Machine learning models may be operated using any combination of hardware and/or software (e.g., program instructions) located in processor 104 and/or on device 100. In some embodiments, one or more neural network modules 114 are used to operate the machine learning models on device 100. Neural network modules 114 may be located in ISP 110 and/or SEP 112. -
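The two-modality scoring described above (independent flood IR and depth map authentication scores combined into one decision) can be sketched as follows. The function names, the equal weighting, and the threshold value are illustrative assumptions, not the patent's actual parameters.

```python
# A minimal sketch of combining per-modality authentication scores: flood
# IR and depth map images are scored separately against their own
# templates, then the scores are fused into one decision. Weighting and
# threshold values here are assumptions for illustration.

def fuse_scores(flood_score: float, depth_score: float,
                flood_weight: float = 0.5) -> float:
    """Combine per-modality matching scores into one authentication score."""
    return flood_weight * flood_score + (1.0 - flood_weight) * depth_score

def authenticate(flood_score: float, depth_score: float,
                 unlock_threshold: float = 0.8) -> bool:
    """Unlock only if the fused score clears the unlock threshold."""
    return fuse_scores(flood_score, depth_score) >= unlock_threshold
```

A weighted sum is only one possible fusion rule; a device could equally require both per-modality scores to clear separate thresholds.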
Neural network module 114 may include any combination of hardware and/or software (e.g., program instructions) located in processor 104 and/or on device 100. In some embodiments, neural network module 114 is a multi-scale neural network or another neural network where the scale of kernels used in the network can vary. In some embodiments, neural network module 114 is a recurrent neural network (RNN) such as, but not limited to, a gated recurrent unit (GRU) recurrent neural network or a long short-term memory (LSTM) recurrent neural network. -
Neural network module 114 may include neural network circuitry installed or configured with operating parameters that have been learned by the neural network module or a similar neural network module (e.g., a neural network module operating on a different processor or device). For example, a neural network module may be trained using training images (e.g., reference images) and/or other training data to generate operating parameters for the neural network circuitry. The operating parameters generated from the training may then be provided to neural network module 114 installed on device 100. Providing the operating parameters generated from training to neural network module 114 on device 100 allows the neural network module to operate using training information programmed into the neural network module (e.g., the training-generated operating parameters may be used by the neural network module to operate on and assess images captured by the device). -
FIG. 4 depicts a flowchart of an embodiment of image enrollment process 200 for an authorized user of device 100. Process 200 may be used to create one or more templates of images (e.g., an enrollment profile) for an authorized user of device 100 that are stored in the device (e.g., in a memory coupled to SEP 112) and then used in a facial recognition process to allow the user to use the device (e.g., unlock the device). The enrollment profile (e.g., template of images) created by image enrollment process 200 may be associated with that particular image enrollment process (and the images used to enroll during the process). For example, an authorized user may create a first enrollment profile associated with the user that includes the user's face with glasses. The authorized user may also create a second enrollment profile associated with the user that includes the user's face without glasses. Each of the first and second enrollment profiles may then be used in the facial recognition process to allow the user to use the device (e.g., unlock the device). - In certain embodiments,
process 200 is used when device 100 is used for the first time by the authorized user and/or when the user opts to enroll in a facial recognition process. For example, process 200 may be initiated when device 100 is first obtained by the authorized user (e.g., purchased by the authorized user) and turned on for the first time by the authorized user. In some embodiments, process 200 may be initiated by the authorized user when the user desires to enroll in a facial recognition process, update security settings for device 100, and/or re-enroll. - In certain embodiments,
process 200 begins with authenticating the user in 202. In 202, the user may be authenticated on device 100 using a non-facial authentication process. For example, the user may be authenticated as an authorized user by entering a passcode, entering a password, or using another user authentication protocol other than facial recognition. After the user is authenticated in 202, one or more enrollment (e.g., reference or registration) images of the user are captured in 204. The enrollment images may include images of the user illuminated by flood illuminator 105A (e.g., flood IR images) and/or images of the user illuminated by speckle illuminator 105B (e.g., depth map images). As described herein, flood IR images and depth map images may be used independently and/or in combination in facial recognition processes on device 100 (e.g., the images may independently be used to provide an authentication decision and the decisions may be combined to determine a final decision on user authentication). - The enrollment images may be captured using
camera 102 as the user interacts with device 100. For example, the enrollment images may be captured as the user follows prompts on display 108 of device 100. The prompts may include instructions for the user to make different motions and/or poses while the enrollment images are being captured. During 204, camera 102 may capture multiple images for each motion and/or pose performed by the user. Capturing images for different motions and/or different poses of the user, where the images still have a relatively clear depiction of the user, may be useful in providing a better variety of enrollment images that enable the user to be authenticated without having to be in a limited or restricted position relative to camera 102 on device 100. - After the multiple enrollment images are captured in 204, selection of enrollment images for further image processing may be made in 206. Selection of
enrollment images in 206, and further processing of the images, may be performed by ISP 110 and/or SEP 112. Selection of enrollment images for further processing may include selecting images that are suitable for use as template images. For example, the selection of images that are suitable for use as template images in 206 may include assessing one or more selected criteria for the images and selecting images that meet the selected criteria. The selected images may be used as template images for the user. Selected criteria may include, but not be limited to: the face of the user being in the field of view of the camera; a pose of the user being proper (e.g., the user's face is not turned too far in any direction from the camera, i.e., the pitch, yaw, and/or roll of the face are not above certain levels); a distance to the face of the user being within a certain range; the face of the user having occlusion below a minimum value (e.g., the user's face is not occluded (blocked) more than a minimum amount by another object); the user paying attention to the camera (e.g., eyes of the user looking at the camera); eyes of the user not being closed; and proper lighting (illumination) in the image. In some embodiments, if more than one face is detected in an enrollment image, the enrollment image is rejected and not used (e.g., not selected) for further processing. Selection of images suitable for further processing may be rule-based, with the rules requiring that the images meet a certain number of the selected criteria or all of the selected criteria. In some embodiments, occlusion maps and/or landmark feature maps are used in identifying features of the user (e.g., facial features such as eyes, nose, and mouth) in the images and assessing the selected criteria in the images. - After images are selected in 206, features of the user in the selected (template) images may be encoded in 208.
Encoding of the selected images may include encoding features (e.g., facial features) of the user to define the features in the images as one or more feature vectors in a feature space.
Feature vectors 210 may be the output of the encoding in 208. A feature space may be an n-dimensional feature space. A feature vector may be an n-dimensional vector of numerical values that define features from the image in the feature space (e.g., the feature vector may be a vector of numerical values that define facial features of the user in the image). -
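The rule-based selection of enrollment images in 206, described above, can be sketched as below. The quality fields and the numeric limits are assumptions for illustration; the patent does not specify exact values.

```python
# A sketch of rule-based enrollment-image selection: each image is assumed
# to have been annotated with the qualities named in the text (pose,
# distance, occlusion, attention, lighting, face count). All limits here
# are invented for illustration.
from dataclasses import dataclass

@dataclass
class ImageQualities:
    face_in_view: bool
    pitch: float          # degrees of head rotation
    yaw: float
    roll: float
    distance_cm: float    # distance from camera to face
    occlusion: float      # fraction of the face occluded, 0..1
    attention: bool       # eyes looking at the camera
    eyes_open: bool
    well_lit: bool
    num_faces: int        # faces detected in the image

def suitable_for_template(q: ImageQualities) -> bool:
    """Apply the selected criteria; reject multi-face images outright."""
    if q.num_faces != 1:
        return False
    return (q.face_in_view
            and max(abs(q.pitch), abs(q.yaw), abs(q.roll)) <= 30.0
            and 20.0 <= q.distance_cm <= 80.0
            and q.occlusion <= 0.1
            and q.attention
            and q.eyes_open
            and q.well_lit)
```

This variant requires every criterion to pass; the text also allows rules that accept an image meeting only a certain number of the criteria.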
FIG. 5 depicts a representation of an embodiment of feature space 212 with feature vectors 210. Each feature vector 210 (black dot) may define facial features for the user from either a single image, from a composite image (e.g., an image that is a composite of several images), or from multiple images. As feature vectors 210 are generated from a single user's facial features, the feature vectors may be similar to one another because the feature vectors are associated with the same person and may have some “clustering”, as shown by circle 211 in FIG. 5. Feature vectors 256A and 256B (open diamonds) represent feature vectors from unlock attempt images assessed in facial recognition process 250, described below. - As shown in
FIG. 4, process 200 may include, in 214, storing feature vectors 210 in a memory of device 100 (e.g., a memory protected by SEP 112). In certain embodiments, feature vectors 210 are stored as static templates 216 (e.g., enrollment templates or reference templates) in a template space of the memory. In some embodiments, static templates 216 (and other templates described herein) include separate templates for feature vectors from the enrollment flood IR images and for feature vectors from the enrollment depth map images. It is to be understood that the separate templates for flood IR images and depth map images may be used independently and/or in combination during additional processes described herein. For simplicity in this disclosure, static templates 216 are described generically and it should be understood that static templates 216 (and the use of the templates) may refer to either templates for flood IR images or templates for depth map images. In some embodiments, a combination of the flood IR images and depth map images may be used. For example, pairs of flood IR images and depth map images may be stored in static templates 216 to be used in one or more facial recognition processes on device 100. -
FIG. 6 depicts a representation of an embodiment of template space 220 of the memory. In some embodiments, template space 220 includes static portion 222 and dynamic portion 224. Static templates 216 may be, for example, added to static portion 222 of template space 220 (e.g., the templates are permanently added to the memory and are not deleted or changed unless the device is reset and another enrollment process takes place). In some embodiments, static portion 222 includes a certain number of static templates 216. For example, for the embodiment of template space 220 depicted in FIG. 6, six templates may be allowed in static portion 222. After the enrollment process and static templates 216 are added to static portion 222, additional dynamic templates 226 may be added to dynamic portion 224 of template space 220 (e.g., a portion from which templates may be added and deleted without a device reset being needed). -
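The split between a fixed static portion and a runtime-modifiable dynamic portion described above can be sketched as a small data structure. The six static slots follow the depicted embodiment; the dynamic capacity and the drop-oldest eviction policy are assumptions (the patent describes its own replacement rules later in the text).

```python
# A minimal sketch of template space 220: a static portion fixed at
# enrollment and a bounded dynamic portion that can gain and lose
# templates during operation. Capacities and the eviction policy are
# assumptions for illustration.

class TemplateSpace:
    def __init__(self, static_slots: int = 6, dynamic_slots: int = 4):
        self.static_slots = static_slots
        self.dynamic_slots = dynamic_slots
        self.static = []    # enrollment templates; fixed until re-enrollment
        self.dynamic = []   # templates added/removed during operation

    def add_static(self, template) -> None:
        if len(self.static) >= self.static_slots:
            raise ValueError("static portion is full")
        self.static.append(template)

    def add_dynamic(self, template) -> None:
        # When the dynamic portion is full, evict the oldest template
        # to make room for the new one.
        if len(self.dynamic) >= self.dynamic_slots:
            self.dynamic.pop(0)
        self.dynamic.append(template)

    def all_templates(self):
        """Templates used together during facial recognition."""
        return self.static + self.dynamic
```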
Static templates 216 may thus be enrollment templates (or reference templates) generated by enrollment process 200. After enrollment process 200 is completed, a selected number of static templates 216 are stored in static portion 222 of template space 220. The number of static templates 216 stored in static portion 222 after enrollment process 200 may vary depending on, for example, the number of different feature vectors obtained during the enrollment process, which may be based on the number of images selected to be suitable for use as template images, or a desired number of templates for the device. After enrollment process 200, static templates 216 include feature vectors 210 (e.g., the enrollment or reference feature vectors) that can be used for facial recognition of the authorized user. Template space 220 may then be used in a facial recognition authentication process. -
FIG. 7 depicts a flowchart of an embodiment of facial recognition authentication process 250. Process 250 may be used to authenticate a user as an authorized user of device 100 using facial recognition of the user. Authentication of the authorized user may allow the user to access and use device 100 (e.g., unlock the device) and/or have access to a selected functionality of the device (e.g., unlocking a function of an application running on the device, payment systems (i.e., making a payment), access to personal data, expanded view of notifications, etc.). In certain embodiments, process 250 is used as a primary biometric authentication process for device 100 (after enrollment of the authorized user). In some embodiments, process 250 is used as an authentication process in addition to another authentication process (e.g., fingerprint authentication, another biometric authentication, passcode entry, password entry, and/or pattern entry). In some embodiments, another authentication process (e.g., passcode entry, pattern entry, other biometric authentication) may be used to access device 100 if the user fails to be authenticated using process 250. - In 252,
camera 102 captures an image of the face of the user attempting to be authenticated for access to device 100 (e.g., the camera captures an “unlock attempt” image of the user). It is to be understood that the unlock attempt image may be a single image of the face of the user (e.g., a single flood IR image or single depth map image) or the unlock attempt image may be a series of several images of the face of the user taken over a short period of time (e.g., one second or less). In some embodiments, the series of several images of the face of the user includes pairs of flood IR images and depth map images (e.g., pairs of consecutive flood IR and depth map images). In some implementations, the unlock attempt image may be a composite of several images of the user illuminated by the flood illuminator and the speckle pattern illuminator. -
Camera 102 may capture the unlock attempt image in response to a prompt by the user. For example, the unlock attempt image may be captured when the user attempts to access device 100 by pressing a button (e.g., a home button or virtual button) on device 100, by moving the device into a selected position relative to the user's face (e.g., the user moves the device such that the camera is pointed directly at the user's face), and/or by making a specific gesture or movement with respect to the device. It is to be understood that, as described herein, unlock attempt images may include either flood IR images or depth map images, or a combination thereof. Further, the unlock attempt images may be processed in association with their corresponding template (e.g., flood IR images with a template for flood IR enrollment images) independently or in combination as needed. - In 254, the unlock attempt image is encoded to define the facial features of the user as one or more feature vectors in the feature space. In some embodiments, one feature vector is defined for the unlock attempt image. In some embodiments, more than one feature vector is defined for the unlock attempt image. Unlock feature vector(s) 256 may be the output of the encoding of the unlock attempt image in 254.
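One way to realize the comparison of unlock feature vectors against stored templates, described in the surrounding passage, is distance-based: the smallest distance between the unlock vector and any template vector is mapped to a similarity score. Real implementations may instead use trained classifiers; the mapping and vector values below are assumptions for illustration.

```python
# A distance-based sketch of obtaining a matching score: compute the
# Euclidean distance from the unlock feature vector to every template
# vector, take the best (smallest) distance, and map it to a similarity
# in (0, 1] so that closer vectors yield higher scores.
import math

def euclidean(a, b):
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def matching_score(unlock_vec, template_vecs):
    """Higher score means the unlock vector is closer to the templates."""
    best = min(euclidean(unlock_vec, t) for t in template_vecs)
    return 1.0 / (1.0 + best)
```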
- In certain embodiments, in 258, unlock feature vector(s) 256 are compared to feature vectors in the templates of
template space 220 to get matching score 260 for the unlock attempt image. Matching score 260 may be a score of the differences between feature vector(s) 256 and feature vectors in template space 220 (e.g., feature vectors in static templates 216 and/or other dynamic templates 226 added to the template space as described herein). The closer (e.g., the less distance or fewer differences) that feature vector(s) 256 and the feature vectors in template space 220 are, the higher matching score 260 may be. For example, as shown in FIG. 5, feature vector 256A (open diamond) is closer to feature vectors 210 than feature vector 256B (open diamond) (e.g., feature vector 256B is a further outlier than feature vector 256A). Thus, feature vector 256A would have a higher matching score than feature vector 256B. As feature vector 256B is further away from feature vectors 210 than feature vector 256A, the lower matching score for feature vector 256B means less confidence that the face in the unlock attempt image associated with feature vector 256B is the face of the authorized user from enrollment process 200. - In some embodiments, comparing feature vector(s) 256 and templates from
template space 220 to get matching score 260 includes using one or more classifiers or a classification-enabled network to classify and evaluate the differences between feature vector(s) 256 and templates from template space 220. Examples of different classifiers that may be used include, but are not limited to, linear, piecewise linear, and nonlinear classifiers, support vector machines, and neural network classifiers. In some embodiments, matching score 260 is assessed using distance scores between feature vector(s) 256 and templates from template space 220. - In 262, matching
score 260 is compared to unlock threshold 264 for device 100. Unlock threshold 264 may represent a minimum difference (e.g., distance in the feature space) in features (as defined by feature vectors) between the face of the authorized user and the face of the user in the unlock attempt image that device 100 requires in order to unlock the device (or unlock a feature on the device). For example, unlock threshold 264 may be a threshold value that determines whether the unlock feature vectors (e.g., feature vectors 256) are similar enough (e.g., close enough) to the templates associated with the authorized user's face (e.g., static templates 216 in template space 220). As a further example, unlock threshold 264 may be represented by circle 265 in feature space 212, depicted in FIG. 5. As shown in FIG. 5, feature vector 256A is inside circle 265 and thus feature vector 256A would have matching score 260 above unlock threshold 264. Feature vector 256B, however, is outside circle 265 and thus feature vector 256B would have matching score 260 below unlock threshold 264. In certain embodiments, unlock threshold 264 is set during manufacturing and/or by the firmware of device 100. In some embodiments, unlock threshold 264 is updated (e.g., adjusted) by device 100 during operation of the device as described herein. - As shown in
FIG. 7, in 262, if matching score 260 is above unlock threshold 264 (i.e., the user's face in the unlock attempt image substantially matches the face of the authorized user), the user in the unlock attempt image is authenticated as the authorized user of device 100 and the device is unlocked in 266. In certain embodiments, after device 100 is unlocked in 266, unlock feature vectors 256 and matching score 260 are provided to first template update process 300, shown in FIG. 8, which may add or replace templates in template space 220. In 262, if matching score 260 is below unlock threshold 264 (e.g., not equal to or above the unlock threshold), then device 100 is not unlocked in 268. It should be noted that device 100 may be either locked or unlocked if matching score 260 is equal to unlock threshold 264, depending on a desired setting for the unlock threshold (e.g., tighter or looser restrictions). Additionally, either option for an equal matching score comparison may also be applied as desired for other embodiments described herein. - In some embodiments, in 270, a number of unlock attempts is counted (e.g., the number of attempts to unlock
device 100 with a different unlock attempt image captured in 252). If the number of unlock attempts in 270 is below a selected value (e.g., a threshold), then process 250 may be run again with another unlock attempt image (e.g., a new image of the user is captured, such as a flood IR image or a depth map image). In some implementations, device 100 automatically captures the new image of the user's face without prompting the user. In some embodiments, the user attempting to unlock device 100 may have additional image(s) of his/her face captured by camera 102. - If the number of unlock attempts is above the selected value, then
device 100 is locked from further attempts to use facial authentication in 272. In some embodiments, when the device is locked in 272, an error message may be displayed (e.g., on display 108) indicating that facial recognition authentication process 250 has failed and/or the desired operation of device 100 is restricted or prevented from being performed. Device 100 may be locked from further attempts to use facial authentication in 272 for a specified period of time and/or until another authentication protocol is used to unlock the device. For example, passcode unlock 274 may be used to unlock device 100. Passcode unlock 274 may include using a passcode, a password, pattern entry, a different form of biometric authentication, or another authentication protocol to unlock device 100. In some embodiments, passcode unlock 274 includes providing a “use passcode/password/pattern” affordance that, when selected, causes display of a passcode/password/pattern entry user interface, or a “use fingerprint” prompt that, when displayed, prompts the user to place a finger on a fingerprint sensor for the device. In some embodiments, after device 100 is unlocked using the passcode in 274, unlock feature vectors 256 and matching score 260 are provided to second template update process 400, shown in FIG. 11. -
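The retry limit and passcode fallback in 270-274 above can be sketched as a small control loop. The limit of five attempts and the threshold value are assumptions, not values from the patent.

```python
# A sketch of the unlock-attempt counting and fallback described above:
# facial authentication is retried until a selected attempt limit is
# reached, after which another protocol (e.g., passcode) is required.
# The attempt limit and threshold here are illustrative assumptions.

def try_unlock(scores, unlock_threshold=0.8, max_attempts=5):
    """Return ('unlocked', attempt) on success, else ('passcode_required', n)."""
    for attempt, score in enumerate(scores, start=1):
        if score >= unlock_threshold:
            return ("unlocked", attempt)
        if attempt >= max_attempts:
            break
    return ("passcode_required", min(len(scores), max_attempts))
```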
FIG. 8 depicts a flowchart of an embodiment of first template update process 300. Process 300 may be used to update template space 220 (shown in FIG. 6) with one or more additional dynamic templates 226 based on feature vector(s) 256 from process 250. Process 300 may be used to update template space 220 for gradual changes in the appearance of the authorized user. For example, process 300 may update template space 220 for gradual changes in hair (e.g., hair color, hair length, and/or hair style), weight gain, weight loss, changes in glasses worn, or small disfigurement changes (e.g., black eyes, scars, etc.). Updating template space 220 for gradual changes allows the authorized user to continue to access device 100 using facial recognition authentication process 250. -
Process 300 may begin by assessing, in 302, if matching score 260 is above threshold 304. Threshold 304 may be a threshold score for determining if feature vector(s) 256 are similar (e.g., close) enough to feature vectors 210 (from static templates 216) that feature vector(s) 256 may potentially be used as another template (e.g., the threshold score may determine if feature vectors 256 are within a certain distance of feature vectors 210). In certain embodiments, threshold 304 is greater than unlock threshold 264 (e.g., threshold 304 requires a higher matching score than unlock threshold 264). Thus, the threshold for feature vector(s) 256 becoming a template may be stricter than the threshold for unlocking the device. Threshold 304 may be set during manufacturing and/or by the firmware of device 100. Threshold 304 may be updated (e.g., adjusted) by device 100 during operation of the device as described herein. - In some embodiments, if matching
score 260 is below threshold 304, then process 300 is stopped and feature vector(s) 256 are deleted from device 100. In some embodiments, if matching score 260 is below threshold 304, then process 300 continues with template update sub-process 300A, described in FIG. 10. If matching score 260 is above threshold 304, then process 300 is continued. In some embodiments, after assessing 302, one or more qualities in the unlock attempt image are assessed in 306. For example, pose (e.g., pitch, yaw, and roll of the face), occlusion, attention, field of view, and/or distance in the unlock attempt image may be assessed in 306. Pose and/or occlusion in the unlock attempt image may be assessed using the landmark and/or occlusion maps described herein. In 308, if suitable qualifications are not met, then process 300 may be stopped. In certain embodiments, meeting suitable qualifications includes meeting selected criteria in the images for one or more of the assessed qualities described above. For example, selected criteria may include, but not be limited to: the face of the user being in the field of view of the camera; a pose of the user being proper (e.g., the user's face is not turned too far in any direction from the camera, i.e., the pitch, yaw, and/or roll of the face are not above certain levels); a distance to the face of the user being within a certain range; the face of the user having occlusion below a minimum value (e.g., the user's face is not occluded (blocked) more than a minimum amount by another object); the user paying attention to the camera (e.g., eyes of the user looking at the camera); eyes of the user not being closed; and proper lighting (illumination) in the image. In some embodiments, assessing qualities in 306 and 308 may occur at a different location within process 300.
For example, assessing qualities in 306 and 308 may occur after comparing matching score 322 to threshold 326 in 324, or after comparing confidence score 332 to threshold 334 in 336, described below. - If suitable qualifications are met in 308, then process 300 continues, in 310, with storing feature vector(s) 256 in a backup space in the memory of
device 100. The backup space in the memory may be, for example, a second space or temporary space in the memory that includes readable/writable memory and/or short-term memory. Feature vector(s) 256 may be stored in the memory as temporary template 312. - In certain embodiments, after
temporary template 312 is stored in the backup space in the memory, process 300 continues by comparing the temporary template to feature vectors for additional unlock attempt images captured by device 100 for the authorized user. In 314, additional unlock attempt images are captured of the user (or users, if unauthorized access is attempted) during additional (future) unlock attempts of device 100. The features of the face of the user in the additional unlock attempt images are encoded in 316 to generate feature vectors 318. In 320, feature vectors 318 are compared to temporary template 312 to get matching score 322. -
Matching score 322 may then be compared in 324 to threshold 326. In some embodiments, threshold 326 is unlock threshold 264. In some embodiments, threshold 326 is threshold 304. If matching score 322 is above threshold 326 in 324, then a successful attempt is counted in 328. If matching score 322 is below threshold 326 in 324, then an unsuccessful attempt is counted in 330. Counts of successful attempts in 328 and unsuccessful attempts in 330 may continue until a desired number of unlock attempts is made (e.g., a desired number of comparisons of matching score 322 and threshold 326). Once the desired number of attempts is made, the number of successful attempts in 328 out of the total number of unlock attempts (e.g., the sum of counts 328 and 330) may be used to assess confidence score 332 for temporary template 312. For example, there may be 45 successful attempts out of 50 total unlock attempts, so confidence score 332 is 45/50, or 90%. Confidence score 332 may be used to assess whether or not template 312 is added as dynamic template 226 to template space 220, shown in FIG. 6. - As described above, initially after enrollment, the enrollment templates (e.g.,
static templates 216, shown in FIG. 6) are added to static portion 222 of template space 220. After the enrollment process and the static templates 216 are added to static portion 222, process 300, shown in FIG. 8, may be used to add additional templates to template space 220. Additional templates may be added to dynamic portion 224 as dynamic templates 226 (e.g., a portion from which templates may be added and deleted without a device reset being needed). Dynamic templates 226 may be used in combination with static templates 216 in template space 220 for facial recognition authentication process 250, as shown in FIG. 7. - In certain embodiments,
temporary templates 312 generated by process 300, shown in FIG. 8, are added to dynamic portion 224 as dynamic templates 226, shown in FIG. 6, when confidence score 332 for temporary template 312 is higher than a lowest confidence score of static templates 216 in static portion 222. Confidence score 334 may be equal to a lowest confidence score for static templates 216 in static portion 222 assessed during the same unlock attempts used to assess confidence score 332 for temporary template 312 (e.g., the confidence score for the template with the lowest number of successful unlock attempts during the same unlock attempts using temporary template 312). Confidence score 334 may be assessed using the same threshold used for confidence score 332 (e.g., threshold 326). - In certain embodiments, if, in 336,
confidence score 332 is greater than confidence score 334, then temporary template 312 is added, in 338, as dynamic template 226 in dynamic portion 224. For example, if temporary template 312 has 45 successful unlock attempts out of 50 total unlock attempts while one static template 216 only has 40 successful unlock attempts out of the same 50 total unlock attempts, then temporary template 312 may be added to dynamic portion 224 as one of dynamic templates 226. If, in 336, confidence score 332 is less than confidence score 334, then temporary template 312 is ignored or deleted in 340. Temporary templates 312 may be added until a maximum number of allowed dynamic templates 226 are stored in dynamic portion 224. - Once
dynamic portion 224 reaches its maximum number of dynamic templates 226, temporary template 312 may replace one of dynamic templates 226 in 338. For example, temporary template 312 may replace one of dynamic templates 226 if the temporary template is less of an outlier than one of dynamic templates 226. In certain embodiments, statistical analysis of the feature vectors that represent dynamic templates 226 and temporary template 312 is used to assess if temporary template 312 is less of an outlier than one of dynamic templates 226. Statistical analysis may include, for example, classification algorithms operated on feature vectors for the templates.
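The confidence comparison in 336 can be sketched as follows; the function names are assumptions, and the 45/50 and 40/50 counts mirror the example above:

```python
def confidence_score(successful_attempts: int, total_attempts: int) -> float:
    """Confidence score 332/334: fraction of unlock attempts matched
    (e.g., 45 of 50 attempts -> 0.9, i.e., 90%)."""
    return successful_attempts / total_attempts

def should_add_as_dynamic_template(temp_successes: int,
                                   lowest_static_successes: int,
                                   total_attempts: int) -> bool:
    """Step 336: keep the temporary template only if its confidence over
    the same unlock attempts beats the weakest static template's
    confidence; otherwise it is ignored or deleted (340)."""
    temp_conf = confidence_score(temp_successes, total_attempts)
    static_conf = confidence_score(lowest_static_successes, total_attempts)
    return temp_conf > static_conf
```

Note that both scores are assessed over the same set of unlock attempts and against the same threshold, so the comparison reduces to comparing success counts.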
- FIG. 9 depicts a representation of an embodiment of template space 220 represented as a feature space. In the feature space depiction of template space 220, static templates 216, dynamic templates 226, and temporary template 312 are represented by feature vectors. For example, static templates 216 are represented by circles, dynamic templates 226 are represented by diamonds, and temporary template 312 is represented by a star. In certain embodiments, as described above, static templates 216 are not allowed to be replaced by temporary template 312. Thus, if dynamic portion 224 has reached its maximum number of dynamic templates 226, temporary template 312 may replace one of dynamic templates 226 if temporary template 312 is less of an outlier than one of dynamic templates 226. - Statistical analysis of the feature vectors in the feature space correlating to
template space 220 may generate a circle (e.g., circle 342) that most closely defines a maximum number of the feature vectors. As shown in FIG. 9, circle 342 defines the feature vector for dynamic template 226′ as an outlier of the circle. The feature vector for dynamic template 226′ is more of an outlier than the feature vector for temporary template 312. Thus, temporary template 312 may replace dynamic template 226′ in template space 220. If temporary template 312 had been more of an outlier than each of dynamic templates 226, then the temporary template may not have replaced any one of dynamic templates 226. - In certain embodiments, when
temporary template 312 replaces dynamic template 226′ in template space 220, one or more thresholds for device 100 may be recalculated. As temporary template 312 is less of an outlier than dynamic template 226′, recalculation of the threshold(s) may further restrict the thresholds (e.g., raise the threshold for matching scores to require closer matching). In some embodiments, the unlock threshold (e.g., unlock threshold 264, shown in FIG. 7) is made stricter when temporary template 312 replaces dynamic template 226′ in template space 220. In some embodiments, a template update threshold (e.g., threshold 304, shown in FIG. 8) is made stricter when temporary template 312 replaces dynamic template 226′ in template space 220.
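One way to realize the outlier comparison of FIG. 9 is to measure each template's distance from the center of the combined feature vectors: a vector farther from the center is "more of an outlier." This is only an illustrative stand-in for the statistical analysis (e.g., classification algorithms) the text leaves open, and all names are assumptions:

```python
import math

def _distance(a, b):
    """Euclidean distance between two feature vectors."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def _centroid(vectors):
    """Mean feature vector of the template space."""
    n = len(vectors)
    return [sum(v[i] for v in vectors) / n for i in range(len(vectors[0]))]

def try_replace_outlier(static_vecs, dynamic_vecs, temp_vec):
    """Replace the most-outlying dynamic template with temp_vec, but only
    if temp_vec is less of an outlier than that template (static templates
    are never replaced). Returns (new_dynamic_vecs, replaced_flag)."""
    center = _centroid(static_vecs + dynamic_vecs + [temp_vec])
    worst_i = max(range(len(dynamic_vecs)),
                  key=lambda i: _distance(dynamic_vecs[i], center))
    if _distance(temp_vec, center) < _distance(dynamic_vecs[worst_i], center):
        out = list(dynamic_vecs)
        out[worst_i] = temp_vec
        return out, True
    return list(dynamic_vecs), False
```

Real feature vectors are high-dimensional; two-dimensional vectors are used here only so the geometry of FIG. 9 is easy to check by hand.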
- FIG. 10 depicts a flowchart of an embodiment of template update sub-process 300A. As described above, sub-process 300A may proceed if matching score 260 is below threshold 304 but above unlock threshold 264. Images with matching scores 260 in such a range (above unlock threshold 264 and below threshold 304) may have more uncertainty in matching than images that are above threshold 304 (while still being able to unlock device 100). Thus, these more uncertain images may be processed using sub-process 300A. - In
sub-process 300A, one or more qualities in the unlock attempt image are assessed in 350. Assessing qualities of the unlock attempt image in 350 may be substantially similar to assessing qualities in 306 and 308, as shown in FIG. 8. As shown in FIG. 10, if the unlock attempt image passes the assessment of qualities (e.g., meets qualifications) in 350, then a determination may be made in 352 if there is space (e.g., room) in the backup space used for temporary templates 312 to store another temporary template (e.g., a determination if a maximum number of temporary templates 312 are stored in the backup space). - If there is no room in the backup space (“N”), then the unlock attempt image (and its corresponding feature vectors) may be subject to delete
policy 354, as shown in FIG. 10. In delete policy 354, the feature vector(s) in the backup space (e.g., the space for temporary templates 312) that have selected redundancy (e.g., are most redundant) to the existing feature vectors may be replaced in the backup space. - If there is room in the backup space (“Y”), then the feature vectors for the unlock attempt image are added to the backup space as a temporary template (e.g., temporary template 312) in 356. Once the temporary template from sub-process 300A is added to the backup space in 356, the temporary template may be processed substantially as temporary template 312 (e.g., compared to additional unlock attempt images as shown in
FIG. 8). In certain embodiments, the temporary template from sub-process 300A is used as a template (e.g., temporary template 312 and/or dynamic template 226) for a selected amount of time. For example, because the temporary template from sub-process 300A is originally added with a higher uncertainty than other templates, the amount of time allowed for use of the temporary template from sub-process 300A may be limited (e.g., the temporary template has a limited lifetime). In some embodiments, the selected amount of time is a maximum number of successful unlock attempts using the temporary template from sub-process 300A. - As described above, first
template update process 300 may be used to update a user's enrollment profile (e.g., templates in the template space) when device 100 is unlocked or accessed using facial recognition authentication process 250. First template update process 300 may be used, for example, to update a user's enrollment profile in response to gradual changes in the user's appearance (e.g., weight gain/loss). - In some embodiments, however, facial features of an authorized user (e.g., the user's facial appearance) may have changed drastically, or at least to a large enough extent, that the user may encounter difficulty unlocking or accessing features (e.g., operations) on
device 100 using facial recognition authentication process 250, depicted in FIG. 7. Drastic or large-extent changes in the user's facial appearance may include, for example, shaving of a beard or mustache, getting a large scar or other disfigurement to the face, making drastic changes in makeup, or making drastic hair changes. In some cases, the user may also encounter difficulty in unlocking/accessing device 100 using facial recognition authentication process 250 if there was an error during the enrollment process and/or there are large differences between the user's environment during the unlock attempt and the time of enrollment. Encountering difficulty in unlocking device 100 using facial recognition authentication process 250 may be a frustrating experience for the user. When difficulty in unlocking device 100 using facial recognition authentication process 250 occurs due to the above-described changes/issues, a second template update process (e.g., second template update process 400, described below) may be used to, at least temporarily, allow the user to unlock/access the device using the facial recognition authentication process, despite the issues/changes, after verification of the user's identity using a second authentication protocol. - As shown in
FIG. 7, the user may attempt a number of unlock attempts unsuccessfully using facial recognition authentication process 250 until the number of unsuccessful unlock attempts reaches the selected value and device 100 is locked from further attempts to use the facial recognition authentication process. At such time, the user may be presented with one or more options for proceeding with a different type of authentication to unlock or access features on device 100 (e.g., the user is presented options for proceeding with a second authentication protocol). Presenting the options may include, for example, displaying one or more options on display 108 of device 100 and prompting the user through audible and/or visual communication to select one of the displayed options to proceed with unlocking the device or accessing features on the device. The user may then proceed with unlocking/accessing device 100 using the selected option and following additional audible and/or visual prompts as needed. After successfully being authenticated using the selected option, the user's initial request for unlocking/accessing device 100 may be granted. Additionally, after the user is successfully authenticated using the selected option, device 100 may, at least temporarily, update the user's enrollment profile (e.g., using second template update process 400, described below) to allow the user to be able to unlock/access the device in future unlock attempts using facial recognition authentication process 250 despite the changes in the user's facial appearance that previously prevented the user from using the facial recognition authentication process to unlock/access the device. Thus, the user, by successfully completing authentication using the selected option, may automatically be able to access device 100 using facial recognition authentication process 250 in future unlock attempts for at least a short period of time.
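The fallback flow above can be sketched as a small state machine. The attempt limit of 5 and the method names are assumptions, not values from the disclosure:

```python
class UnlockSession:
    """Tracks failed facial recognition attempts and offers a second
    authentication protocol (here, a passcode) once locked out."""

    def __init__(self, max_facial_attempts: int = 5):
        self.max_facial_attempts = max_facial_attempts
        self.failed_attempts = 0
        self.facial_locked = False

    def record_failed_facial_attempt(self) -> None:
        """Count one unsuccessful facial recognition unlock attempt; lock
        facial recognition once the selected value is reached."""
        self.failed_attempts += 1
        if self.failed_attempts >= self.max_facial_attempts:
            self.facial_locked = True

    def available_methods(self) -> list:
        """Facial recognition is offered until locked; the passcode is
        always offered as the second authentication protocol."""
        methods = ["passcode"]
        if not self.facial_locked:
            methods.insert(0, "facial_recognition")
        return methods
```

In the text, a successful passcode entry at this point both unlocks the device and may trigger second template update process 400.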
- FIG. 11 depicts a flowchart of an embodiment of second template update process 400. Process 400 may be used when facial recognition authentication process 250 is unable to unlock device 100 but the device is unlocked using a passcode or other authentication protocol, as shown in FIG. 7. In some embodiments, process 400 may be used when device 100 is unlocked using the passcode immediately after the unlock attempt fails or within a specified time frame after the unlock attempt fails (e.g., in temporal proximity to the unlock attempt). In certain embodiments, process 400 is used to update template space 220 when facial features of the authorized user have changed to an extent that prevents feature vectors generated from an unlock attempt image (e.g., feature vectors 256) from being close enough (e.g., within the unlock threshold distance) to static templates 216 and/or dynamic templates 226 to allow device 100 to be unlocked using facial recognition authentication process 250, shown in FIG. 7. For example, process 400 may be used for feature vector 256B, which is depicted outside circle 265 (the unlock threshold circle) in FIG. 5. Possible causes for the user not being able to unlock device 100 using facial recognition authentication process 250 include, but are not limited to, the authorized user shaving a beard or mustache, getting a large scar or other disfigurement to the face, making large changes in makeup, making a drastic hair change, or having another severe change in a facial feature. These changes may be immediate changes or “step changes” in the facial features of the authorized user that do not allow first template update process 300 to update template space 220 gradually over time. - Second
template update process 400 may begin by assessing 402 if matching score 260 is above threshold 404. Threshold 404 may be a threshold score for determining if feature vector(s) 256 are similar (e.g., close) enough to feature vectors 210 (from static templates 216) that feature vector(s) 256 may potentially be used as another template. In certain embodiments, threshold 404 for process 400 is below unlock threshold 264. Threshold 404 may be below unlock threshold 264 (e.g., more distance allowed between feature vectors and the templates) because the passcode (or other authentication) has been entered prior to beginning process 400. Thus, the threshold for feature vector(s) 256 becoming a template in process 400 may be less strict than the threshold for unlocking the device and the threshold for process 300, shown in FIG. 8. Threshold 404 may, however, be set at a value that sets a maximum allowable distance between feature vectors 256 for the unlock attempt image and feature vectors for template space 220. Setting the maximum allowable distance may be used to prevent a user that is not the authorized user, but has the passcode for device 100, from being enabled for facial recognition authentication on the device. Threshold 404 may be set during manufacturing and/or by the firmware of device 100. Threshold 404 may be updated (e.g., adjusted) by device 100 during operation of the device as described herein (e.g., after templates are added or replaced in template space 220).
- Process 400 may be stopped and feature vector(s) 256 deleted from device 100 if matching score 260 is below threshold 404. If matching score 260 is above threshold 404, then process 400 is continued. In some embodiments, after assessing 402, one or more qualities in the unlock attempt image are assessed in 406. For example, pose (e.g., pitch, yaw, and roll of the face), occlusion, attention, field of view, and/or distance in the unlock attempt image may be assessed in 406. In some embodiments, pose and/or occlusion in the unlock attempt image are assessed using the landmark and/or occlusion maps described herein. In 408, if suitable qualifications (as described above) are not met, then process 400 may be stopped. - If suitable qualifications are met in 408, then process 400 continues in 410, with storing feature vector(s) 256 in a backup space in the memory of
device 100. The backup space in the memory for process 400 may be a different backup space than used for process 300. For example, the backup space in the memory for process 400 may be a temporary space in the memory that includes readable/writable memory partitioned from the backup space used for process 300. Feature vector(s) 256 may be stored in the memory as temporary template 412. - In certain embodiments, after
temporary template 412 is stored in the backup space, temporary template 412 may be compared to feature vectors for additional images from failed facial recognition authentication unlock attempts of device 100. For example, in process 400, additional failed unlock attempt images may be captured in 414. If the correct passcode is entered in 416, then feature vectors for the images captured in 414 may be encoded in 418 to generate feature vectors 420. - In certain embodiments, in 422,
feature vectors 420 are compared to the feature vector(s) for temporary template 412. Comparison of feature vectors 420 and the feature vector(s) for temporary template 412 may provide matching score 424. Matching score 424 may be compared in 426 to threshold 428. Threshold 428 may be, for example, a similarity threshold or a threshold that defines at least a minimum level of matching between the feature vector(s) for temporary template 412 and feature vectors 420 obtained from the additional images from failed facial recognition authentication attempts that are followed by entering of the passcode for device 100. Thus, threshold 428 may be set at a value that ensures at least a minimum amount of probability that the change in the user's features that caused the failed unlock attempt and generated temporary template 412 is still present in the images from additional failed unlock attempts using facial recognition authentication. - If matching
score 424 is above threshold 428 in 426, then a successful match is counted in 430. If matching score 424 is below threshold 428 in 426, then an unsuccessful match is counted in 432. Counts of successful matches in 430 and unsuccessful matches in 432 may continue until a desired number of failed unlock attempts is made (e.g., a desired number of comparisons of matching score 424 and threshold 428). Once the desired number of attempts is made, the number of successful matches in 430 out of the total number of failed unlock attempts (e.g., the sum of counts 430 and 432) may be used to assess confidence score 434 for temporary template 412. For example, there may be 18 successful matches (e.g., comparisons) of matching score 424 and threshold 428 out of 20 total failed unlock attempts. Confidence score 434 may be used to assess whether or not template 412 is added as dynamic template 226 to template space 220, shown in FIG. 6. - In some embodiments, it may be assumed that if a step change occurs in the facial features of the authorized user, the step change may remain for a number of successive unlock attempts using facial recognition authentication. For example, if the user shaved a beard, then the step change should remain for at least some length of time (e.g., at least a week). In such embodiments, if a successful unlock attempt (or a desired number of successful unlock attempts) using facial recognition authentication occurs before a selected number of successive unlock attempts is reached (e.g., 10 or 15 unlock attempts), then
temporary template 412 may be deleted from the backup space in the memory. In some embodiments, the assumption that the step change may remain for a number of successive unlock attempts may not apply (e.g., if the user's step change was due to temporary application of makeup). - In certain embodiments, in 436,
confidence score 434 is compared against threshold 438 to assess if the confidence score is greater than the threshold. Threshold 438 may be a threshold selected to ensure a minimum number of successful comparisons of matching score 424 and threshold 428 are reached before allowing template 412 to be added to template space 220. In 436, if confidence score 434 is greater than threshold 438, then, in 440, temporary template 412 may be added to template space 220 or temporary template 412 may replace a template in template space 220 (e.g., replace one of dynamic templates 226). If confidence score 434 is less than threshold 438, then temporary template 412 may be ignored or deleted in 442. - As described above,
temporary template 412 generated by process 400 may be added to dynamic portion 224 of template space 220 as one of dynamic templates 226, shown in FIG. 6. For process 400, shown in FIG. 11, the passcode (or other authentication) has been used to verify that temporary template 412 is for the authorized user. Thus, in certain embodiments, temporary template 412 is added to template space 220 in 440 without a need for comparison to dynamic templates 226 already in dynamic portion 224. If the maximum number of allowed dynamic templates 226 in dynamic portion 224 has not been reached, then temporary template 412 is added to the dynamic portion as one of dynamic templates 226. - If the maximum number of allowed
dynamic templates 226 in dynamic portion 224 has been reached, then temporary template 412 may replace one of dynamic templates 226 in the dynamic portion. As the passcode (or other authentication) has been used to verify temporary template 412 is for the authorized user, the temporary template may replace one of dynamic templates 226 in dynamic portion 224 even if the temporary template is more of an outlier than each of dynamic templates 226. In certain embodiments, temporary template 412 replaces the largest outlier of dynamic templates 226 regardless of how much of an outlier the temporary template itself is. In some embodiments, temporary template 412 may replace a dynamic template that is redundant (e.g., most redundant) to the existing dynamic templates even if the temporary template is more of an outlier than each of the dynamic templates.
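The "most redundant" replacement variant can be sketched by treating the dynamic template whose nearest neighbor is closest as carrying the least new information. This interpretation and the helper names are assumptions about one plausible realization:

```python
import math

def most_redundant_index(vectors) -> int:
    """Index of the vector with the smallest distance to any other vector
    in the set (i.e., the most redundant template)."""
    def dist(a, b):
        return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))
    best_i, best_d = 0, float("inf")
    for i, v in enumerate(vectors):
        nearest = min(dist(v, w) for j, w in enumerate(vectors) if j != i)
        if nearest < best_d:
            best_i, best_d = i, nearest
    return best_i

def replace_most_redundant(dynamic_vecs, temp_vec):
    """Replace the most redundant dynamic template with the (already
    passcode-verified) temporary template, with no outlier check on the
    temporary template itself."""
    out = list(dynamic_vecs)
    out[most_redundant_index(out)] = temp_vec
    return out
```

Unlike the process 300 replacement rule, no comparison against the temporary template's own outlier position is made here, because the passcode has already verified the user.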
- FIG. 12 depicts a representation of an embodiment of template space 220 represented as a feature space with a feature vector for temporary template 412. In the feature space depiction of template space 220 in FIG. 12, static templates 216, dynamic templates 226, and temporary template 412 are represented by feature vectors. Static templates 216 are represented by circles, dynamic templates 226 are represented by diamonds, and temporary template 412 is represented by a star. As described above, static templates 216 may not be replaced by temporary template 412. Thus, if dynamic portion 224 has reached its maximum number of dynamic templates 226, temporary template 412 may replace one of dynamic templates 226. - Statistical analysis of the feature vectors in the feature space correlating to
template space 220 may generate a circle (e.g., circle 444) that most closely defines a maximum number of the feature vectors. As shown in FIG. 12, the feature vector for dynamic template 226′ is the largest outlier of each of the feature vectors for dynamic templates 226. Thus, temporary template 412 may replace dynamic template 226′ in template space 220 regardless of the position of the feature vector for the temporary template. In the example depicted in FIG. 12, the addition of the feature vector for temporary template 412 shifts circle 444 towards the feature vector for temporary template 412 and may cause the feature vector for dynamic template 226′ to become the largest outlier of the circle. In some embodiments, when temporary template 412 replaces dynamic template 226′ in template space 220, one or more thresholds for device 100 may be recalculated. - In some embodiments, a temporary template (e.g., either
temporary template 312 or temporary template 412) may be used to unlock device 100 for a selected period of time while the temporary template is in the backup space of the memory (e.g., before the temporary template is added to template space 220). The temporary template may be used to unlock device 100 after the passcode (or other user authentication protocol) is used in combination with the temporary template. For example, for temporary template 412, the passcode has been entered to unlock device 100 before temporary template 412 is generated and stored in the backup space of the device memory. Temporary template 412 may then be used to allow unlocking of device 100 using facial recognition authentication for a selected time period (e.g., a few days or a week). After the selected time period expires, if temporary template 412 has not been added to template space 220, the user may be prompted for the passcode if facial recognition authentication of the user fails. - In some embodiments, multiple enrollment profiles are generated on
device 100. Multiple enrollment profiles may be generated, for example, to enroll multiple users on device 100 and/or to enroll multiple looks for a single user. Multiple looks for a single user may include looks that are substantially different and cannot be recognized using a single enrollment profile (e.g., the user wears lots of makeup or has other drastic changes at different times of the day/week). For example, a single user can execute the enrollment process a first time to create a first enrollment profile while wearing glasses and execute the enrollment process a second time to create a second enrollment profile while not wearing glasses. - In embodiments with multiple enrollment profiles,
image enrollment process 200 may be used to generate each enrollment profile as a separate enrollment profile on device 100. For example, process 200 may be used to create separate templates of enrollment images for each enrollment profile. The separate templates may be stored in different portions of the memory of device 100 (e.g., partitioned portions of the memory space used for storing the templates). - With multiple enrollment profiles stored in
device 100, facial recognition authentication process 250 may compare features in unlock attempt images to each of the different profiles (e.g., all the templates stored in memory). In certain embodiments, if a match is determined for any one of the enrollment profiles (e.g., the matching score is above the unlock threshold), then device 100 is unlocked. In some embodiments, if multiple enrollment profiles are stored on device 100, the unlock threshold is increased (e.g., the requirement for matching is made more strict). In some embodiments, when a new enrollment profile is generated, the amount the unlock threshold is increased is based on the distance in feature space between the feature vectors associated with the templates for the new enrollment profile and the feature vectors associated with templates in existing enrollment profile(s) (e.g., the more distance there is between feature vectors in the template for the new enrollment profile and feature vectors in existing enrollment profiles, the more the unlock threshold is increased). In some embodiments, the new unlock threshold may also be adjusted based on a match history of the existing enrollment profiles (e.g., the more matches in the history of the existing profiles, the stricter the threshold may be). - When multiple enrollment profiles are stored in
device 100, each enrollment profile may be associated with its own template update processes (e.g., each enrollment profile operates with its own first template update process 300 and second template update process 400). In embodiments when device 100 is unlocked with a match determined using facial recognition authentication process 250, the enrollment profile that is matched with the unlock attempt image in process 250 may be processed (e.g., updated) using its corresponding first template update process 300. If multiple enrollment profiles are determined to match with the unlock attempt image using facial recognition authentication process 250, then each of the matching enrollment profiles may be processed (e.g., updated) using its respective first template update process 300. In embodiments when device 100 is unlocked using a passcode (or another secondary authentication method) because facial recognition authentication process 250 could not determine a match, the enrollment profile that has feature vectors closest (e.g., least distance) to the feature vectors of the unlock attempt image may be processed (e.g., updated) using its corresponding second template update process 400. - In certain embodiments, one or more process steps described herein may be performed by one or more processors (e.g., a computer processor) executing instructions stored on a non-transitory computer-readable medium. For example,
process 200, process 250, process 300, and process 400, shown in FIGS. 4, 7, 8, and 11, may have one or more steps performed by one or more processors executing instructions stored as program instructions in a computer readable storage medium (e.g., a non-transitory computer readable storage medium). -
FIG. 13 depicts a block diagram of one embodiment of exemplary computer system 510. Exemplary computer system 510 may be used to implement one or more embodiments described herein. In some embodiments, computer system 510 is operable by a user to implement one or more embodiments described herein, such as process 200, process 250, process 300, and process 400, shown in FIGS. 4, 7, 8, and 11. In the embodiment of FIG. 13, computer system 510 includes processor 512, memory 514, and various peripheral devices 516. Processor 512 is coupled to memory 514 and peripheral devices 516. Processor 512 is configured to execute instructions, including the instructions for process 200, process 250, process 300, and/or process 400, which may be in software. In various embodiments, processor 512 may implement any desired instruction set (e.g., Intel Architecture-32 (IA-32, also known as x86), IA-32 with 64-bit extensions, x86-64, PowerPC, Sparc, MIPS, ARM, IA-64, etc.). In some embodiments, computer system 510 may include more than one processor. Moreover, processor 512 may include one or more processors or one or more processor cores. -
Processor 512 may be coupled to memory 514 and peripheral devices 516 in any desired fashion. For example, in some embodiments, processor 512 may be coupled to memory 514 and/or peripheral devices 516 via various interconnects. Alternatively or in addition, one or more bridge chips may be used to couple processor 512, memory 514, and peripheral devices 516. -
Memory 514 may comprise any type of memory system. For example, memory 514 may comprise DRAM, and more particularly double data rate (DDR) SDRAM, RDRAM, etc. A memory controller may be included to interface to memory 514, and/or processor 512 may include a memory controller. Memory 514 may store the instructions to be executed by processor 512 during use, data to be operated upon by the processor during use, etc. -
Peripheral devices 516 may represent any sort of hardware devices that may be included in computer system 510 or coupled thereto (e.g., storage devices, optionally including computer accessible storage medium 600, shown in FIG. 14, other input/output (I/O) devices such as video hardware, audio hardware, user interface devices, networking hardware, etc.). - Turning now to
FIG. 14, a block diagram of one embodiment of computer accessible storage medium 600 is shown, including one or more data structures representative of device 100 (depicted in FIG. 1) included in an integrated circuit design and one or more code sequences representative of process 200, process 250, process 300, and/or process 400 (shown in FIGS. 4, 7, 8, and 11). Each code sequence may include one or more instructions which, when executed by a processor in a computer, implement the operations described for the corresponding code sequence. Generally speaking, a computer accessible storage medium may include any storage media accessible by a computer during use to provide instructions and/or data to the computer. For example, a computer accessible storage medium may include non-transitory storage media such as magnetic or optical media, e.g., disk (fixed or removable), tape, CD-ROM, DVD-ROM, CD-R, CD-RW, DVD-R, DVD-RW, or Blu-ray. Storage media may further include volatile or non-volatile memory media such as RAM (e.g., synchronous dynamic RAM (SDRAM), Rambus DRAM (RDRAM), static RAM (SRAM), etc.), ROM, or Flash memory. The storage media may be physically included within the computer to which the storage media provides instructions/data. Alternatively, the storage media may be connected to the computer. For example, the storage media may be connected to the computer over a network or wireless link, such as network attached storage. The storage media may be connected through a peripheral interface such as the Universal Serial Bus (USB). Generally, computer accessible storage medium 600 may store data in a non-transitory manner, where non-transitory in this context may refer to not transmitting the instructions/data on a signal. For example, non-transitory storage may be volatile (and may lose the stored instructions/data in response to a power down) or non-volatile.
- Further modifications and alternative embodiments of various aspects of the embodiments described in this disclosure will be apparent to those skilled in the art in view of this description. Accordingly, this description is to be construed as illustrative only and is for the purpose of teaching those skilled in the art the general manner of carrying out the embodiments. It is to be understood that the forms of the embodiments shown and described herein are to be taken as the presently preferred embodiments. Elements and materials may be substituted for those illustrated and described herein, parts and processes may be reversed, and certain features of the embodiments may be utilized independently, all as would be apparent to one skilled in the art after having the benefit of this description. Changes may be made in the elements described herein without departing from the spirit and scope of the following claims.
Claims (21)
1-20. (canceled)
21. A method, comprising:
obtaining an image of a face of a user using a camera located on a device in response to an authentication attempt initiated by the user, the device comprising a computer processor and a memory;
encoding the image to generate a first feature vector representing one or more features of the user in the image;
authorizing the user in the image as an authenticated user of the device based on a matching score between the first feature vector and one or more reference templates stored in the memory being above a first threshold, wherein the one or more reference templates include at least one static template; and
storing the first feature vector as a dynamic template in the memory in response to the matching score being above a second threshold, wherein:
when a number of dynamic templates stored in the memory is below a predetermined number of dynamic templates allowed in the memory, the first feature vector is stored as a new dynamic template in the memory; and
when the number of dynamic templates stored in the memory is at the predetermined number of dynamic templates allowed in the memory, the first feature vector replaces an existing dynamic template in the memory when the first feature vector is a closer match to the one or more reference templates and existing dynamic templates in the memory as compared to the existing dynamic template being replaced.
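The update rule recited in claim 21 can be sketched as follows. This is a non-authoritative illustration, not the claimed implementation: the cosine-similarity scorer, the Euclidean outlier measure, and the constants `MAX_DYNAMIC` and `SECOND_THRESHOLD` are all assumptions chosen for the example.

```python
import math

MAX_DYNAMIC = 5          # hypothetical cap on stored dynamic templates
SECOND_THRESHOLD = 0.75  # hypothetical template-update threshold

def cosine(a, b):
    """Cosine similarity between two feature vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

def distance(a, b):
    """Euclidean distance between two feature vectors."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def mean_distance(vec, others):
    """Average distance from vec to the other templates (lower = closer match)."""
    return sum(distance(vec, o) for o in others) / len(others)

def try_update(vec, static_templates, dynamic_templates):
    """Claim-21-style update: store vec as a dynamic template, or replace the
    most outlying dynamic template when vec fits the template pool better."""
    score = max(cosine(vec, t) for t in static_templates + dynamic_templates)
    if score <= SECOND_THRESHOLD:
        return dynamic_templates          # not confident enough to learn from this image
    if len(dynamic_templates) < MAX_DYNAMIC:
        return dynamic_templates + [vec]  # room left: store as a new dynamic template
    pool = static_templates + dynamic_templates
    # Find the existing dynamic template that is the worst outlier in the pool
    worst = max(range(len(dynamic_templates)),
                key=lambda i: mean_distance(dynamic_templates[i],
                                            [t for t in pool if t is not dynamic_templates[i]]))
    rest = [t for t in pool if t is not dynamic_templates[worst]]
    if mean_distance(vec, rest) < mean_distance(dynamic_templates[worst], rest):
        updated = list(dynamic_templates)
        updated[worst] = vec              # vec is the closer match: replace the outlier
        return updated
    return dynamic_templates
```

In this reading, the second (update) threshold is applied to the same matching score used for authentication, and "closer match" is interpreted as a lower mean distance to the remaining pool.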
22. The method of claim 21 , wherein the second threshold is higher than the first threshold.
23. The method of claim 21 , wherein the first feature vector is a closer match when the first feature vector is less of an outlier than the existing dynamic template being replaced.
24. The method of claim 23 , further comprising determining whether the first feature vector is less of the outlier based on statistical analysis of the first feature vector, the one or more reference templates, and the existing dynamic templates.
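Claim 24 does not specify the statistical analysis; one plausible and commonly used measure is the mean distance of a candidate to the pooled reference and dynamic templates. The templates and vectors below are purely hypothetical.

```python
def outlier_score(candidate, population):
    """Mean Euclidean distance from a candidate template to a population of
    templates; a lower score means the candidate is less of an outlier."""
    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5
    return sum(dist(candidate, p) for p in population) / len(population)

# Hypothetical templates: three reference/dynamic templates form a tight cluster
population = [(0.0, 0.0), (0.1, 0.0), (0.0, 0.1)]
new_vector = (0.05, 0.05)    # candidate from the latest unlock image
old_template = (1.0, 1.0)    # existing dynamic template far from the cluster

# The new vector replaces the old template only if it is less of an outlier
replace = outlier_score(new_vector, population) < outlier_score(old_template, population)
```

Here the candidate near the cluster scores far lower than the drifting template, so `replace` comes out true; variance- or quantile-based tests would fit the claim language equally well.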
25. The method of claim 21 , further comprising:
obtaining one or more reference images of the face of the user using the camera;
encoding facial features of the user from the one or more reference images to generate one or more reference feature vectors; and
storing the one or more reference feature vectors as the one or more reference templates in the memory of the device.
26. The method of claim 21 , wherein the at least one static template includes a template that is not deleted or changed in the memory of the device until the device is reset or a user determines to replace the one or more reference templates in the memory of the device.
27. The method of claim 21 , further comprising:
obtaining one or more additional images using the camera in response to additional authentication attempts initiated on the device;
storing the first feature vector as the dynamic template in the memory in response to the matching score being above the second threshold and the first feature vector meeting at least one criteria for the additional images in comparison to the at least one static template.
28. The method of claim 27 , further comprising:
encoding the additional images to generate second feature vectors representing one or more features of the user in the additional images;
generating a first set of matching scores between the second feature vectors and the first feature vector; and
generating a second set of matching scores between the second feature vectors and the at least one static template;
wherein the at least one criteria includes a number of matching scores above the first threshold in the first set of matching scores being greater than a number of matching scores above the first threshold in the second set of matching scores.
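The counting criterion of claim 28 reduces to comparing how many later images clear the first threshold against the candidate versus against the static template. A minimal sketch, with the threshold value and score lists as illustrative assumptions:

```python
def keeps_candidate(scores_vs_candidate, scores_vs_static, first_threshold=0.6):
    """Claim-28-style criterion: retain the candidate dynamic template only when
    more follow-up images match it above the first threshold than match the
    static template above that same threshold."""
    n_candidate = sum(1 for s in scores_vs_candidate if s > first_threshold)
    n_static = sum(1 for s in scores_vs_static if s > first_threshold)
    return n_candidate > n_static
```

For example, if three later unlock images score above threshold against the candidate but only one scores above threshold against the static template, the candidate is kept.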
29. A device, comprising:
a camera;
a processor coupled to the camera;
a memory coupled to the processor; and
circuitry coupled to the camera, wherein the circuitry is configured to:
obtain an image of a face of a user using the camera in response to an authentication attempt initiated by the user;
encode the image to generate a first feature vector representing one or more features of the user in the image;
authorize the user in the image as an authenticated user of the device based on a matching score between the first feature vector and one or more reference templates stored in the memory being above a first threshold, wherein the one or more reference templates include at least one static template; and
store the first feature vector as a dynamic template in the memory in response to the matching score being above a second threshold, wherein:
when a number of dynamic templates stored in the memory is below a predetermined number of dynamic templates allowed in the memory, the first feature vector is stored as a new dynamic template in the memory; and
when the number of dynamic templates stored in the memory is at the predetermined number of dynamic templates allowed in the memory, the first feature vector replaces an existing dynamic template in the memory when the first feature vector is a closer match to the one or more reference templates and existing dynamic templates in the memory as compared to the existing dynamic template being replaced.
30. The device of claim 29 , wherein the device further comprises at least one illuminator configured to provide infrared illumination directed towards the user when obtaining the image of the face of the user using the camera.
31. The device of claim 29 , wherein the camera includes an infrared sensor.
32. The device of claim 29 , wherein the second threshold is higher than the first threshold.
33. The device of claim 29 , wherein the circuitry is configured to enroll the authenticated user on the device in response to the authenticated user providing at least one additional authentication protocol on the device, and wherein the circuitry is configured to enroll the authenticated user by:
obtaining one or more reference images of the face of the user using the camera;
encoding facial features of the user from the one or more reference images to generate one or more reference feature vectors; and
storing the one or more reference feature vectors as the one or more reference templates in the memory of the device.
34. The device of claim 33 , wherein the at least one additional authentication protocol comprises entering a passcode for the authenticated user.
35. The device of claim 29 , wherein the circuitry is configured to update the first threshold when the first feature vector is stored as the dynamic template in the memory.
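Claim 35 leaves the threshold-update rule unspecified. One plausible reading, sketched below under that assumption, is to recompute the unlock threshold from the cross-match scores the stored templates produce against one another, with a margin and a safety floor that are both illustrative constants.

```python
def updated_first_threshold(cross_match_scores, margin=0.1, floor=0.5):
    """Hypothetical claim-35 update: after storing a new dynamic template,
    derive the unlock threshold from the mean cross-match score of the stored
    templates, minus a margin, but never below a safety floor."""
    mean = sum(cross_match_scores) / len(cross_match_scores)
    return max(floor, mean - margin)
```

A tighter, more self-consistent template set would thus raise the bar for authentication, while a noisy set can never push it below the floor.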
36. The device of claim 29 , wherein the circuitry is configured to:
obtain one or more additional images using the camera in response to additional authentication attempts initiated on the device;
store the first feature vector as the dynamic template in the memory in response to the matching score being above the second threshold and the first feature vector meeting at least one criteria for the additional images in comparison to the at least one static template.
37. A non-transitory computer-readable medium having instructions stored thereon that are executable by a computing device to perform operations, comprising:
obtaining an image of a face of a user using a camera located on the computing device in response to an authentication attempt initiated by the user;
encoding the image to generate a first feature vector representing one or more features of the user in the image;
authorizing the user in the image as an authenticated user of the device based on a matching score between the first feature vector and one or more reference templates stored in a memory of the computing device being above a first threshold, wherein the one or more reference templates include at least one static template; and
storing the first feature vector as a dynamic template in the memory in response to the matching score being above a second threshold, wherein:
when a number of dynamic templates stored in the memory is below a predetermined number of dynamic templates allowed in the memory, the first feature vector is stored as a new dynamic template in the memory; and
when the number of dynamic templates stored in the memory is at the predetermined number of dynamic templates allowed in the memory, the first feature vector replaces an existing dynamic template in the memory when the first feature vector is a closer match to the one or more reference templates and existing dynamic templates in the memory as compared to the existing dynamic template being replaced.
38. The non-transitory computer-readable medium of claim 37 , wherein the image obtained includes an image of the user captured while illuminating the user with flood infrared illumination.
39. The non-transitory computer-readable medium of claim 37 , wherein the image obtained includes an image of the user captured while illuminating the user with patterned infrared illumination.
40. The non-transitory computer-readable medium of claim 37 , wherein authorizing the user in the image as the authenticated user of the device allows the authenticated user to have access to a selected functionality of the device.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US16/708,770 US20200285875A1 (en) | 2017-08-01 | 2019-12-10 | Process for updating templates used in facial recognition |
Applications Claiming Priority (4)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US201762539739P | 2017-08-01 | 2017-08-01 | |
US201762556850P | 2017-09-11 | 2017-09-11 | |
US15/881,261 US10503992B2 (en) | 2017-08-01 | 2018-01-26 | Process for updating templates used in facial recognition |
US16/708,770 US20200285875A1 (en) | 2017-08-01 | 2019-12-10 | Process for updating templates used in facial recognition |
Related Parent Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US15/881,261 Continuation US10503992B2 (en) | 2017-08-01 | 2018-01-26 | Process for updating templates used in facial recognition |
Publications (1)
Publication Number | Publication Date |
---|---|
US20200285875A1 true US20200285875A1 (en) | 2020-09-10 |
Family
ID=61193106
Family Applications (2)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US15/881,261 Active US10503992B2 (en) | 2017-08-01 | 2018-01-26 | Process for updating templates used in facial recognition |
US16/708,770 Abandoned US20200285875A1 (en) | 2017-08-01 | 2019-12-10 | Process for updating templates used in facial recognition |
Family Applications Before (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US15/881,261 Active US10503992B2 (en) | 2017-08-01 | 2018-01-26 | Process for updating templates used in facial recognition |
Country Status (4)
Country | Link |
---|---|
US (2) | US10503992B2 (en) |
KR (1) | KR102362651B1 (en) |
CN (2) | CN109325327B (en) |
WO (1) | WO2019027504A1 (en) |
Cited By (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US11100204B2 (en) * | 2018-07-19 | 2021-08-24 | Motorola Mobility Llc | Methods and devices for granting increasing operational access with increasing authentication factors |
TWI806030B (en) * | 2021-03-31 | 2023-06-21 | 瑞昱半導體股份有限公司 | Processing circuit and processing method applied to face recognition system |
RU2823903C1 (en) * | 2023-10-30 | 2024-07-30 | Samsung Electronics Co., Ltd. | Methods of registering and updating biometric template of user using information on orientation of face of user and corresponding computer devices and data media |
Families Citing this family (24)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US10832130B2 (en) * | 2016-09-19 | 2020-11-10 | Google Llc | Recommending a document for a user to access |
US10776467B2 (en) | 2017-09-27 | 2020-09-15 | International Business Machines Corporation | Establishing personal identity using real time contextual data |
US10803297B2 (en) | 2017-09-27 | 2020-10-13 | International Business Machines Corporation | Determining quality of images for user identification |
US10839003B2 (en) | 2017-09-27 | 2020-11-17 | International Business Machines Corporation | Passively managed loyalty program using customer images and behaviors |
US10795979B2 (en) | 2017-09-27 | 2020-10-06 | International Business Machines Corporation | Establishing personal identity and user behavior based on identity patterns |
US10565432B2 (en) * | 2017-11-29 | 2020-02-18 | International Business Machines Corporation | Establishing personal identity based on multiple sub-optimal images |
CN108875533B (en) * | 2018-01-29 | 2021-03-05 | 北京旷视科技有限公司 | Face recognition method, device, system and computer storage medium |
KR20190098795A (en) * | 2018-01-30 | 2019-08-23 | 엘지전자 주식회사 | Vehicle device and control meghod of transportation system comprising thereof |
CN108573038A (en) * | 2018-04-04 | 2018-09-25 | 北京市商汤科技开发有限公司 | Image procossing, auth method, device, electronic equipment and storage medium |
JP2019204288A (en) * | 2018-05-23 | 2019-11-28 | 富士通株式会社 | Biometric authentication device, biometric authentication method and biometric authentication program |
US10747989B2 (en) * | 2018-08-21 | 2020-08-18 | Software Ag | Systems and/or methods for accelerating facial feature vector matching with supervised machine learning |
US11216541B2 (en) * | 2018-09-07 | 2022-01-04 | Qualcomm Incorporated | User adaptation for biometric authentication |
CN109977871B (en) * | 2019-03-27 | 2021-01-29 | 中国人民解放军战略支援部队航天工程大学 | Satellite target identification method based on broadband radar data and GRU neural network |
CN110009708B (en) * | 2019-04-10 | 2020-08-28 | 上海大学 | Color development transformation method, system and terminal based on image color segmentation |
CN110268419A (en) * | 2019-05-08 | 2019-09-20 | 深圳市汇顶科技股份有限公司 | A kind of face identification method, face identification device and computer readable storage medium |
CN110728783A (en) * | 2019-08-31 | 2020-01-24 | 苏州浪潮智能科技有限公司 | Self-correction method, system and equipment of face recognition system |
WO2021050042A1 (en) * | 2019-09-09 | 2021-03-18 | Google Llc | Face authentication embedding migration and drift-compensation |
US11687635B2 | 2019-09-25 | 2023-06-27 | Google LLC | Automatic exposure and gain control for face authentication |
CN113544692B (en) | 2019-10-10 | 2024-09-06 | 谷歌有限责任公司 | Camera synchronization and image tagging for facial authentication |
CN113449544A (en) * | 2020-03-24 | 2021-09-28 | 华为技术有限公司 | Image processing method and system |
US11520871B2 (en) * | 2020-09-09 | 2022-12-06 | International Business Machines Corporation | Authentication with face covering |
US11983965B2 (en) | 2020-11-05 | 2024-05-14 | Samsung Electronics Co., Ltd. | Electronic device for biometric authentication and method for operating the same |
US20230098230A1 (en) * | 2021-09-28 | 2023-03-30 | Himax Technologies Limited | Object detection system |
CN118644693A (en) * | 2024-08-12 | 2024-09-13 | 杭州萤石软件有限公司 | Information updating method, apparatus, device, storage medium and program product |
Family Cites Families (28)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP4314016B2 (en) * | 2002-11-01 | 2009-08-12 | 株式会社東芝 | Person recognition device and traffic control device |
CN1627317A (en) * | 2003-12-12 | 2005-06-15 | 北京阳光奥森科技有限公司 | Method for obtaining image of human faces by using active light source |
US7734067B2 (en) | 2004-12-07 | 2010-06-08 | Electronics And Telecommunications Research Institute | User recognition system and method thereof |
US8150142B2 (en) | 2007-04-02 | 2012-04-03 | Prime Sense Ltd. | Depth mapping using projected patterns |
US8384997B2 (en) | 2008-01-21 | 2013-02-26 | Primesense Ltd | Optical pattern projection |
US8180112B2 (en) * | 2008-01-21 | 2012-05-15 | Eastman Kodak Company | Enabling persistent recognition of individuals in images |
KR101082842B1 (en) * | 2008-12-10 | 2011-11-11 | 한국전자통신연구원 | Face recognition method by using face image and apparatus thereof |
JP5245971B2 (en) * | 2009-03-26 | 2013-07-24 | 富士通株式会社 | Biological information processing apparatus and method |
US20130163833A1 (en) * | 2010-09-17 | 2013-06-27 | Utc Fire & Security Corporation | Security device with security image update capability |
US10054430B2 (en) | 2011-08-09 | 2018-08-21 | Apple Inc. | Overlapping pattern projector |
US8749796B2 (en) | 2011-08-09 | 2014-06-10 | Primesense Ltd. | Projectors of structured light |
US8913839B2 (en) | 2011-09-27 | 2014-12-16 | University Of North Carolina At Wilmington | Demographic analysis of facial landmarks |
JP2013077068A (en) * | 2011-09-29 | 2013-04-25 | Sogo Keibi Hosho Co Ltd | Face authentication database management method, face authentication database management device, and face authentication database management program |
US9177130B2 (en) | 2012-03-15 | 2015-11-03 | Google Inc. | Facial feature detection |
CN103324904A (en) * | 2012-03-20 | 2013-09-25 | 凹凸电子(武汉)有限公司 | Face recognition system and method thereof |
US9268991B2 (en) * | 2012-03-27 | 2016-02-23 | Synaptics Incorporated | Method of and system for enrolling and matching biometric data |
US20150013949A1 (en) | 2013-04-19 | 2015-01-15 | Roger Arnot | Heat-exchange apparatus for insertion into a storage tank, and mounting components therefor |
US9928355B2 (en) | 2013-09-09 | 2018-03-27 | Apple Inc. | Background enrollment and authentication of a user |
JP5902661B2 (en) | 2013-09-30 | 2016-04-13 | 株式会社東芝 | Authentication apparatus, authentication system, and authentication method |
US9576126B2 (en) | 2014-02-13 | 2017-02-21 | Apple Inc. | Updating a template for a biometric recognition device |
CN103870805B (en) * | 2014-02-17 | 2017-08-15 | 北京释码大华科技有限公司 | A kind of mobile terminal biological characteristic imaging method and device |
US9292728B2 (en) | 2014-05-30 | 2016-03-22 | Apple Inc. | Electronic device for reallocating finger biometric template nodes in a set memory space and related methods |
US9230152B2 (en) | 2014-06-03 | 2016-01-05 | Apple Inc. | Electronic device for processing composite finger matching biometric data and related methods |
JP2016081249A (en) * | 2014-10-15 | 2016-05-16 | 株式会社ソニー・コンピュータエンタテインメント | Information processing device and information processing method |
US20160125223A1 (en) | 2014-10-30 | 2016-05-05 | Apple Inc. | Electronic device including multiple speed and multiple accuracy finger biometric matching and related methods |
RU2691195C1 (en) | 2015-09-11 | 2019-06-11 | Айверифай Инк. | Image and attribute quality, image enhancement and identification of features for identification by vessels and individuals, and combining information on eye vessels with information on faces and/or parts of faces for biometric systems |
US10769255B2 (en) * | 2015-11-11 | 2020-09-08 | Samsung Electronics Co., Ltd. | Methods and apparatuses for adaptively updating enrollment database for user authentication |
US10192103B2 (en) | 2016-01-15 | 2019-01-29 | Stereovision Imaging, Inc. | System and method for detecting and removing occlusions in a three-dimensional image |
-
2018
- 2018-01-26 CN CN201810079434.3A patent/CN109325327B/en active Active
- 2018-01-26 US US15/881,261 patent/US10503992B2/en active Active
- 2018-01-26 KR KR1020207005027A patent/KR102362651B1/en active IP Right Grant
- 2018-01-26 CN CN201820136799.0U patent/CN212846789U/en active Active
- 2018-01-26 WO PCT/US2018/015511 patent/WO2019027504A1/en active Application Filing
-
2019
- 2019-12-10 US US16/708,770 patent/US20200285875A1/en not_active Abandoned
Also Published As
Publication number | Publication date |
---|---|
US10503992B2 (en) | 2019-12-10 |
WO2019027504A1 (en) | 2019-02-07 |
CN109325327A (en) | 2019-02-12 |
KR102362651B1 (en) | 2022-02-14 |
CN109325327B (en) | 2021-08-03 |
US20190042866A1 (en) | 2019-02-07 |
KR20200032161A (en) | 2020-03-25 |
CN212846789U (en) | 2021-03-30 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
US10503992B2 (en) | Process for updating templates used in facial recognition | |
US10430645B2 (en) | Facial recognition operations based on pose | |
US11693937B2 (en) | Automatic retries for facial recognition | |
US11163981B2 (en) | Periocular facial recognition switching | |
US10719692B2 (en) | Vein matching for difficult biometric authentication cases | |
US11113510B1 (en) | Virtual templates for facial recognition | |
US11367305B2 (en) | Obstruction detection during facial recognition processes | |
US20210133428A1 (en) | Occlusion detection for facial recognition processes | |
US10769415B1 (en) | Detection of identity changes during facial recognition enrollment process | |
US10990805B2 (en) | Hybrid mode illumination for facial recognition authentication | |
US11935327B1 (en) | On the fly enrollment for facial recognition | |
KR102564951B1 (en) | Multiple Registrations of Face Recognition | |
AU2020100218A4 (en) | Process for updating templates used in facial recognition |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
| STPP | Information on status: patent application and granting procedure in general | Free format text: NON FINAL ACTION MAILED |
| STPP | Information on status: patent application and granting procedure in general | Free format text: RESPONSE TO NON-FINAL OFFICE ACTION ENTERED AND FORWARDED TO EXAMINER |
| STPP | Information on status: patent application and granting procedure in general | Free format text: FINAL REJECTION MAILED |
| STPP | Information on status: patent application and granting procedure in general | Free format text: NON FINAL ACTION MAILED |
| STCB | Information on status: application discontinuation | Free format text: ABANDONED -- FAILURE TO RESPOND TO AN OFFICE ACTION |