WO2019173562A1 - Systems and methods for privacy-enabled biometric processing - Google Patents

Systems and methods for privacy-enabled biometric processing

Info

Publication number
WO2019173562A1
Authority
WO
WIPO (PCT)
Prior art keywords
biometric
feature vectors
input
classification
dnn
Prior art date
Application number
PCT/US2019/021100
Other languages
English (en)
Inventor
Scott Edward STREIT
Original Assignee
Open Inference Holdings LLC
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Priority claimed from US15/914,436 external-priority patent/US10419221B1/en
Priority claimed from US15/914,942 external-priority patent/US10721070B2/en
Priority claimed from US15/914,969 external-priority patent/US11138333B2/en
Priority claimed from US15/914,562 external-priority patent/US11392802B2/en
Priority claimed from US16/218,139 external-priority patent/US11210375B2/en
Application filed by Open Inference Holdings LLC filed Critical Open Inference Holdings LLC
Priority to CA3092941A priority Critical patent/CA3092941A1/fr
Priority to EP19712657.6A priority patent/EP3762867A1/fr
Priority to AU2019230043A priority patent/AU2019230043A1/en
Publication of WO2019173562A1 publication Critical patent/WO2019173562A1/fr

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F7/00Methods or arrangements for processing data by operating upon the order or content of the data handled
    • G06F7/38Methods or arrangements for performing computations using exclusively denominational number representation, e.g. using binary, ternary, decimal representation
    • G06F7/48Methods or arrangements for performing computations using exclusively denominational number representation, e.g. using binary, ternary, decimal representation using non-contact-making devices, e.g. tube, solid state device; using unspecified devices
    • G06F7/544Methods or arrangements for performing computations using exclusively denominational number representation, e.g. using binary, ternary, decimal representation using non-contact-making devices, e.g. tube, solid state device; using unspecified devices for evaluating functions by calculation
    • G06F7/5443Sum of products
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/25Fusion techniques
    • G06F18/254Fusion techniques of classification results, e.g. of results related to same input data
    • G06F18/256Fusion techniques of classification results, e.g. of results related to same input data of results relating to different input data, e.g. multimodal recognition
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F21/00Security arrangements for protecting computers, components thereof, programs or data against unauthorised activity
    • G06F21/30Authentication, i.e. establishing the identity or authorisation of security principals
    • G06F21/31User authentication
    • G06F21/32User authentication using biometric data, e.g. fingerprints, iris scans or voiceprints
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F7/00Methods or arrangements for processing data by operating upon the order or content of the data handled
    • G06F7/02Comparing digital values
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16Human faces, e.g. facial parts, sketches or expressions
    • G06V40/168Feature extraction; Face representation
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16Human faces, e.g. facial parts, sketches or expressions
    • G06V40/172Classification, e.g. identification
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/40Spoof detection, e.g. liveness detection
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/50Maintenance of biometric data or enrolment thereof
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/70Multimodal biometrics, e.g. combining information from different biometric modalities
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04LTRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L9/00Cryptographic mechanisms or cryptographic arrangements for secret or secure communications; Network security protocols
    • H04L9/32Cryptographic mechanisms or cryptographic arrangements for secret or secure communications; Network security protocols including means for verifying the identity or authority of a user of the system or for message authentication, e.g. authorization, entity authentication, data integrity or data verification, non-repudiation, key authentication or verification of credentials
    • H04L9/3226Cryptographic mechanisms or cryptographic arrangements for secret or secure communications; Network security protocols including means for verifying the identity or authority of a user of the system or for message authentication, e.g. authorization, entity authentication, data integrity or data verification, non-repudiation, key authentication or verification of credentials using a predetermined code, e.g. password, passphrase or PIN
    • H04L9/3231Biological data, e.g. fingerprint, voice or retina
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/25Fusion techniques
    • G06F18/259Fusion by voting
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F2207/00Indexing scheme relating to methods or arrangements for processing data by operating upon the order or content of the data handled
    • G06F2207/38Indexing scheme relating to groups G06F7/38 - G06F7/575
    • G06F2207/48Indexing scheme relating to groups G06F7/48 - G06F7/575
    • G06F2207/4802Special implementations
    • G06F2207/4818Threshold devices
    • G06F2207/4824Neural networks
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F2218/00Aspects of pattern recognition specially adapted for signal processing
    • G06F2218/08Feature extraction

Definitions

  • Biometrics offer the opportunity for identity assurance and identity validation. Many conventional uses for biometrics currently exist for identity and validation. These conventional approaches suffer from many flaws. For example, the IPHONE facial recognition service limits implementation to a one to one match. This limitation is due to the inability to perform one to many searching on the biometric, let alone on a secure encrypted biometric. Other potential issues include faked biometric or replayed biometric signals that can be used to trick many conventional security systems.
  • Various embodiments of the privacy-enabled biometric system provide for scanning of multiple biometrics to determine matches or closeness. Further embodiments can provide for search and matching across multiple types of encrypted biometric information improving accuracy of validation over many conventional approaches, while improving the security over the same approaches.
  • Coupling a liveness factor into identity assurance and validation resolves problems with conventional security, closing security holes that allow replayed or faked biometric signals.
  • Further embodiments incorporate random liveness checks (e.g., with random biometric requests (e.g., voice identification coupled with identification of random words or syllables)) as part of a multi-factor authentication.
  • Imaging and facial recognition can be executed in conjunction with random liveness testing of a separate biometric (e.g., voice identification with random word requests) to complete authentication.
  • Privacy-enabled biometrics include, for example, privacy-enabled facial recognition and/or voice identification.
  • an authentication system can test liveness and test biometric identity using fully encrypted reference biometrics.
  • the system is configured to execute comparisons directly on the encrypted biometrics (e.g., encrypted feature vectors of the biometric or encrypted embeddings derived from the unencrypted biometric) to determine authenticity with a learning neural network.
  • a first neural network is used to process unencrypted biometric inputs and generate Euclidean measurable encrypted feature vectors or encrypted embeddings.
  • the encrypted feature vectors are used to train a classification deep neural network.
  • Multiple learning networks (e.g., deep neural networks) can be used, each processing a respective type of biometric input (e.g., facial/feature biometrics, voice biometrics, health/biologic data biometrics, etc.).
  • multiple biometric types can be processed into an authentication system to increase accuracy of identification.
  • a set of encrypted feature vectors or encrypted embeddings can be derived from any biometric data (e.g., using a first pre-trained neural network), and then using a deep neural network (“DNN”) on those one-way homomorphic encryptions (i.e., each biometrics’ feature vector or each biometrics embedding values) a system can determine matches or execute searches on the encrypted data.
  • Each of the biometrics’ encrypted feature vectors/embeddings can then be stored and/or used in conjunction with respective classifications for use in subsequent comparisons without fear of compromising the original biometric data.
  • any unencrypted or original biometric data is discarded responsive to generating the encrypted values.
  • the homomorphic encryption enables computations and comparisons on cypher text without decryption. This improves security over conventional approaches. Searching biometrics in the clear on any system, represents a significant security vulnerability. In various examples described herein, only the one-way encrypted biometric data is available on a given device. Various embodiments restrict execution to occur on encrypted biometrics for any matching or searching.
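The comparison-without-decryption idea can be sketched with a stand-in transform. The source relies on a pre-trained neural network to produce Euclidean-measurable one-way encryptions; the `one_way_embed` function below is not that network — it is a hypothetical fixed random projection, chosen only because it is non-invertible yet approximately distance-preserving, so matching can run entirely on the encodings after the raw captures are discarded:

```python
import math
import random

def one_way_embed(biometric, seed=42, out_dim=8):
    """Stand-in for the pre-trained network: a fixed, seeded random
    projection. With out_dim < len(biometric) it cannot be inverted,
    yet it approximately preserves Euclidean distances, so outputs
    are 'Euclidean measurable' one-way encodings."""
    rng = random.Random(seed)
    matrix = [[rng.gauss(0, 1) for _ in biometric] for _ in range(out_dim)]
    scale = 1 / math.sqrt(out_dim)
    return [scale * sum(w * x for w, x in zip(row, biometric)) for row in matrix]

def euclidean(a, b):
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

# Two noisy captures of the same simulated biometric, plus an impostor.
rng = random.Random(0)
reference = [rng.gauss(0, 1) for _ in range(64)]
same_person = [x + rng.gauss(0, 0.05) for x in reference]
impostor = [rng.gauss(0, 1) for _ in range(64)]

e_ref, e_same, e_imp = (one_way_embed(v) for v in (reference, same_person, impostor))

# All comparisons run on the one-way encodings; raw captures can be discarded.
assert euclidean(e_ref, e_same) < euclidean(e_ref, e_imp)
```

The design point the sketch illustrates: because distance survives the one-way transform, matching never needs the plaintext biometric on the device.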
  • an authentication system can also analyze an assurance factor while processing biometric input to ensure that the biometric input is generated by the individual seeking authentication (i.e., not pre-recorded or faked biometric signaling).
  • the authentication system is configured to request randomly selected instances (e.g., system random selection) of a biometric input (e.g., randomly selected words).
  • the system as part of one process can evaluate the received voice information to determine an identity match, while processing the received voice information to ensure that received voice information matches the randomly selected words.
  • the authentication system is able to validate that an identity match (e.g., neural network prediction of identity) was supplied at the time requested and by the entity trying to confirm their identity (i.e., not a pre-recorded or replayed biometric signal).
  • system and/or connected devices can collect biometric information of multiple types (e.g., facial features and voice, among other options) to increase accuracy of identity matching, which can be further augmented with liveness detection to prevent spoofing or fraud.
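The random-word liveness flow can be illustrated with a toy challenge/response. `WORD_BANK`, `issue_challenge`, and `authenticate` are hypothetical names invented for this sketch, and the speech-recognition step is simulated by passing word lists directly rather than audio:

```python
import secrets

WORD_BANK = ["amber", "falcon", "granite", "meadow", "copper", "willow",
             "harbor", "summit", "lantern", "orchid"]

def issue_challenge(n_words=3):
    """Server side: pick a fresh random phrase for this attempt."""
    return [secrets.choice(WORD_BANK) for _ in range(n_words)]

def liveness_score(challenge, transcribed_words):
    """Fraction of challenge words heard back in order. A replayed
    recording cannot contain words chosen after it was made."""
    matches = sum(1 for c, t in zip(challenge, transcribed_words) if c == t.lower())
    return matches / len(challenge)

def authenticate(identity_label, challenge, transcribed_words,
                 liveness_threshold=0.99):
    """Both factors must pass: the voice must match an enrolled identity
    AND the spoken content must match the just-issued random phrase."""
    live = liveness_score(challenge, transcribed_words) >= liveness_threshold
    return live and identity_label != "UNKNOWN"

challenge = issue_challenge()
# A genuine user reads the prompt back correctly...
assert authenticate("person_17", challenge, challenge) is True
# ...while a replay attack supplies a stale, different phrase.
stale = ["river", "stone", "cloud"]
assert authenticate("person_17", challenge, stale) is False
```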
  • an authentication system for evaluating privacy-enabled biometrics and validating contemporaneous input of biometrics.
  • the system comprises at least one processor operatively connected to a memory; an interface, executed by the at least one processor, configured to: receive a candidate set of instances of a first biometric data type input by a user requesting authentication; a classification component executed by the at least one processor, configured to: analyze a liveness threshold, wherein analyzing the liveness threshold includes processing the candidate set of instances to determine that the candidate set of instances matches a random set of instances; the classification component further comprising at least a first deep neural network (“DNN”), the classification component configured to: accept encrypted feature vectors (e.g., voice feature vectors, etc.), generated from a first neural network, the first neural network configured to process an unencrypted input of the first data type into the encrypted feature vectors; classify with the first DNN the encrypted feature vectors of the first biometric type during training, based on training the first DNN with encrypted feature vector and label inputs; return a label for person identification or an unknown result during prediction responsive to analyzing encrypted feature vectors; and confirm authentication based at least on the label and the liveness threshold.
  • the system further comprises a liveness component, executed by the at least one processor, configured to generate a random set of instances of a first biometric type in response to an authentication request.
  • the system is configured to request a user provide the candidate set of instances of the first biometric data type based on the generated random set of instances.
  • the interface is configured to prompt user input of the randomly selected instances of the first biometric input to establish a threshold volume of biometric information confirmed at validation.
  • the classification component further comprises at least a second deep neural network (“DNN”) configured to: accept encrypted feature vectors (e.g., face feature vectors, etc.), generated from a second neural network, the second neural network configured to process an unencrypted input of the second data type into the encrypted feature vectors; return a label for person identification or an unknown result during prediction responsive to analyzing encrypted feature vectors; and wherein the classification component is configured to confirm identification based on matching the label for person identification from the first and second DNNs.
  • the second DNN is configured to classify the encrypted feature vectors of the second biometric type during training, based on training the second DNN with encrypted feature vector and label inputs.
  • the system further comprises the first neural network configured to process an unencrypted input of the first data type into the encrypted feature vectors.
  • the system further comprises a pre-processing component configured to reduce a volume of unencrypted input biometric information for input into the first neural network.
  • the classification component is configured to incrementally update the first DNN with new person labels and new persons feature vectors, based on updating null or undefined elements defined in the first DNN at training, and maintaining the network architecture and accommodating the unknown result for subsequent predictions without requiring full retraining of the first DNN.
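One way to read the null/undefined-element idea is as a classifier whose output slots are pre-allocated at training time and filled on enrollment. The `IncrementalClassifier` below is an illustrative nearest-template stand-in (not the source's DNN); it shows how a new person occupies a reserved slot so the architecture never changes and no full retraining is needed:

```python
class IncrementalClassifier:
    """Toy model of incremental enrollment: the output layer is built
    with extra, initially undefined ('null') class slots, so a new
    person fills a reserved slot instead of forcing full retraining."""

    def __init__(self, capacity=100):
        self.slots = [None] * capacity   # None == null/undefined class
        self.templates = {}              # slot index -> enrolled feature vector

    def enroll(self, label, feature_vector):
        slot = self.slots.index(None)    # first free slot; architecture unchanged
        self.slots[slot] = label
        self.templates[slot] = feature_vector
        return slot

    def predict(self, feature_vector, threshold=1.0):
        best_slot, best_dist = None, float("inf")
        for slot, tmpl in self.templates.items():
            dist = sum((a - b) ** 2 for a, b in zip(tmpl, feature_vector)) ** 0.5
            if dist < best_dist:
                best_slot, best_dist = slot, dist
        if best_slot is None or best_dist > threshold:
            return "UNKNOWN"             # unknown result is always available
        return self.slots[best_slot]

clf = IncrementalClassifier(capacity=4)
clf.enroll("alice", [0.0, 0.0, 1.0])
clf.enroll("bob", [1.0, 0.0, 0.0])
assert clf.predict([0.05, 0.0, 0.98]) == "alice"
assert clf.predict([0.0, 5.0, 0.0]) == "UNKNOWN"
```

The distance threshold of 1.0 is an assumption for the sketch; in the described system the decision comes from the DNN's output values rather than a raw distance.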
  • the system is configured to analyze the output values from the first DNN and based on positioning of the output values in an array and the values in those positions, determine the label or unknown.
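The array-position reading of the output can be sketched as an argmax plus a confidence cutoff: the position of the largest value selects the candidate label, and the value itself decides between that label and unknown. The function name and the 0.9 threshold are assumptions for illustration, not taken from the source:

```python
def interpret_output(scores, labels, threshold=0.9):
    """Map the DNN's output array to a label or 'UNKNOWN': the index of
    the largest value picks the candidate label; a value below the
    threshold yields the unknown result instead."""
    best = max(range(len(scores)), key=scores.__getitem__)
    if scores[best] < threshold:
        return "UNKNOWN"
    return labels[best]

labels = ["alice", "bob", "carol"]
assert interpret_output([0.02, 0.95, 0.03], labels) == "bob"
assert interpret_output([0.40, 0.35, 0.25], labels) == "UNKNOWN"
```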
  • a computer implemented method for evaluating privacy-enabled biometrics and validating contemporaneous input of biometrics comprises: receiving, by at least one processor, a candidate set of instances of a first biometric data type input by a user requesting authentication; analyzing, by the at least one processor, a liveness threshold, wherein analyzing the liveness threshold includes processing the candidate set of instances to determine that the candidate set of instances matches a random set of instances;
  • accepting, by a first deep neural network (“DNN”) executed by the at least one processor, encrypted feature vectors (e.g., voice feature vectors, etc.), generated from a first neural network, the first neural network configured to process an unencrypted input of the first data type into the encrypted feature vectors; classifying, by the first DNN, the encrypted feature vectors of the first biometric type during training, based on training the first DNN with encrypted feature vector and label inputs; returning, by the first DNN, a label for person identification or an unknown result during prediction responsive to analyzing encrypted feature vectors; and confirming authentication based at least on the label and the liveness threshold.
  • the method further comprises: determining for values above the liveness threshold that the input matches the random set of instances; and determining for values below the threshold that a current authentication request is invalid.
  • the method further comprises generating a random set of instances of a first biometric type in response to an authentication request.
  • the method further comprises requesting a user provide the candidate set of instances of the first biometric data type based on the generated random set of instances.
  • the method further comprises prompting user input of the randomly selected instances of the first biometric input to establish a threshold volume of biometric information confirmed at validation.
  • the method further comprises: accepting, by at least a second deep neural network, encrypted feature vectors (e.g., face feature vectors, etc.), generated from a second neural network, the second neural network configured to process an unencrypted input of the second data type into the encrypted feature vectors; returning, by the second DNN a label for person identification or an unknown result during prediction responsive to analyzing encrypted feature vectors; and confirming identification based on matching the label for person identification from the first and second DNNs.
  • the second DNN is configured to classify the encrypted feature vectors of the second biometric type during training, based on training the second DNN with encrypted feature vector and label inputs.
  • the method further comprises processing, by the first neural network, an unencrypted input of the first data type into the encrypted feature vectors.
  • the method further comprises incrementally updating the first DNN with new person labels and new persons feature vectors, based on updating null or undefined elements established in the first DNN at training, and maintaining the architecture of the first DNN and accommodating the unknown result for subsequent predictions without requiring full retraining of the first DNN.
  • an authentication system for evaluating privacy-enabled biometrics and contemporaneous input of biometrics for processing.
  • the system comprises at least one processor operatively connected to a memory, the at least one processor configured to generate in response to an authentication request, a random set of instances of a first biometric input of a first biometric data type (e.g., random words), an interface, executed by the at least one processor configured to: receive a candidate set of instances of a first biometric data type input by a user requesting authentication, for example, wherein the interface is configured to prompt a user to submit the first biometric input according to the randomly selected set of instances (e.g., display random words); a classification component executed by the at least one processor, configured to: analyze a liveness threshold; determine for values above the liveness threshold that the user is submitting the biometric information concurrent with or responsive to the authentication request; determine for values below the threshold that a current authentication request is unacceptable (e.g., invalid or incorrect, etc.), wherein analyzing the liveness threshold includes processing the candidate set of instances to determine that the candidate set of instances matches the random set of instances.
  • the system further comprises a feature vector generation component comprising a pre-trained neural network configured to generate Euclidean measurable encrypted feature vectors as an output of at least one layer in the neural network responsive to input of an unencrypted biometric input.
  • an authentication system for evaluating privacy-enabled biometrics and liveness, the system comprising: at least one processor operatively connected to a memory; an interface configured to: accept a first biometric input associated with a first biometric data type (e.g., video or imaging); accept a second biometric input associated with a second biometric type, wherein the interface is configured to prompt a user to provide the second biometric input according to randomly selected instances of the second biometric input (e.g., the second biometric input providing voice and the randomly selected instances providing liveness); a classification component executed by the at least one processor, comprising at least a first and second deep neural network (“DNN”), the classification component configured to: accept encrypted feature vectors generated with a first classification neural network for processing a first type of an unencrypted biometric (e.g., pre-trained NN to classify the biometric input (e.g., FACENET, etc.)); accept encrypted feature vectors generated with a second classification neural network for processing a second type of unencrypted biometric.
  • encrypted search can be executed on the system in polynomial time, even in a one to many use case. This feature enables scalability that conventional systems cannot perform and enables security/privacy unavailable in many conventional approaches.
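A minimal sketch of the one-to-many case: identification reduces to a linear (hence polynomial-time) scan over stored one-way encodings, with nothing decrypted along the way. The function name, the gallery contents, and the distance threshold are illustrative assumptions:

```python
import math

def one_to_many_search(probe, gallery, threshold=0.8):
    """Linear scan over one-way encrypted embeddings: O(n) comparisons,
    i.e., polynomial time even for one-to-many identification. Only the
    encrypted vectors are ever touched."""
    hits = []
    for label, enc_vec in gallery.items():
        dist = math.sqrt(sum((a - b) ** 2 for a, b in zip(probe, enc_vec)))
        if dist <= threshold:
            hits.append((dist, label))
    # Closest matches first; an empty list plays the role of 'unknown'.
    return [label for _, label in sorted(hits)]

gallery = {"alice": [0.0, 1.0], "bob": [1.0, 0.0], "carol": [0.7, 0.3]}
assert one_to_many_search([0.95, 0.05], gallery) == ["bob", "carol"]
assert one_to_many_search([-5.0, -5.0], gallery) == []
```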
  • a privacy-enabled biometric system comprising at least one processor operatively connected to a memory; a classification component executed by the at least one processor, comprising a classification network having a deep neural network (“DNN”) configured to classify feature vector inputs during training and return a label for person identification or an unknown result during prediction; and the classification component is further configured to accept as an input feature vectors that are Euclidean measurable and return the unknown result or the label as output.
  • a set of biometric feature vectors is used for training in the DNN neural network for subsequent prediction.
  • biometrics are morphed a finite number of times to create additional biometrics for training of the second (classification) neural network.
  • the second neural network is loaded with the label and a finite number of feature vectors based on an input biometric.
  • the classification component is configured to accept or extract from another neural network Euclidean measurable feature vectors.
  • the another neural network comprises a pre-trained neural network.
  • this network takes in a plaintext biometric and returns a Euclidean measurable feature vector that represents a one-way encrypted biometric.
  • the classification neural network comprises a classification based deep neural network configured for dynamic training with label and feature vector input pairs.
  • a feature vector is input for prediction.
  • the system further comprises a preprocessing component configured to validate plaintext biometric input.
  • the classification component is configured with a plurality of modes of execution, including an enrollment mode configured to accept, as input, a label and feature vectors on which to train the classification network for subsequent prediction.
  • the classification component is configured to predict a match, based on a feature vector as input, to an existing label or to return an unknown result.
  • the classification component is configured to incrementally update an existing model, maintaining the network architecture (e.g., weighting values, loss function values, etc.) and accommodating the unknown result for subsequent predictions. In various embodiments, incremental updating the existing model avoids re-training operations that conventional approaches require.
  • the system is configured to analyze the output values and based on their position and the values, determine the label or unknown.
  • the classification network further comprises an input layer for accepting feature vectors of a number of dimensions, the input layer having a number of nodes at least equal to the number of dimensions of the feature vector input, a first and a second hidden layer, and an output layer that generates an array of values.
  • the fully connected neural network further comprises an input layer for accepting feature vectors of a number of dimensions, the input layer having a number of nodes at least equal to the number of dimensions of the feature vector input, a first hidden layer of at least 500 dimensions, a second hidden layer of at least twice the number of input dimensions, and an output layer that generates an array of values; based on their position in the array and the values at respective positions, the system determines the label or an unknown result.
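The layer sizing described above can be written down directly. `build_layer_shapes` is a hypothetical helper; only the "at least 500" and "at least twice the input dimension" constraints come from the text, and the concrete sizes in the example are arbitrary:

```python
def build_layer_shapes(input_dim, n_classes):
    """Layer sizes per the description: input nodes equal to the
    feature-vector dimensionality, a first hidden layer of at least 500
    units, a second hidden layer of at least twice the input dimension,
    and an output array with one position per enrollable label."""
    return [
        ("input",   input_dim),
        ("hidden1", max(500, input_dim)),  # 'at least 500 dimensions'
        ("hidden2", 2 * input_dim),        # 'at least twice the input'
        ("output",  n_classes),            # one array slot per label
    ]

shapes = dict(build_layer_shapes(input_dim=128, n_classes=100))
assert shapes["hidden1"] >= 500
assert shapes["hidden2"] == 256
```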
  • a set of biometric feature vectors is used for training the DNN neural network for subsequent prediction.
  • a computer implemented method for executing privacy-enabled biometric training comprises instantiating, by at least one processor, a classification component comprising a classification network having a deep neural network (“DNN”) configured to classify feature vector inputs during training and return a label for person identification or an unknown result during prediction; accepting, by the classification component, feature vectors that are Euclidean measurable and a label as input for training the classification network, and Euclidean measurable feature vectors for prediction functions with the classification network; and classifying, by the classification component executed on at least one processor, the feature vector inputs and the label during training.
  • the method further comprises accepting or extracting, by the classification component, from another neural network the Euclidean measurable feature vectors.
  • the another neural network comprises a pre-trained neural network.
  • the classification neural network comprises a classification based deep neural network configured for dynamic training with label and feature vector input pairs.
  • the method further comprises an act of validating input biometrics used to generate a feature vector.
  • the method further comprises an act of triggering a respective one of a plurality of modes of operation, including an enrollment mode configured to accept a label and feature vectors for an individual.
  • the method further comprises an act of predicting a match to an existing label or returning an unknown result responsive to accepting a biometric feature vector as input.
  • the method further comprises an act of updating the classification network with respective vectors for use in subsequent predictions.
  • the input used for prediction may also be used to re-train the classification network for the individual.
  • the method further comprises an act of updating, incrementally, an existing node in the classification network and maintaining the network architecture to accommodate the feature vector for subsequent predictions.
  • the classification network further comprises an input layer for accepting feature vectors of a number of dimensions, the input layer having a number of nodes at least equal to the number of dimensions of the feature vector input, a first and second hidden layer and an output layer that generates an array of values.
  • a method comprises instantiating a classification component comprising a classification network having a deep neural network (“DNN”) configured to classify feature vector and label inputs during training and return a label for person identification or an unknown result during prediction; accepting, by the classification component, feature vectors that are Euclidean measurable and a label as input for training the classification network, and Euclidean measurable feature vectors for prediction functions with the classification network; and classifying, by the classification component executed on at least one processor, the feature vector inputs and the label during training.
  • the method further comprises an act of accepting or extracting, by the classification component, from another neural network Euclidean measurable feature vectors.
  • the another neural network comprises a pretrained neural network.
  • the computer readable medium contains instructions to perform any of the method steps above, individually, in combination, or in any combination.
  • a privacy-enabled biometric system comprising a classification means comprising a classifying deep neural network (“DNN”) executed by at least one processor, the DNN configured to: classify feature vector inputs and return a label for person identification or an unknown result as a prediction; and accept as an input, feature vectors that are Euclidean measurable and a label as an instance of training.
  • a privacy-enabled biometric system comprising at least one processor operatively connected to a memory; a classification component executed by the at least one processor, including a classification network having a deep neural network (“DNN”) configured to classify feature vector inputs during training and return a label for person identification or an unknown result during prediction, wherein the classification component is further configured to accept as an input feature vectors that are Euclidean measurable; a feature vector generation component comprising a pre-trained neural network configured to generate Euclidean measurable feature vectors as an output of at least one layer in the neural network responsive to input of an unencrypted biometric input.
  • the classification component is further configured to accept one way homomorphic, Euclidean measurable vectors, and labels for person identification as input for training.
  • the classification component is configured to accept or extract from the pre-trained neural network the feature vectors.
  • the pre-trained neural network includes an output generation layer which provides Euclidean measurable feature vectors.
  • the classification network comprises a deep neural network suitable for training and, for prediction, output of a list of values allowing the selection of a label or an unknown result.
  • the pre-trained network generates feature vectors on a first biometric type (e.g., image, voice, health data, iris, etc.); and the classification component is further configured to accept feature vectors from another neural network that generates Euclidean measurable feature vectors on a second biometric type.
  • the system is configured to instantiate multiple classification networks each associated with at least one different biometric type relative to another classification network, and classify input feature vectors based on executing at least a first or second classification network.
  • the system is configured to execute a voting procedure to increase accuracy of identification based, for example, on multiple biometric inputs or multiple types of biometric input.
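As an illustrative sketch of such a voting procedure (the simple majority rule, the `min_votes` parameter, and the function name are assumptions for illustration, not the claimed implementation), each biometric classifier contributes one predicted label:

```python
from collections import Counter

def vote_on_identity(predictions, min_votes=2):
    """Combine labels predicted by classifiers for different biometric
    inputs or types (e.g., face, voice) into one identification result.

    predictions: list of labels, one per classification; each entry is
    either a person label or "UNKNOWN".
    Returns the label receiving at least `min_votes` votes, else "UNKNOWN".
    """
    counts = Counter(p for p in predictions if p != "UNKNOWN")
    if counts:
        label, votes = counts.most_common(1)[0]
        if votes >= min_votes:
            return label
    return "UNKNOWN"
```

Requiring agreement across multiple biometrics in this way increases identification accuracy relative to any single classifier.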
  • the system is configured to maintain at least an executing copy of the classifying network and an updatable copy of classification network that can be locked or put in an offline state to enable retraining operations while the executing copy of the classifying network handles any classification requests.
  • the classification component is configured with a plurality of modes of execution, including an enrollment mode configured to accept a label for identification and the input feature vectors for an individual from the feature vector generation component.
  • the classification component is configured to predict a match to an existing label or to return an unknown result based on feature vectors enrolled in the classification network.
  • the classification component is configured to incrementally update an existing node in the neural network maintaining the network architecture and accommodating the unknown result for subsequent predictions.
  • the classification network further comprises an input layer for accepting feature vectors of a number of dimensions, the input layer having a number of nodes at least equal to the number of dimensions of the feature vector input, a first hidden layer, a second hidden layer, and an output layer that generates an array of values that, based on their position and values, determine the label or unknown result.
  • the classification network further comprises a plurality of layers including two hidden layers and an output layer having a number of nodes at least equal to the number of dimensions of the feature vector input.
  • a computer implemented method for executing privacy-enabled biometric analysis comprises instantiating, by at least one processor, a classification component comprising a deep neural network (“DNN”) configured to classify feature vector inputs during training and return a label for person identification or an unknown result during prediction, and a feature vector generation component comprising a pre-trained neural network; generating, by the feature vector generation component, Euclidean measurable feature vectors as an output of at least one layer in the pre-trained neural network responsive to input of an unencrypted biometric input; accepting, by the classification component, as an input feature vectors that are Euclidean measurable generated by the feature vector generation component and a label for training the classification network, and Euclidean measurable feature vectors for prediction functions with the classification network; and classifying, by the classification component executed on at least one processor, the feature vector inputs and the label during training.
  • the method further comprises accepting or extracting, by the classification network the Euclidean measurable feature vectors from the pre-trained neural network.
  • the second neural network comprises a pretrained neural network.
  • the method further comprises an act of validating input feature vectors as Euclidean measurable.
  • the method further comprises generating, by the classification component, feature vectors on a first biometric type (e.g., image, voice, health data, iris, etc.); and accepting, by the classification component, feature vectors from another neural network that generates Euclidean measurable feature vectors on a second biometric type.
  • the method further comprises: instantiating multiple classification networks each associated with at least one different biometric type relative to another classification network, and classifying input feature vectors based on applying at least a first or second classification network.
  • the method further comprises executing a voting procedure to increase accuracy of identification based on multiple biometric inputs or multiple types of biometric input and respective classifications.
  • for a biometric to be considered a match, it must receive a plurality of votes based on a plurality of biometrics.
  • the method further comprises instantiating multiple copies of the classification network to enable at least an executing copy of the classification network, and an updatable classification network that can be locked or put in an offline state to enable retraining operations while the executing copy of the classification network handles any classification requests.
  • the method further comprises predicting a match to an existing label or to return an unknown result based, at least in part, on feature vectors enrolled in the classification network.
  • the method further comprises updating, incrementally, an existing model in the classification network maintaining the network architecture and accommodating the unknown result for subsequent predictions.
  • a non-transitory computer readable medium containing instructions that, when executed by at least one processor, cause a computer system to execute a method for privacy-enabled biometric analysis is provided.
  • the method comprises instantiating a classification component comprising a deep neural network (“DNN”) configured to classify feature vector and label inputs during training and return a label for person identification or an unknown result during prediction, and a feature vector generation component comprising a pre-trained neural network; generating, by the feature vector generation component, Euclidean measurable feature vectors as an output of at least one layer in the pre-trained neural network responsive to input of an unencrypted biometric input; accepting, by the classification component, as an input feature vectors that are Euclidean measurable generated by the feature vector generation component and a label for training the classification network, and Euclidean measurable feature vectors for prediction functions with the classification network; and classifying, by the classification component executed on at least one processor, the feature vector inputs and the label during training.
  • a privacy-enabled biometric system comprising a feature vector generation means comprising a pre-trained neural network configured to generate Euclidean measurable feature vectors responsive to an unencrypted biometric input; a classification means comprising a deep neural network (“DNN”) configured to: classify feature vector and label inputs and return a label for person identification or an unknown result for training; and accept feature vectors that are Euclidean measurable as inputs and return a label for person identification or an unknown result for prediction.
  • a privacy-enabled biometric system comprising at least one processor operatively connected to a memory; a classification component executed by the at least one processor, including a classification network having a deep neural network (“DNN”) configured to classify feature vector and label inputs during training and return a label for person identification or an unknown result during prediction, wherein the classification component is further configured to accept as an input feature vectors that are Euclidean measurable; the classification network having an architecture comprising a plurality of layers: at least one layer comprising nodes associated with feature vectors, the at least one layer having an initial number of identification nodes and a subset of the identification nodes that are unassigned; the system responsive to input of biometric information for a new user is configured to trigger an incremental training operation for the classification network integrating the new biometric information into a respective one of the unallocated identification nodes usable for subsequent matching.
  • the system is configured to monitor allocation of the unallocated identification nodes and trigger a full retraining of the classification network responsive to assignment of the subset of unallocated nodes.
  • the system is configured to execute a full retraining of the classification network to include additional unallocated identification nodes for subsequent incremental retraining of the DNN.
  • the system iteratively fully retrains the classification network upon depletion of unallocated identification nodes with additional unallocated nodes for subsequent incremental training.
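A minimal sketch of the unallocated-slot bookkeeping described in the embodiments above, using illustrative names and a simple counter (the claims do not prescribe this structure): new users consume spare identification slots via incremental training, and depletion of the spares signals that a full retrain with a fresh pool is needed.

```python
class ClassSlotMonitor:
    """Track allocation of identification class slots in the
    classification network. Names and return values are illustrative
    assumptions for this sketch."""

    def __init__(self, initial_labels, unallocated):
        self.assigned = list(initial_labels)
        self.unallocated = unallocated  # spare slots reserved for incremental training

    def enroll(self, label):
        """Incrementally assign a new user to a spare slot.

        Returns "incremental" while spares remain; returns "full_retrain"
        when spares are depleted, so the caller can fully retrain the
        network and provision additional unallocated slots."""
        if self.unallocated == 0:
            return "full_retrain"
        self.assigned.append(label)
        self.unallocated -= 1
        return "incremental"
```

Incremental enrollment into a pre-allocated slot avoids rebuilding the network architecture for each new user; only slot depletion triggers the costlier full retraining.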
  • the system is further configured to monitor matching of new biometric information to existing identification nodes in the classification network.
  • the system is further configured to trigger integration of new biometric information into existing identification nodes responsive to exceeding a threshold associated with matching new biometric information.
  • the pre-trained network is further configured to generate one way homomorphic, Euclidean measurable, feature vectors for the individual.
  • the classification component is further configured to return a set of probabilities for matching a set of existing labels.
  • the classification component is further configured to predict an outcome based on a trained model, a set of inputs for the prediction and a result of a class or unknown (all returned values dictating UNKNOWN).
  • the classification component is further configured to accept the feature vector inputs from a neural network model that generates Euclidean measurable feature vectors.
  • the classification component is further configured to extract the feature vectors from layers in the neural network model.
  • the system further comprises a feature vector component executed by the at least one processor comprising a neural network.
  • the feature vector component is configured to extract the feature vectors from layers during execution of the neural network.
  • the neural network comprises a set of layers wherein one layer outputs Euclidean measurable feature vectors.
  • the system further comprises a retraining component configured to monitor a number of new input feature vectors or matches of new biometric information to a label and trigger retraining by the classification component on the new biometric information for the label. This can be additional training on a person, using predicted biometrics, that continues training as a biometric changes over time.
  • the system may be configured to do this based on a certain number of consecutive predictions or may do it chronologically, say once every six months.
  • the classification component is configured to retrain the neural network on addition of new feature vectors.
  • the neural network is initially trained with unallocated people classifications, and the classification component is further configured to incrementally retrain the neural network to accommodate new people using the unallocated classifications.
  • the system further comprises a retraining component configured to: monitor a number of incremental retrainings; and trigger the classifier component to fully retrain the neural network responsive to allocation of the unallocated classifications.
  • the classification component is configured to fully retrain the neural network to incorporate unallocated people classifications, and incrementally retrain for new people using the unallocated classifications.
  • the classification component further comprises multiple neural networks for processing respective types of biometric information.
  • the classification component is further configured to generate an identity of a person responsive to at least two probable biometric indicators that may be used simultaneously or as part of a “voting” algorithm.
  • a computer implemented method for privacy-enabled biometric analysis comprises instantiating, by at least one processor, a classification component comprising a classification network having a deep neural network (“DNN”) configured to classify feature vector and label inputs during training and return a label for person identification or an unknown result during prediction, wherein the classification component is further configured to accept as an input feature vectors that are Euclidean measurable and return the unknown result or the label as output; wherein instantiating the classification component includes an act of allocating, within at least one layer of the classification network, an initial number of classes and a subset of the class slots that are unassigned; and triggering, responsive to input of biometric information for a new user, an incremental training operation for the classification network integrating the new biometric information into a respective one of the unallocated class slots usable for subsequent matching.
  • the method further comprises acts of accepting, by the classification component, as an input feature vectors that are Euclidean measurable generated by a feature vector generation component; classifying, by the classification component executed on at least one processor, the feature vector inputs; and returning, by the classification component, a label for person identification or an unknown result.
  • the method further comprises acts of instantiating a feature vector generation component comprising a pre-trained neural network; and generating, by the feature vector generation component Euclidean measurable feature vectors as an output of a least one layer in the pre-trained neural network responsive to input of an unencrypted biometric input.
  • the method further comprises an act of monitoring, by the at least one processor, allocation of the unallocated identification classes and triggering an incremental retraining of the classification network responsive to assignment of the subset of unallocated nodes to provide additional unallocated classes.
  • the method further comprises an act of monitoring, by the at least one processor, allocation of the unallocated identification nodes and triggering a full or incremental retraining of the classification network responsive to assignment of the subset of unallocated nodes.
  • the method further comprises an act of executing a full retraining of the classification network to include additional unallocated classes for subsequent incremental retraining of the DNN.
  • the method further comprises an act of fully retraining the classification network iteratively upon depletion of unallocated identification nodes, the full retraining including an act of allocating additional unallocated nodes for subsequent incremental training.
  • the method further comprises an act of monitoring matching of new biometric information to existing identification nodes.
  • the method further comprises an act of triggering integration of new biometric information into existing identification nodes responsive to exceeding a threshold associated with matching new biometric information.
  • the method further comprises an act of generating one way homomorphic, Euclidean measurable, labels for person identification responsive to input of Euclidean measurable feature vectors for the individual by the classification component.
  • a non-transitory computer readable medium containing instructions that, when executed by at least one processor, cause a computer system to execute a method comprising: instantiating a classification component comprising a classification network having a deep neural network (“DNN”) configured to classify feature vector and label inputs during training and return a label for person identification or an unknown result during prediction, wherein the classification component is further configured to accept as an input feature vectors that are Euclidean measurable and return the unknown result or the label as output; wherein instantiating the classification component includes an act of allocating, within at least one layer of the classification network, an initial number of classes and a subset of additional classes that are unassigned; and triggering, responsive to input of biometric information for a new user, an incremental training operation for the classification network integrating the new biometric information into a respective one of the unallocated identification nodes usable for subsequent matching.
  • the computer readable medium contains instructions to perform any of the method steps above, individually, in combination, or in any combination.
  • a privacy-enabled biometric system comprising at least one processor operatively connected to a memory; a classification component executed by the at least one processor, comprising a classification network having a deep neural network configured to classify Euclidean measurable feature vectors and label inputs for person identification during training, and accept as an input feature vectors that are Euclidean measurable and return an unknown result or the label as output; and an enrollment interface configured to accept biometric information and trigger the classification component to integrate the biometric information into the classification network.
  • the enrollment interface is accessible via a URI, and is configured to accept unencrypted biometric information and personally identifiable information (“PII”).
  • the enrollment interface is configured to link the PII to a one way homomorphic encryption of an unencrypted biometric input.
  • the enrollment interface is configured to trigger deletion of the unencrypted biometric information.
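The enrollment flow above can be sketched as follows. SHA-256 stands in here for the one-way transformation actually produced by the embedding network (a hash is one-way but, unlike the described embeddings, not Euclidean measurable or homomorphic), and the record layout and function name are illustrative assumptions:

```python
import hashlib

def enroll(pii, unencrypted_biometric):
    """Sketch of enrollment: derive a one-way representation of the
    biometric, link the PII to it, then discard the plaintext.
    `unencrypted_biometric` is raw bytes (e.g., an image capture)."""
    one_way = hashlib.sha256(unencrypted_biometric).hexdigest()
    record = {"pii": pii, "biometric_ref": one_way}
    # Drop the local reference to the plaintext; a submitting device
    # would likewise delete its copy after enrollment.
    del unencrypted_biometric
    return record
```

Because only the one-way value is retained, a later compromise of the stored record cannot recover the original biometric.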
  • the system is further configured to enroll an individual for biometric authentication; and the classification component is further configured to accept input of Euclidean measurable feature vectors for person identification during prediction.
  • the classification component is further configured to return a set of probabilities for matching a feature vector.
  • the classification component is further configured to predict an outcome based on a trained model, a set of inputs for the prediction and a result of a class (persons) or UNKNOWN (all returned values dictating UNKNOWN).
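A hedged sketch of this prediction rule, where all returned values falling below a confidence threshold dictate UNKNOWN (the 0.5 threshold and the function signature are illustrative assumptions, not part of the claims):

```python
def predict_label(probabilities, labels, threshold=0.5):
    """Map a classifier's output probabilities to a person label or
    UNKNOWN. `probabilities` and `labels` are parallel lists; when all
    returned values fall below `threshold`, the result is UNKNOWN."""
    best = max(range(len(probabilities)), key=lambda i: probabilities[i])
    if probabilities[best] < threshold:
        return "UNKNOWN"
    return labels[best]
```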
  • the system further comprises an interface configured to accept a biometric input and return an indication of known or unknown to a requesting entity.
  • the requesting entity includes any one or more of: an application, a mobile application, a local process, a remote process, a method, and a business object.
  • the classification component further comprising multiple classification networks for processing different types of biometric information.
  • the classification component is further configured to match an identity of a person responsive to at least two probable biometric indicators that may be used simultaneously or as part of a voting algorithm.
  • the classification network further comprises an input layer for accepting feature vectors of a number of dimensions, the input layer having a number of classes at least equal to the number of dimensions of the feature vector input, a first and second hidden layer, and an output layer that generates an array of values.
  • a computer implemented method for privacy-enabled biometric analysis comprises instantiating, by at least one processor, a classification component comprising a full deep neural network configured to classify feature vectors that are Euclidean measurable and label inputs for person identification during training, and accept as an input feature vectors that are Euclidean measurable and return an unknown result or the label as output during prediction, and an enrollment interface; accepting, by the enrollment interface, biometric information associated with a new individual; triggering the classification component to train the classification network on feature vectors derived from the biometric information and a label for subsequent identification; and returning the label for subsequent identification.
  • instantiating the enrollment interface includes hosting a portal accessible via a URI, and accepting biometric information and personally identifiable information (“PII”) through the portal.
  • the method further comprises linking the PII to a one way homomorphic encryption of an unencrypted biometric input.
  • the method further comprises triggering deletion of unencrypted biometric information on a submitting device.
  • the method further comprises enrolling individuals for biometric authentication; and mapping labels and respective feature vectors for person identification, responsive to input of Euclidean measurable feature vectors and a label for the individual.
  • the method further comprises returning a set of probabilities for matching a set of existing labels.
  • the method further comprises predicting an outcome based on a trained model, a set of inputs for the prediction and a result of a class (e.g., persons) or unknown (e.g., all returned values dictating UNKNOWN).
  • the method further comprises accepting, via an authentication interface, a biometric input and returning an indication of known or unknown to a requesting entity.
  • the requesting entity includes any one or more of: an application, a mobile application, a local process, a remote process, a method, and a business object.
  • the method further comprises processing different types of biometric information using multiple classification networks.
  • the method further comprises generating an identity of a person responsive to at least two probable biometric indicators that may be used simultaneously or as part of a voting algorithm.
  • the classification network further comprises an input layer for accepting feature vectors of a number of dimensions, the input layer having a number of classes at least equal to the number of dimensions of the feature vector input, a second hidden layer of at least twice the number of input dimensions, and an output layer that generates an array of values.
  • the fully connected neural network further comprises an input layer for accepting feature vectors of a number of dimensions, the input layer having a number of nodes at least equal to the number of dimensions of the feature vector input, a first hidden layer of at least 500 dimensions, a second hidden layer of at least twice the number of input dimensions, and an output layer that generates an array of values that, based on their position and values, determine the label or unknown result.
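The layer sizing above can be sketched as follows. The layer dimensions follow the text (input sized to the feature vector, a first hidden layer of at least 500 dimensions, a second hidden layer of at least twice the input dimensions, an output array of class values); the random initialization, ReLU activation, and helper names are assumptions for illustration only:

```python
import numpy as np

def build_layers(feature_dims, num_classes, rng=np.random.default_rng(0)):
    """Lay out the fully connected classifier's weight matrices:
    input -> hidden (>= 500) -> hidden (>= 2x input) -> output."""
    sizes = [feature_dims, max(500, feature_dims), 2 * feature_dims, num_classes]
    return [rng.standard_normal((m, n)) * 0.01
            for m, n in zip(sizes[:-1], sizes[1:])]

def forward(weights, x):
    """Run a feature vector through the layers; the output array's
    positions and values determine the label or unknown result."""
    for w in weights[:-1]:
        x = np.maximum(0, x @ w)  # ReLU on the hidden layers
    return x @ weights[-1]
```

For a FACENET-style 128-dimension feature vector and 10 enrolled identities, this yields weight shapes (128, 500), (500, 256), and (256, 10).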
  • FIG. 1 is an example process flow for classifying biometric information, according to one embodiment
  • FIG. 2A is an example process flow for authentication with secured biometric data, according to one embodiment
  • FIG. 2B is an example process flow for one to many matching execution, according to one embodiment
  • FIG. 3 is a block diagram of an embodiment of a privacy-enabled biometric system, according to one embodiment
  • FIGS. 4A-D are diagrams of embodiments of a fully connected neural network for classification
  • FIGS. 5A-D illustrate example processing steps and example outputs during identification, according to one embodiment
  • FIG. 6 is a block diagram of an embodiment of a special purpose computer system program to execute the processes and/or functions described herein;
  • FIG. 7 is a block diagram of an embodiment of a privacy-enabled biometric system with liveness validation, according to one embodiment
  • FIGS. 8A-B are a table showing comparative considerations of example implementations, according to various embodiments.
  • FIG. 9 is an example process for determining identity and liveness, according to one embodiment.
  • FIG. 10 is an example process for determining identity and liveness, according to one embodiment.
  • the system is configured to provide one to many search and/or matching on encrypted biometrics in polynomial time.
  • the system takes input biometrics and transforms the input biometrics into feature vectors (e.g., a list of floating point numbers (e.g., 64, 128, 256, or within a range of at least 64 and 10240, although some embodiments can use more feature vectors)).
  • the number of floating point numbers in each list depends on the machine learning model being employed to process input encrypted biometric information. For example, the known FACENET model by GOOGLE generates a feature vector list of 128 floating point numbers, but other embodiments use models that generate feature vectors of different sizes (e.g., different-length lists of floating point numbers).
  • each feature vector is Euclidean measurable when output.
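A minimal sketch of what "Euclidean measurable" enables: the distance between two embeddings reflects biometric similarity, so matching reduces to a distance comparison. The 1.0 match threshold below is an illustrative assumption, not a value from the disclosure:

```python
import math

def euclidean_distance(v1, v2):
    """Distance between two feature vectors (e.g., FACENET-style lists
    of 128 floats); smaller distance means more similar biometrics."""
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(v1, v2)))

def is_match(v1, v2, threshold=1.0):
    """Two embeddings are deemed the same person when their Euclidean
    distance falls below the (assumed) threshold."""
    return euclidean_distance(v1, v2) < threshold
```

Because these comparisons operate on the encrypted embeddings themselves, the original biometric never needs to be decrypted or retained.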
  • the input (e.g., the biometric) to the model can be encrypted using a neural network to output a homomorphic encrypted value.
  • the inventors have created a first neural network for processing plain or unencrypted voice input.
  • the voice neural network is used to accept unencrypted voice input and to generate embeddings or feature vectors that are encrypted and Euclidean measurable for use in training another neural network.
  • the first voice neural network generates encrypted embeddings that are used to train a second neural network, that once trained can generate predictions on further voice input (e.g., match or unknown).
  • the feature vectors generated for voice can be a list of 64 floating point numbers, but similar ranges of floating points numbers to the FACENET implementations (discussed in greater detail below) can also be used (e.g., 32 floating point numbers up to 10240 floating point numbers, among other options).
  • by executing on embeddings or feature vectors that are encrypted and Euclidean measurable, the system operates in a privacy-preserving manner.
  • These encryptions (e.g., one way homomorphic encryptions) can be used in encrypted operations (e.g., addition, multiplication, comparison, etc.).
  • the original or input biometric can simply be discarded, and does not represent a point of failure for security thereafter.
  • implementing one way encryptions eliminates the need for encryption keys that can likewise be compromised. This is a failing of many conventional systems.
  • the privacy enabled by use of encrypted biometrics can be further augmented with liveness detection to prevent faked or spoofed biometric credentials from being used.
  • the system can analyze an assurance factor derived from randomly selected instances (e.g., selected by the system) of a biometric input, to determine that input biometric information matches the set of randomly selected instances of the biometric input.
  • the assurance factor and respective execution can be referred to as a “liveness” test.
  • the authentication system can validate the input of biometric information for identity and provide assurance the biometric information was not faked via liveness testing.
  • Fig. 7 is a block diagram of an example privacy-enabled biometric system 704 with liveness validation.
  • the system can be installed on a mobile device or called from a mobile device (e.g., on a remote server or cloud based resource) to return an authenticated or not signal.
  • system 704 can execute any of the following processes. For example, system 704 can enroll users (e.g., via process 100), identify enrolled users (e.g., process 200), and search for matches to users (e.g., process 250).
  • system 704 includes multiple pairs of neural networks, where each pair includes a processing neural network for accepting an unencrypted biometric input (e.g., images or voice, etc.) and generating an encrypted embedding or feature vector.
  • Each pair can include a classification neural network that can be trained on the encrypted feature vectors to classify the encrypted information with labels, and that is further used to predict a match to the trained labels or an unknown class based on subsequent input of encrypted feature vectors.
  • system 704 can accept, create or receive original biometric information (e.g., input 702).
  • the input 702 can include images of people, images of faces, thumbprint scans, voice recordings, sensor data, etc.
  • the voice inputs can be requested by the system, and correspond to a set of randomly selected biometric instances (including for example, randomly selected words).
  • the inputs can be processed for identity matching and in conjunction the inputs can be analyzed to determine matching to the randomly selected biometric instances for liveness verification.
  • the system 704 can also be architected to provide a prediction on input of an encrypted feature vector, and another system or component can accept unencrypted biometrics and/or generate encrypted feature vectors, and communicate the same for processing.
  • the system can include a biometric processing component 708.
  • a biometric processing component e.g., 708 can be configured to crop received images, sample voice biometrics, eliminate noise from microphone captures, etc., to focus the biometric information on distinguishable features (e.g., automatically crop image around face, eliminate background noise for voice sample, etc.).
  • Various forms of preprocessing can be executed on the received biometrics, and the pre-processing can be executed to limit the biometric information to important features or to improve identification by eliminating noise, reducing an analyzed area, etc.
  • the pre-processing (e.g., via 708) is not executed or not available. In other embodiments, only biometrics that meet quality standards are passed on for further processing.
  • Processed biometrics can be used to generate additional training data, for example, to enroll a new user, and/or train a classification component/network to perform predictions.
  • the system 704 can include a training generation component 710, configured to generate new biometrics for use in training to identify a user.
  • the training generation component 710 can be configured to create new images of the user’s face or voice having different lighting, different capture angles, etc., different samples, filtered noise, introduced noise, etc., in order to build a larger training set of biometrics.
  • the system includes a training threshold specifying how many training samples to generate from a given or received biometric.
  • the system and/or training generation component 710 is configured to build twenty-five additional images from a picture of a user’s face. Other numbers of training images, or voice samples, etc., can be used.
  • additional voice samples can be generated from an initial set of biometric inputs to create a larger set of training samples for training a voice network (e.g., via 710)
  • the system is configured to generate encrypted feature vectors from the biometric input (e.g., process images from input and generated training images, process voice inputs and/or voice samples, among other options).
  • the system 704 can include an embedding component 712 configured to generate encrypted embeddings or encrypted feature vectors (e.g., image feature vectors, voice feature vectors, health data feature vectors, etc.).
  • component 712 executes a convolutional neural network (“CNN”) to process image inputs (for example, facial images), where the CNN includes a layer which generates Euclidean measurable output.
  • the embedding component 712 can include multiple neural networks each tailored to specific biometric inputs, and configured to generate encrypted feature vectors (e.g., for captured images, for voice inputs, for health measurements or monitoring, etc.).
  • the system can be configured to require biometric inputs of various types, and pass each type of input to respective neural networks for processing to capture respective encrypted feature vectors, among other options.
  • one or more processing neural networks is instantiated as part of the embedding component 712, and the respective neural network processes unencrypted biometric inputs to generate encrypted feature vectors.
  • the processing neural network is a convolutional neural network constructed to create encrypted embeddings from unencrypted biometric input.
  • encrypted feature vectors can be extracted from a neural network at the layers preceding a softmax layer (including for example, the n-1 layer).
  • various neural networks can be used to define embeddings or feature vectors, with each tailored to an analyzed biometric (e.g., voice, image, health data, etc.), where an output of the model is Euclidean measurable.
  • Some examples of these neural networks include a model having a softmax layer. Other embodiments use a model that does not include a softmax layer to generate Euclidean measurable feature vectors.
  • Various embodiments of the system and/or embedding component are configured to generate and capture encrypted feature vectors for the processed biometrics in the layer or layers preceding the softmax layer.
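  • The extraction of embeddings from the layer preceding the softmax layer can be sketched as follows. This is an illustrative numpy toy, not the described CNN: the weights, layer sizes, and `forward` function are hypothetical, and only illustrate that the n-1 layer activations (rather than the softmax probabilities) serve as the feature vector.

```python
import numpy as np

def forward(x, W1, b1, W2, b2):
    """Toy two-layer network: returns (embedding, class probabilities).

    The embedding is taken from the layer preceding softmax (the n-1
    layer); the final softmax layer is used only for classification.
    """
    h = np.maximum(0.0, x @ W1 + b1)      # hidden layer (n-1): the embedding
    logits = h @ W2 + b2                  # final layer feeding softmax
    e = np.exp(logits - logits.max())     # numerically stable softmax
    return h, e / e.sum()

rng = np.random.default_rng(0)
W1, b1 = rng.normal(size=(8, 4)), np.zeros(4)   # 8 inputs -> 4-dim embedding
W2, b2 = rng.normal(size=(4, 3)), np.zeros(3)   # embedding -> 3 classes

embedding, probs = forward(rng.normal(size=8), W1, b1, W2, b2)
```

In this sketch the `embedding` array is what would be captured and passed downstream, while `probs` is discarded during feature extraction.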
  • the system 704 can include a classifier component 714.
  • the classifier component can include one or more deep neural networks trained on encrypted feature vector and label inputs for respective users and their biometric inputs.
  • the trained neural network can then be used during prediction operations to return a match to a person (e.g., from among a group of labels and people (one-to-many matching) or against a singular person (one-to-one matching)) or to return a match to an unknown class.
  • the feature vectors from the embedding component 712 or system 704 are used by the classifier component 714 to bind a user to a classification (i.e., mapping biometrics to a matchable/searchable identity).
  • In one example, the classifier component includes a deep learning neural network (e.g., an enrollment and prediction network) implemented as a fully connected neural network (“FCNN”).
  • the FCNN generates an output identifying a person or indicating an UNKNOWN individual (e.g., at 706).
  • Other examples can implement different neural networks for classification and return a match or unknown class accordingly.
  • the classifier is a neural network but does not require a fully connected neural network.
  • a deep learning neural network (e.g., which can be an FCNN) must differentiate between known persons and the UNKNOWN.
  • the deep learning neural network can include a sigmoid function in the last layer that outputs probability of class matching based on newly input biometrics or that outputs values showing failure to match.
  • Other examples achieve matching based on executing a hinge loss function to establish a match to a label/person or an unknown class.
  • system 704 and/or classifier component 714 are configured to generate a probability to establish when a sufficiently close match is found.
  • an unknown person is determined based on negative return values (e.g., the model is tuned to return negative values for no match found).
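  • As a sketch of this matching convention (the label list is hypothetical; the real classifier is a trained DNN producing the score array), the output can be interpreted as follows:

```python
import numpy as np

def predict_identity(scores, labels):
    """Map a classifier's output array to a label or UNKNOWN.

    Per the described tuning, all-negative scores indicate no match;
    otherwise the position of the maximum score selects the label.
    """
    scores = np.asarray(scores, dtype=float)
    if (scores < 0).all():          # model tuned to return negatives for no match
        return "UNKNOWN"
    return labels[int(scores.argmax())]

labels = ["alice", "bob", "carol"]   # hypothetical enrolled identities
```

For example, `predict_identity([-0.2, 1.7, 0.1], labels)` returns `"bob"`, while an all-negative array maps to the unknown class.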
  • multiple matches can be developed by the classifier component 714 and voting can also be used to increase accuracy in matching.
  • Various implementations of the system (e.g., 704) have the capacity to use this approach for more than one set of input.
  • the approach itself is biometric agnostic.
  • Various embodiments employ encrypted feature vectors that are Euclidean measurable, generation of which is handled using the first neural network or a respective first network tailored to a particular biometric.
  • different neural networks are instantiated to process different types of biometrics.
  • the vector generating neural network may be swapped for, or used in conjunction with, a different neural network, where each is capable of creating a Euclidean measurable encrypted feature vector based on the respective biometric.
  • the system may enroll using two or more biometric types (e.g., use two or more vector generating networks) and predict on the feature vectors generated for both types of biometrics, using both neural networks to process the respective biometric types, which can also be done simultaneously.
  • feature vectors from each type of biometric can likewise be processed in respective deep learning networks configured to predict matches based on the feature vector inputs (or return unknown).
  • the cogenerated results (e.g., one from each biometric type) may be used to identify a user via a voting scheme, and performance can be improved by executing multiple predictions simultaneously.
  • identification is coupled with liveness testing to ensure that biometric inputs are not, for example, being recorded and replayed for verification.
  • the system 704 can include a liveness component 718.
  • the liveness component can be configured to generate a random set of biometric instances that the system requests a user submit.
  • the random set of biometric instances can serve multiple purposes.
  • the biometric instances provide a biometric input that can be used for identification, and can also be used for liveness (e.g., validate matching to random selected instances).
  • the system can provide an authentication indication or provide access or execution of a requested function. Further embodiments can require multiple types of biometric input for identification, and couple identification with liveness validation. In yet other embodiments, liveness testing can span multiple biometric inputs as well.
  • the liveness component 718 is configured to generate a random set of words that provide a threshold period of voice data from a user requesting authentication.
  • the system is configured to require a five second voice signal for processing, and the system can be configured to select the random biometric instances accordingly.
  • Other thresholds can be used (e.g., one, two, three, four, six, seven, eight, nine seconds or fractions thereof, among other examples), each having respective random selections that are associated with a threshold period of input.
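  • A minimal sketch of selecting random biometric instances to meet a duration threshold (the word list and the assumed per-word speaking time are illustrative, not from the source):

```python
import random

# Assumed average speaking time per word; a real system would calibrate this.
SECONDS_PER_WORD = 0.4

def random_word_challenge(words, threshold_seconds, seed=None):
    """Select random words until the estimated spoken duration
    meets the configured threshold (e.g., five seconds of voice)."""
    rng = random.Random(seed)
    challenge = []
    while len(challenge) * SECONDS_PER_WORD < threshold_seconds:
        challenge.append(rng.choice(words))
    return challenge

vocab = ["river", "orange", "candle", "monkey", "planet", "silver"]
challenge = random_word_challenge(vocab, threshold_seconds=5.0, seed=1)
```

Shorter or longer thresholds simply change how many words are drawn.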
  • the system can be configured to incorporate new identification classes responsive to receiving new biometric information.
  • the system 704 includes a retraining component configured to monitor a number of new biometrics (e.g., per user/identification class or by total number of new biometrics) and automatically trigger a re-enrollment with the new feature vectors derived from the new biometric information (e.g., produced by 712).
  • the system can be configured to trigger re-enrollment on new feature vectors based on time or time period elapsing.
  • the system 704 and/or retraining component 716 can be configured to store feature vectors as they are processed, and retain those feature vectors for retraining (including for example feature vectors that are unknown to retrain an unknown class in some examples).
  • Various embodiments of the system are configured to incrementally retrain the classification model (e.g., classifier component 714 and/or a DNN) on system-assigned numbers of newly received biometrics. Further, once a system-set number of incremental re-trainings has occurred, the system is configured to complete a full retrain of the model.
  • the incremental retrain execution avoids the conventional approach of fully retraining a neural network to recognize new classes and generate new identifications and/or to incorporate new feature vectors as they are input. Incremental retraining of an existing model to include a new identification without requiring a full retraining provides significant execution efficiency benefits over conventional approaches.
  • the variables for incremental retraining and full retraining can be set on the system via an administrative function. Some defaults include incremental retrain every 3, 4, 5, 6, etc., identifications, and full retrain every 3, 4, 5, 6, 7, 8, 9, 10, etc., incremental retrains. Additionally, this requirement may be met by using calendar time, such as retraining once a year. These operations can be performed on offline (e.g., locked) copies of the model, and once complete, the offline copy can be made live.
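  • The retrain cadence described above can be sketched as a simple counter (the defaults of an incremental retrain every 5 identifications and a full retrain every 10 incrementals are assumptions chosen from the listed examples):

```python
def retrain_schedule(new_identifications, incr_every=5, full_every=10):
    """Yield ('incremental' | 'full') retrain events as identifications arrive.

    Incremental retrains fire every `incr_every` new identifications;
    every `full_every`-th incremental retrain is promoted to a full retrain.
    """
    events = []
    incrementals = 0
    for n in range(1, new_identifications + 1):
        if n % incr_every == 0:
            incrementals += 1
            if incrementals % full_every == 0:
                events.append(("full", n))      # full retrain of the model
            else:
                events.append(("incremental", n))
    return events

events = retrain_schedule(60)
```

A calendar-based trigger (e.g., retraining once a year) could be layered on top of the same counters.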
  • the system 704 and/or retraining component 716 is configured to update the existing classification model with new users/identification classes.
  • the system builds a classification model for an initial number of users, which can be based on an expected initial enrollment.
  • the model is generated with empty or unallocated spaces to accommodate new users. For example, a fifty user base is generated as a one hundred user model. This over-allocation in the model enables incremental training to be executed to incorporate, for example, new classes without fully retraining the classification model.
  • the system and/or retraining component 716 is configured to incrementally retrain the classification model - ultimately saving significant computation time over conventional retraining executions.
  • a full retrain with an additional over allocation can be made (e.g., fully retrain the 100 classes to a model with 150 classes).
  • an incremental retrain process can be executed to add additional unallocated slots.
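  • A sketch of the over-allocation bookkeeping (class names and the growth increment are illustrative; the text’s example of a fifty user base in a one hundred class model is used):

```python
class OverAllocatedModel:
    """Track class-slot allocation for a model built with spare capacity.

    A 50-user base generated as a 100-class model leaves 50 unallocated
    slots; enrolling into a free slot needs only an incremental retrain,
    while a full slate forces a full retrain with a larger allocation.
    """
    def __init__(self, enrolled, capacity):
        self.labels = list(enrolled)
        self.capacity = capacity

    def enroll(self, label, growth=50):
        if len(self.labels) < self.capacity:
            self.labels.append(label)
            return "incremental"        # free slot: incremental retrain only
        self.capacity += growth          # full retrain with extra allocation
        self.labels.append(label)
        return "full"

model = OverAllocatedModel(enrolled=[f"user{i}" for i in range(50)],
                           capacity=100)
```

Enrolling while slots remain costs an incremental retrain; the 101st user triggers the full retrain (e.g., 100 classes grown to 150).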
  • the system can be configured to operate with multiple copies of the classification model.
  • One copy may be live and used for authentication or identification.
  • a second copy may be an updated version that is taken offline (e.g., locked from access) to accomplish retraining while permitting identification operations to continue with a live model.
  • the updated model can be made live and the other model locked and updated as well. Multiple instances of both live and locked models can be used to increase concurrency.
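  • The live/locked model rotation can be sketched as follows (a model is represented here as a plain dict for illustration; a real system would swap trained network instances):

```python
class ModelPair:
    """Keep one live model serving predictions while a locked copy retrains."""

    def __init__(self, model):
        self.live = dict(model)
        self.offline = dict(model)   # locked copy used for retraining

    def retrain_offline(self, updates):
        self.offline.update(updates)         # retraining touches only the locked copy

    def promote(self):
        """Make the retrained copy live, then lock and update the old live copy."""
        self.live, self.offline = self.offline, self.live
        self.offline.update(self.live)       # bring the new locked copy up to date

pair = ModelPair({"version": 1})
pair.retrain_offline({"version": 2})
pair.promote()
```

Multiple instances of both live and locked copies could follow the same pattern to increase concurrency.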
  • system 700 can receive feature vectors instead of original biometrics and processing original biometrics can occur on different systems - in these cases system 700 may not include, for example, 708, 710, 712, and instead receive feature vectors from other systems, components or processes.
  • an authentication system in establishing identity and authentication is configured to determine if the source presenting the features is, in fact, a live source.
  • In conventional password systems, there is no check for liveness.
  • a typical example of a conventional approach includes a browser where the user fills in the fields for username and password, or saved information is pre-filled in a form on behalf of the user. The browser is not a live feature; rather, the entry of the password is pulled from the browser’s form history and essentially replayed. This is an example of replay, and according to another aspect, many similar challenges exist where biometric input could be copied and replayed.
  • biometrics have the potential to increase security and convenience simultaneously.
  • however, there are issues associated with such implementations, including, for example, liveness.
  • Some conventional approaches have attempted to introduce biometrics - applying the browser example above, an approach can replace authentication information with an image of a person’s face or a video of the face.
  • these conventional systems may be compromised by using a stored image of the face or stored video and replaying for authentication.
  • use of biometrics (e.g., face, voice, fingerprint, etc.) includes the consequence of the biometric potentially being offered in non-live form, thus allowing a replayed biometric to be presented as a plausible input to the system. Without liveness checks, the plausible input will likely be accepted.
  • the inventors have further realized that determining if a biometric is live is an increasingly difficult problem. Examined here are some approaches for resolving the liveness problem, which are treated broadly as two classes of liveness approaches (e.g., liveness may be subdivided into active and passive liveness problem domains). Active liveness requires the user to do something to prove the biometric is not a replica.
  • Table X (Fig. 8A-B) illustrates example implementations that may be employed, and includes analysis of potential issues for various interactions of the example approaches.
  • various ones of the examples in Table X can be combined to reduce inefficiencies (e.g., potential vulnerabilities) in the implementation.
  • the implementation can be used, for example, where the potential for the identified replay attacks can be minimized or reduced.
  • randomly requested biometric instances in conjunction with identity validation on the same random biometric instances provides a high level of assurance of both identity and liveness.
  • the random biometric instances include a set of random words selected for liveness validation in conjunction with voice based identification.
  • an authentication system assesses liveness by asking the user to read a few random words. This can be done in various embodiments, via execution of process 900, Fig. 9.
  • process 900 can begin at 902 with a request to a user to supply a set of random biometric instances.
  • Process 900 continues with concurrent (or, for example, simultaneous) authentication functions - identity and liveness at 904.
  • an authentication system can concurrently or simultaneously process the received voice signal through two algorithms (e.g., liveness algorithm and identity algorithm (e.g., by executing 904 of process 900), returning a result in less than one second.
  • the first algorithm (e.g., liveness) converts the spoken input to text and compares it against the requested random words.
  • the second algorithm uses a prediction function (e.g., a prediction application programming interface (API)) to perform a one-to-many (1:N) identification on a private voice biometric to ensure that the input correctly identifies the expected person.
  • process 900 can return an authentication value for identified and live inputs 906 YES. If either check fails 906 NO, process 900 can return an invalid indicator at 910.
  • In some embodiments, a first factor is face (e.g., image capture) and a second factor is voice (e.g., via a random set of words).
  • Various embodiments of private biometric systems are configured to execute liveness.
  • the system generates random text that is selected to take roughly 5 seconds to speak (in whatever language the user prefers - and with other example threshold minimum periods).
  • the user reads the text and the system (e.g., implemented as a private biometrics cloud service or component) then performs a speech-to-text process, comparing the pronounced text to the requested text.
  • the system allows, for example, a private biometric component to assert the liveness of the requestor for authentication.
  • the system compares the random text voice input and performs an identity assertion on the same input to ensure the voice that spoke the random words matches the user’s identity. For example, input audio is now used for liveness and identity.
  • Fig. 10 is an example process flow 1000 for executing identification and liveness validation.
  • Process 1000 can be executed by an authentication system (e.g., 704, Fig. 7 or 304, Fig. 3).
  • process 1000 begins with generation of a set of random biometric instances (e.g., set of random words) and triggering a request for the set of random words at 1002.
  • process 1000 continues under multiple threads of operation.
  • a first biometric type can be used for a first identification of a user in a first thread (e.g., based on images captured of a user during input of the random words).
  • Identification of the first biometric input can proceed as discussed herein (e.g., process unencrypted biometric input with a first neural network to output encrypted feature vectors, predict a match on the encrypted feature vectors with a DNN, and return an identification or unknown), and as described in, for example, process 200 and/or process 250 below.
  • an identity corresponding to the first biometric or an unknown class is returned.
  • a second biometric type can be used for a second identification of a user in a second thread.
  • the second identification can be based upon a voice biometric.
  • processing of a voice biometric can continue at 1008 with capture of at least a threshold amount of the biometric (e.g., 5 seconds of voice).
  • the amount of voice data used for identification can be reduced at 1010 with biometric pre-processing.
  • voice data can be reduced with execution of pulse code modulation.
  • Various approaches for processing voice data can be applied, including pulse code modulation, amplitude modulation, etc., to convert input voice to a common format for processing.
  • Some example functions that can be applied include Librosa (e.g., to eliminate background sound, normalize amplitude, etc.); pydub (e.g., to convert between mp3 and .wav formats); Librosa (e.g., for phase shift function); Scipy (e.g. to increase low frequency); Librosa (e.g., for pulse code modulation); and/or soundfile (e.g., for read and write sound file operations).
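  • A minimal stand-in for this pre-processing, using only numpy so the sketch is self-contained (a production pipeline would use the librosa/pydub/scipy/soundfile functions listed above):

```python
import numpy as np

def preprocess_voice(signal, gate=0.02):
    """Approximate the described pre-processing: normalize amplitude to
    [-1, 1] and zero out near-silent samples (a crude background-noise
    gate standing in for the library-based noise elimination)."""
    signal = np.asarray(signal, dtype=float)
    peak = np.abs(signal).max()
    if peak > 0:
        signal = signal / peak               # amplitude normalization
    signal[np.abs(signal) < gate] = 0.0      # suppress low-level noise
    return signal

# 1 second of a 440 Hz tone sampled at 8 kHz, with exaggerated amplitude
t = np.linspace(0, 1, 8000, endpoint=False)
clean = preprocess_voice(3.0 * np.sin(2 * np.pi * 440 * t))
```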
  • processed voice data is converted to the frequency domain via a Fourier transform (e.g., fast Fourier transform, discrete Fourier transform, etc.), which can be provided by numpy or scipy libraries.
  • the two dimensional frequency array can be used to generate encrypted feature vectors.
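  • A sketch of producing the two-dimensional frequency array with windowed real FFTs (the window size is an assumption; real implementations may use numpy/scipy spectrogram utilities instead):

```python
import numpy as np

def frequency_array(signal, window=256):
    """Split the signal into non-overlapping windows and apply a real FFT
    to each, yielding the two-dimensional frequency array that serves as
    input to the embedding network."""
    signal = np.asarray(signal, dtype=float)
    n_windows = len(signal) // window
    frames = signal[: n_windows * window].reshape(n_windows, window)
    # Magnitudes only; shape is (n_windows, window // 2 + 1)
    return np.abs(np.fft.rfft(frames, axis=1))

# 1 second of a 512 Hz tone at a 4096 Hz sample rate
t = np.linspace(0, 1, 4096, endpoint=False)
spec = frequency_array(np.sin(2 * np.pi * 512 * t))
```

With a 4096 Hz sample rate and 256-sample windows, each bin spans 16 Hz, so the 512 Hz tone peaks in bin 32 of every window.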
  • voice data is input to a pre-trained neural network to generate encrypted voice feature vectors at 1012.
  • the frequency arrays are used as input to a pre-trained convolutional neural network (“CNN”) which outputs encrypted voice feature vectors.
  • different pre-trained neural networks can be used to output encrypted voice feature vectors from unencrypted voice input.
  • the function of the pre-trained neural network is to output Euclidean measurable encrypted feature vectors upon voice data input.
  • a CNN is constructed with the goal of creating embeddings and not for its conventional purpose of classifying inputs.
  • the CNN can employ a triplet loss function (including, for example, a hard triplet loss function), which enables the CNN to converge more quickly and accurately during training than some other implementations.
  • the CNN is trained on hundreds or thousands of voice inputs. Once trained, the CNN is configured for creation of embeddings (e.g., encrypted feature vectors).
  • the CNN accepts a two dimensional array of frequencies as an input and provides floating point numbers (e.g., 32, 64, 128, 256, 1028, ... floating point numbers) as output.
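  • The basic form of the triplet loss mentioned above can be sketched in numpy (the margin value and the toy 2-D embeddings are illustrative; a trained CNN would produce much longer vectors):

```python
import numpy as np

def triplet_loss(anchor, positive, negative, margin=0.2):
    """Basic triplet loss: pull the anchor toward the positive (same
    identity) and push it from the negative (different identity) until
    the distance gap exceeds the margin."""
    d_pos = np.sum((anchor - positive) ** 2)   # squared Euclidean distance
    d_neg = np.sum((anchor - negative) ** 2)
    return max(0.0, d_pos - d_neg + margin)

a = np.array([0.0, 0.0])
p = np.array([0.1, 0.0])    # same identity: close to the anchor
n = np.array([1.0, 1.0])    # different identity: far from the anchor
```

When the positive is already much closer than the negative, the loss is zero; swapping the roles yields a large positive loss that drives training.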
  • the initial voice capture and processing (e.g., request for random words -1002 - 1012) can be executed on a user device (e.g., a mobile phone) and the resulting encrypted voice feature vector can be communicated to a remote service via an authentication API hosted and executed on cloud resources.
  • the initial processing and prediction operations can be executed on the user device as well.
  • Various execution architectures can be provided, including fully local authentication, fully remote authentication, and hybridization of both options.
  • process 1000 continues with communication of the voice feature vectors to a cloud service (e.g., authentication API) at 1014.
  • the voice feature vectors can then be processed by a fully connected neural network (“FCNN”) for predicting a match to a trained label at 1016.
  • the input to the FCNN is an embedding generated by a first pre-trained neural network (e.g., an embedding comprising 32, 64, 128, 256, 1028, etc. floating point numbers).
  • In various embodiments, the FCNN is initially trained on a threshold number of people for identification (e.g., 500, 750, 1000, 1250, 1500, etc.).
  • the initial training can be referred to as “priming” the FCNN.
  • the priming function is executed to improve accuracy of prediction operations performed by the FCNN.
  • the FCNN returns a result matching a label or an unknown class - i.e., matches to an identity from among a group of candidates or does not match to a known identity.
  • the result is communicated for evaluation of each threads’ result at 1022.
  • the third thread of operation is executed to determine that the input biometrics used for identification are live (i.e., not spoofed, recorded, or replayed). For example, at 1020 the voice input is processed to determine if the input words match the set of random words requested. In one embodiment, a speech recognition function is executed to determine the words input, and matching is executed against the randomly requested words to determine an accuracy of the match. If any unencrypted voice input remains in memory, the unencrypted voice data can be deleted as part of 1020. In various embodiments, processing of the third thread can be executed locally on a device requesting authorization, on a remote server, a cloud resource, or any combination.
  • a recording of the voice input can be communicated to a server or cloud resource as part of 1020, and the accuracy of the match (e.g., input to random words) determined remotely. Any unencrypted voice data can be deleted once encrypted feature vectors are generated and/or once matching accuracy is determined.
  • the results of each thread are joined to yield an authorization or invalidation.
  • the first thread returns an identity or unknown for the first biometric
  • the second thread returns an identity or unknown for the second biometric
  • the third thread returns an accuracy of match between the random set of biometric instances and the input biometric instances.
  • process 1000 provides a positive authentication indication where the first thread identity matches the second thread identity and one of the biometric inputs is determined to be live (e.g., above a threshold accuracy, such as 33% or greater, among other options). If not positive, process 1000 can be re-executed (e.g., a threshold number of times) or a denial can be communicated.
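  • The join of the three threads’ results can be sketched as follows (the function name and invalid handling are assumptions; the 33% liveness threshold is from the example above):

```python
def join_results(identity_a, identity_b, liveness_accuracy, threshold=0.33):
    """Join the three threads: both biometric identifications must agree
    on a known identity, and the liveness match accuracy must clear the
    threshold; otherwise the authentication request is invalid."""
    if identity_a == "UNKNOWN" or identity_a != identity_b:
        return "invalid"                 # identities disagree or unknown
    if liveness_accuracy < threshold:
        return "invalid"                 # liveness check failed
    return identity_a                    # positive authentication
```

For example, `join_results("alice", "alice", 0.9)` authenticates, while a mismatch or low liveness accuracy yields an invalid indicator.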
  • process 1000 can include concurrent and/or simultaneous execution of the authentication threads to return a positive authentication or a denial.
  • process 1000 can be reduced to a single biometric type such that one identification thread and one liveness thread are executed to return a positive authentication or a denial.
  • the various steps described can be executed together or in different order, and may invoke other processes (e.g., to generate encrypted feature vectors to process for prediction) as part of determining identity and liveness of biometric input.
  • additional biometric types can be tested to confirm identity, with at least one liveness test on one of the biometric inputs to provide assurance that submitted biometrics are not replayed or spoofed.
  • multiple biometrics types can be used for identity and multiple biometric types can be used for liveness validation.
  • an authentication system interacts with any application or system needing authentication service (e.g., a Private Biometrics Web Service).
  • the system uses private voice biometrics to identify individuals in a datastore (and provides one to many (1:N) identification) using any language in one second.
  • Various neural networks measure the signals inside of a voice sample with high accuracy and thus allow private biometrics to replace “username” (or other authentication schemes) and become the primary authentication vehicle.
  • the system employs face (e.g., images of the user’s face) as the first biometric and voice as the second biometric type, providing for at least two factor authentication (“2FA”).
  • the system employs voice for identity and liveness as the voice biometric can be captured with the capture of a face biometric. Similar biometric pairings can be executed to provide a first biometric identification, a second biometric identification for confirmation, coupled with a liveness validation.
  • an individual wishing to authenticate is asked to read a few words while looking into a camera and the system is configured to collect the face biometric and voice biometric while the user is speaking.
  • the same audio that created the voice biometric is used (along with the text the user was requested to read) to check liveness and to ensure the identity of the user’s voice matches the face.
  • Such authentication can be configured to augment security in a wide range of environments.
  • Such authentication can support private biometrics (e.g., voice, face, health measurements, etc.) across common identity applications (e.g., “who is on the phone?”), single factor authentication (1FA), call centers, device apps (e.g., phone, watch and TV apps), and physical security devices (e.g., door locks).
  • where additional biometrics can be captured, 2FA or better can provide greater assurance of identity with the liveness validation.
  • After collecting an unencrypted biometric (including, for example, face and voice biometrics), the system creates a private biometric (e.g., encrypted feature vectors) and then discards the original unencrypted biometric template.
  • these private biometrics enable an authentication system and/or process to identify a person (i.e., authenticate a person) while still guaranteeing individual privacy and fundamental human rights by only operating on biometric data in the encrypted space.
  • various embodiments are configured to pre-process the voice signal and reduce the voice data to a smaller form (e.g., without any loss).
  • the Nyquist sampling rate for this example is two times the highest frequency of the signal.
  • the system is configured to sample the resulting data and use this sample as input to a Fourier transform.
  • the resulting frequencies are used as input to a pre-trained voice neural network capable of returning a set of embeddings (e.g., encrypted voice feature vectors). These embeddings, for example, sixty four floating point numbers, provide the system with private biometrics which then serve as input to a second neural network for classification.
  • Fig. 1 is an example process flow 100 for enrolling in a privacy-enabled biometric system (e.g., Fig. 3, 304 described in greater detail below or Fig. 7, 704 above).
  • Process 100 begins with acquisition of unencrypted biometric data (e.g., plaintext, reference biometric, etc.) at 102.
  • a user takes a photo of themselves on their mobile device for enrollment.
  • Preprocessing steps can be executed on the biometric information at 104.
  • pre-processing can include cropping the image to significant portions (e.g., around the face or facial features).
  • the end user can be provided a user interface that displays a reference area, and the user is instructed to position their face from an existing image into the designated area.
  • the identified area can direct the user to focus on their face so that it appears within the highlight area.
  • the system can analyze other types of images to identify areas of interest (e.g., iris scans, hand images, fingerprint, etc.) and crop images accordingly.
  • samples of voice recordings can be used to select data of the highest quality (e.g., lowest background noise), or can be processed to eliminate interference from the acquired biometric (e.g., filter out background noise).
  • a number of additional images can be generated from an acquired facial image.
  • an additional twenty five images are created to form a training set of images.
  • as few as three images can be used, but with the tradeoff of reduced accuracy.
  • as many as forty training images may be created.
  • the training set is used to provide for variation of the initial biometric information, and the specific number of additional training points can be tailored to a desired accuracy (see, e.g., Tables I-VIII below, which provide example implementations and test results).
  • Various ranges of training set production can be used in different embodiments (e.g., any set of images from two to one thousand).
  • the training group can include images of different lighting, capture angle, positioning, etc.
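  • A sketch of generating a training set from one capture (the brightness range and horizontal flip are illustrative stand-ins for the lighting/angle variations described; images are represented as numpy arrays with values in [0, 1]):

```python
import numpy as np

def augment_image(image, count=25, seed=0):
    """Generate `count` training variants of one face image by randomly
    scaling brightness and mirroring, approximating the lighting and
    capture-angle variation used to build the training set."""
    rng = np.random.default_rng(seed)
    variants = []
    for _ in range(count):
        out = image * rng.uniform(0.6, 1.4)      # lighting variation
        if rng.random() < 0.5:
            out = out[:, ::-1]                   # horizontal flip
        variants.append(np.clip(out, 0.0, 1.0))  # keep valid pixel range
    return variants

face = np.full((4, 4), 0.5)          # stand-in for a cropped face image
training_set = augment_image(face)   # e.g., twenty-five variants
```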
  • biometric information includes Initial Biometric Values (“IBV”), a set of plaintext values (pictures, voice, SSNO, driver’s license number, etc.) that together define a person.
  • feature vectors are generated from the initial biometric information (e.g., one or more plain text values that identify an individual). Feature vectors are generated based on all available biometric information, which can include a set of training biometrics generated from the initial unencrypted biometric information received on an individual or individuals.
  • the IBV is used in enrollment and for example in process 100.
  • the set of IBVs are processed into a set of initial biometric vectors (e.g., feature vectors) which are used downstream in a subsequent neural network.
  • users are directed to a website to input multiple data points for biometric information (e.g., multiple pictures including facial images) in conjunction with personally identifiable information (“PII”).
  • the system and/or execution of process 100 can include tying the PII to encryptions of the biometric as discussed below.
  • a convolutional deep neural network is executed to process the unencrypted biometric information and transform it into a feature vector, which has the property of being one-way encrypted cipher text.
  • the neural network is applied (108) to compute a one-way homomorphic encryption of the biometric, resulting in feature vectors (e.g., at 110). These outputs can be computed from an original biometric using the neural network, but the values are one-way in that the neural network cannot then be used to regenerate the original biometrics from the outputs.
  • Various embodiments take as input a neural network capable of taking plaintext input and returning Euclidean measurable output.
  • One such implementation is FaceNet which takes in any image of a face and returns 128 floating point numbers, as the feature vector.
  • the choice of neural network is fairly open ended, where various implementations are configured to return a Euclidean measurable feature vector that maps to the input. This feature vector is nearly impossible to use to recreate the original input biometric and is therefore considered a one-way encryption.
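A Euclidean measurable feature vector means matches can be computed with ordinary distance. A minimal sketch of that comparison (the 128-dimensional vectors stand in for FaceNet-style embeddings, and the 1.0 threshold is purely illustrative, not from the source):

```python
import numpy as np

def euclidean_match(probe, reference, threshold=1.0):
    """Return True when two Euclidean measurable feature vectors are
    within `threshold` distance (smaller distance = closer biometrics)."""
    return float(np.linalg.norm(np.asarray(probe) - np.asarray(reference))) < threshold

# Hypothetical 128-dimensional embeddings (stand-ins for FaceNet output).
rng = np.random.default_rng(0)
enrolled = rng.normal(size=128)
same_person = enrolled + rng.normal(scale=0.01, size=128)   # near-duplicate
other_person = rng.normal(size=128)                         # unrelated vector
```

A deployed system would calibrate the threshold against the particular embedding model rather than hard-code it.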
  • Various embodiments are configured to accept the feature vector(s) produced by a first neural network and use it as input to a new neural network (e.g., a second classifying neural network).
  • the new neural network has additional properties.
  • This neural network is specially configured to enable incremental training (e.g., on new users and/or new feature vectors) and configured to distinguish between a known person and an unknown person.
  • a fully connected neural network with 2 hidden layers and a “hinge” loss function is used to process input feature vectors and return a known person identifier (e.g., person label or class) or indicate that the processed biometric feature vectors are not mapped to a known person.
  • the hinge loss function outputs one or more negative values if the feature vector is unknown.
  • the output of the second neural network is an array of values, wherein the values and their positions in the array determine a match to a person.
  • the feature vector capture is accomplished via a pre-trained neural network (including, for example, a convolutional neural network) where the output is Euclidean measurable.
  • this can include models having a softmax layer as part of the model, and capture of feature vectors can occur preceding such layers.
  • Feature vectors can be extracted from the pre-trained neural network by capturing results from the layers that are Euclidean measurable.
  • the softmax layer or categorical distribution layer is the final layer of the model, and feature vectors can be extracted from the n-1 layer (e.g., the immediately preceding layer).
  • the feature vectors can be extracted from the model in layers preceding the last layer. Some implementations may offer the feature vector as the last layer.
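As a toy illustration of capturing output from the layer preceding the softmax (a minimal NumPy sketch with made-up weights, not FaceNet or the patented network itself):

```python
import numpy as np

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

# Toy "pre-trained" network: 128-d input -> 64-d hidden layer -> 5-class softmax.
rng = np.random.default_rng(1)
W1, b1 = rng.normal(size=(128, 64)) * 0.1, np.zeros(64)
W2, b2 = rng.normal(size=(64, 5)) * 0.1, np.zeros(5)

def forward(x):
    """Run the model; the activations of the layer immediately before the
    softmax (the n-1 layer) are captured as the feature vector."""
    feature_vector = np.maximum(0.0, x @ W1 + b1)   # n-1 layer (ReLU)
    probs = softmax(feature_vector @ W2 + b2)       # final softmax layer
    return probs, feature_vector

probs, fv = forward(rng.normal(size=128))
```

Here the softmax output would normally be discarded; only the penultimate-layer activations serve as the Euclidean measurable encryption of the biometric.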
  • the resulting feature vectors are bound to a specific user classification at 112.
  • deep learning is executed at 112 on the feature vectors based on a fully connected neural network (e.g., a second neural network).
  • the execution is run against all the biometric data (i.e., feature vectors from the initial biometric and training biometric data) to create the classification information.
  • a fully connected neural network having two hidden layers is employed for classification of the biometric data.
  • a fully connected network with no hidden layers can be used for the classification.
  • the use of the fully connected network with two hidden layers generated better accuracy in classification (see, e.g., Tables I-VIII described in greater detail below).
  • process 100 can be executed to receive an original biometric (e.g., at 102), generate feature vectors (e.g., 110), and apply an FCNN classifier to generate a label to identify a person at 112 (e.g., output #people).
  • Process 100 continues with discarding any unencrypted biometric data at 114.
  • an application on the user’s phone is configured to enable enrollment of captured biometric information and configured to delete the original biometric information once processed (e.g., at 114).
  • a server system can process received biometric information and delete the original biometric information once processed. According to some aspects, only requiring that original biometric information exists for a short period during processing or enrollment significantly improves the security of the system over conventional approaches. For example, systems that persistently store or employ original biometric data become a source of vulnerability. Unlike a password that can be reset, a compromised biometric remains compromised, virtually forever.
  • the resulting cipher text (e.g., feature vectors) biometric is stored.
  • the encrypted biometric can be stored locally on a user device.
  • the generated encrypted biometric can be stored on a server, in the cloud, a dedicated data store, or any combination thereof.
  • the biometrics and classification are stored for use in subsequent matching or searching. For instance, new biometric information can be processed to determine if the new biometric information matches any classifications. The match (depending on a probability threshold) can then be used for authentication or validation.
  • the neural network model employed at 112 can be optimized for one to one matching.
  • the neural network can be trained on the individual expected to use a mobile phone (assuming no other authorized individuals for the device).
  • the neural network model can include training allocation to accommodate incremental training of the model on acquired feature vectors over time.
  • Various embodiments, discussed in greater detail below, incorporate incremental training operations for the neural network to permit additional people and to incorporate newly acquired feature vectors.
  • an optimized neural network model (e.g., FCNN) can be used for a primary user of a device, for example, stored locally, and remote authentication can use a data store and one to many models (e.g., if the first model returns unknown). Other embodiments may provide the one to many models locally as well.
  • the authentication scenario (e.g., primary user or not) can be used by the system to dynamically select a neural network model for matching, and thereby provide additional options for processing efficiency.
  • Fig. 2A illustrates an example process 200 for authentication with secured biometric data.
  • Process 200 begins with acquisition of multiple unencrypted biometrics for analysis at 202.
  • the privacy-enabled biometric system is configured to require at least three biometric identifiers (e.g., as plaintext data, reference biometric, or similar identifiers). If, for example, an authentication session is initiated, the process can be executed so that it only continues to the subsequent steps if a sufficient number of biometric samples are taken, given, and/or acquired. The number of required biometric samples can vary, and can be as few as one.
  • the acquired biometrics can be pre-processed at 204 (e.g., images cropped to facial features, voice sampled, iris scans cropped to relevant portions, etc.). Once pre-processing is executed the biometric information is transformed into a one-way homomorphic encryption of the biometric information to acquire the feature vectors for the biometrics under analysis (e.g., at 206). Similar to process 100, the feature vectors can be acquired using any pre-trained neural network that outputs Euclidean measurable feature vectors. In one example, this includes a pre-trained neural network that incorporates a softmax layer.
  • various embodiments do not require the pre-trained neural network to include a softmax layer, only that it output Euclidean measurable feature vectors.
  • the feature vectors can be obtained from the layer preceding the softmax layer as part of step 206.
  • a prediction (e.g., via a deep learning neural network) is executed to determine if there is a match for the person associated with the analyzed biometrics.
  • the prediction can be executed as a fully connected neural network having two hidden layers (during enrollment the neural network is configured to identify input feature vectors as individuals or unknown, and unknown individuals can be added via incremental training or full retraining of the model).
  • a fully connected neural network having no hidden layers can be used. Examples of neural networks are described in greater detail below (e.g., Fig. 4 illustrates an example neural network 400). Other embodiments of the neural network can be used in process 200.
  • the neural network features include: operating as a classifier during enrollment to map feature vectors to identifications; and operating as a predictor to identify a known person or an unknown.
  • different neural networks can be tailored to different types of biometrics, and facial images processed by one, while voice biometrics are processed by another.
  • process 208 is described agnostic to submitter security.
  • process 200 relies on front end application configuration to ensure submitted biometrics are captured from the person trying to authenticate.
  • the process can be executed in local and remote settings in the same manner.
  • the execution relies on the native application or additional functionality in an application to ensure an acquired biometric represents the user to be authenticated or matched.
  • Fig. 2B illustrates an example process flow 250 showing additional details for a one to many matching execution (also referred to as prediction).
  • process 250 begins with acquisition of feature vectors (e.g., step 206 of Fig. 2A or 110 of Fig. 1).
  • the acquired feature vectors are matched against existing classifications via a deep learning neural network.
  • the deep learning neural network has been trained during enrollment on a set of individuals. The acquired feature vectors will be processed by the trained deep learning network to predict if the input is a match to a known individual, or does not match and returns unknown.
  • the deep learning network is a fully connected neural network (“FCNN”). In other embodiments, different network models are used for the second neural network.
  • the FCNN outputs an array of values. These values, based on their position and the value itself, determine the label or unknown. According to one embodiment, returned from a one to many case are a series of probabilities associated with the match. Assuming five people in the trained data, the output layer showing probability of match by person [0.1, 0.9, 0.3, 0.2, 0.1] yields a match on Person 2 based on a threshold set for the classifier (e.g., > 0.5). In another run, the output layer [0.1, 0.6, 0.3, 0.8, 0.1] yields a match on Person 2 & Person 4 (e.g., using the same threshold).
  • the process and/or system is configured to select the maximum value and yield a (probabilistic) match on Person 4.
  • the output layer: [0.1, 0.2, 0.3, 0.2, 0.1] shows no match to a known person - hence an UNKNOWN person - as no values exceed the threshold. Interestingly, this may result in adding the person into the list of authorized people (e.g., via enrollment discussed above), or this may result in the person being denied access or privileges on an application.
  • process 250 is executed to determine if the person is known or not. The functions that result can be dictated by the application that requests identification of an analyzed biometrics.
  • an output layer of an UNKNOWN person looks like [-0.7, -1.7, -6.0, -4.3].
  • the hinge loss function has guaranteed that the vector output is all negative. This is the case of an UNKNOWN person.
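The interpretation of the output array described above can be sketched as follows (a minimal illustration; the 0.5 threshold and 1-indexed person labels follow the examples in the text, and the function name is hypothetical):

```python
def interpret_output(scores, threshold=0.5):
    """Map a classifier output array to a person label or UNKNOWN.
    An all-negative output (as guaranteed by the hinge loss for unknown
    inputs) never exceeds the threshold, so it also returns UNKNOWN."""
    best = max(scores)
    if best <= threshold:          # covers the all-negative hinge case too
        return "UNKNOWN"
    return "Person %d" % (scores.index(best) + 1)

# Examples from the text:
print(interpret_output([0.1, 0.9, 0.3, 0.2, 0.1]))    # Person 2
print(interpret_output([0.1, 0.6, 0.3, 0.8, 0.1]))    # Person 4 (maximum value)
print(interpret_output([0.1, 0.2, 0.3, 0.2, 0.1]))    # UNKNOWN
print(interpret_output([-0.7, -1.7, -6.0, -4.3]))     # UNKNOWN
```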
  • the deep learning neural network must have the capability to determine if a person is UNKNOWN.
  • Other solutions that appear viable, for example, support vector machine (“SVM”) solutions break when considering the UNKNOWN case. In one example, the issue is scalability.
  • the deep learning neural network (e.g., an enrollment & prediction neural network) is configured to train and predict in polynomial time.
  • Step 256 can be executed to vote on matching.
  • multiple images or biometrics are processed to identify a match.
  • the FCNN is configured to generate an identification on each and use each match as a vote for an individual’s identification. Once a majority is reached (e.g., at least two votes for person A) the system returns as output identification of person A.
  • each result that exceeds the threshold probability can count as one vote, and the final tally of votes (e.g., often 4 out of 5) is used to establish the match.
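The voting scheme above can be sketched as follows (a minimal illustration with hypothetical labels; `min_votes=2` follows the "at least two votes for person A" example):

```python
from collections import Counter

def vote(per_image_labels, min_votes=2):
    """Tally per-image identifications (e.g., one FCNN prediction per
    captured biometric) and return the identity once a majority of
    votes agree; otherwise return UNKNOWN."""
    tally = Counter(label for label in per_image_labels if label != "UNKNOWN")
    if tally:
        label, count = tally.most_common(1)[0]
        if count >= min_votes:
            return label
    return "UNKNOWN"
```

For example, three captures where two agree on Person A yield Person A, while a split with no majority yields UNKNOWN.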
  • an unknown class may be trained in the model - in the examples above a sixth number would appear with a probability of matching the unknown model.
  • the unknown class is not used, and matching is made or not against known persons. Where a sufficient match does not result, the submitted biometric information is unknown.
  • process 250 can include an optional step 258 for retraining of the classification model.
  • a threshold is set such that step 258 tests if a threshold match has been exceeded, and if yes, the deep learning neural network (e.g., classifier & prediction network) is retrained to include the new feature vectors being analyzed.
  • retraining to include newer feature vectors permits biometrics that change over time (e.g., weight loss, weight gain, aging or other events that alter biometric information, haircuts, among other options).
  • Fig. 3 is a block diagram of an example privacy-enabled biometric system 304.
  • the system can be installed on a mobile device or called from a mobile device (e.g., on a remote server or cloud based resource) to return an authenticated or not signal.
  • system 304 can execute any of the preceding processes. For example, system 304 can enroll users (e.g., via process 100), identify enrolled users (e.g., process 200), and search for matches to users (e.g., process 250).
  • system 304 can accept, create or receive original biometric information (e.g., input 302).
  • the input 302 can include images of people, images of faces, thumbprint scans, voice recordings, sensor data, etc.
  • a biometric processing component (e.g., 308) can be configured to crop received images, sample voice biometrics, etc., to focus the biometric information on distinguishable features (e.g., automatically crop image around face).
  • Various forms of pre-processing can be executed on the received biometrics, designed to limit the biometric information to important features.
  • in some embodiments, the pre-processing (e.g., via 308) is not executed or available.
  • only biometrics that meet quality standards are passed on for further processing.
  • Processed biometrics can be used to generate additional training data, for example, to enroll a new user.
  • a training generation component 310 can be configured to generate new biometrics for a user.
  • the training generation component can be configured to create new images of the user’s face having different lighting, different capture angles, etc., in order to build a training set of biometrics.
  • the system includes a training threshold specifying how many training samples to generate from a given or received biometric.
  • the system and/or training generation component 310 is configured to build twenty-five additional images from a picture of a user’s face. Other numbers of training images, or voice samples, etc., can be used.
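Training-set generation of this kind can be sketched with simple image transforms (a minimal stand-in for the lighting/angle morphs described above; the function name, brightness range, and flip are illustrative assumptions, not the patented method):

```python
import numpy as np

def generate_training_set(image, count=25, seed=0):
    """Create `count` variants of one capture by varying brightness and
    mirroring, approximating different lighting and capture angles."""
    rng = np.random.default_rng(seed)
    variants = []
    for _ in range(count):
        v = image * rng.uniform(0.7, 1.3)      # simulated lighting change
        if rng.random() < 0.5:
            v = v[:, ::-1]                     # horizontal flip (capture angle)
        variants.append(np.clip(v, 0.0, 1.0))
    return variants

face = np.random.default_rng(1).random((8, 8))   # toy grayscale "face"
training_set = generate_training_set(face)        # 25 additional images
```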
  • the system is configured to generate feature vectors from the biometrics (e.g., process images from input and generated training images).
  • the system 304 can include a feature vector component 312 configured to generate the feature vectors.
  • component 312 executes a convolutional neural network (“CNN”), where the CNN includes a layer which generates Euclidean measurable output.
  • the feature vector component 312 is configured to extract the feature vectors from the layers preceding the softmax layer (including, for example, the n-1 layer).
  • various neural networks can be used to define feature vectors tailored to an analyzed biometric (e.g., voice, image, health data, etc.), where an output of or with the model is Euclidean measurable.
  • neural network examples include models having a softmax layer.
  • Other embodiments use a model that does not include a softmax layer to generate Euclidean measurable vectors.
  • Various embodiments of the system and/or feature vector component are configured to generate and capture feature vectors for the processed biometrics in the layer or layers preceding the softmax layer.
  • the feature vectors from the feature vector component 312 or system 304 are used by the classifier component 314 to bind a user to a classification (i.e., mapping biometrics to a matchable/searchable identity).
  • the classification can be executed by a deep learning neural network (e.g., an enrollment and prediction network); the FCNN generates an output identifying a person or indicating an UNKNOWN individual (e.g., at 306).
  • the deep learning neural network (e.g., which can be an FCNN) must differentiate between known persons and the UNKNOWN.
  • this can be implemented as a sigmoid function in the last layer that outputs probability of class matching based on newly input biometrics or showing failure to match.
  • Other examples achieve matching based on a hinge loss function.
  • system 304 and/or classifier component 314 are configured to generate a probability to establish when a sufficiently close match is found.
  • an unknown person is determined based on negative return values.
  • multiple matches can be developed and voting can also be used to increase accuracy in matching.
  • Various implementations of the system have the capacity to use this approach for more than one set of input.
  • the approach itself is biometric agnostic.
  • Various embodiments employ feature vectors that are Euclidean measurable, which is handled using the first neural network.
  • different neural networks are configured to process different types of biometrics.
  • the vector generating neural network may be swapped for, or used in conjunction with, a different neural network, where each is capable of creating a Euclidean measurable feature vector based on the respective biometric.
  • the system may enroll in both biometric types (e.g., use two or more vector generating networks) and predict on the feature vectors generated for both types of biometrics, using both neural networks to process the respective biometric types simultaneously.
  • feature vectors from each type of biometric can likewise be processed in respective deep learning networks configured to predict matches based on feature vector inputs or return unknown.
  • the simultaneous results (e.g., one from each biometric type) can be evaluated together (e.g., via voting) to determine a match.
  • the system can be configured to incorporate new identification classes responsive to receiving new biometric information.
  • the system 304 includes a retraining component configured to monitor a number of new biometrics (e.g., per user/identification class or by total number of new biometrics) and automatically trigger a re-enrollment with the new feature vectors derived from the new biometric information (e.g., produced by 312).
  • the system can be configured to trigger re-enrollment on new feature vectors based on time or time period elapsing.
  • the system 304 and/or retraining component 316 can be configured to store feature vectors as they are processed, and retain those feature vectors for retraining (including for example feature vectors that are unknown to retrain an unknown class in some examples).
  • Various embodiments of the system are configured to incrementally retrain the model on system-assigned numbers of newly received biometrics. Further, once a system-set number of incremental retrainings have occurred, the system is further configured to complete a full retrain of the model.
  • the variables for incremental retraining and full retraining can be set on the system via an administrative function. Some defaults include incremental retrain every 3, 4, 5, or 6 identifications, and full retrain every 3, 4, 5, 6, 7, 8, 9, or 10 incremental retrains. Additionally, this requirement may be met by using calendar time, such as retraining once a year. These operations can be performed on offline (e.g., locked) copies of the model, and once complete the offline copy can be made live.
  • the system 304 and/or retraining component 316 is configured to update the existing classification model with new users/identification classes.
  • the system builds a classification model for an initial number of users, which can be based on an expected initial enrollment.
  • the model is generated with empty or unallocated spaces to accommodate new users. For example, a fifty user base is generated as a one hundred user model. This over allocation in the model enables incremental training to be executed on the classification model.
  • the system and/or retraining component 316 is configured to incrementally retrain the classification model, ultimately saving significant computation time over conventional retraining executions.
  • a full retrain with an additional over allocation can be made (e.g., fully retrain the 100 classes to a model with 150 classes).
  • an incremental retrain process can be executed to add additional unallocated slots.
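The over-allocation scheme above can be sketched as follows (a hypothetical `SlotAllocator` class; the doubling growth factor on full retrain is illustrative, since the text also gives an example growing 100 classes to 150):

```python
class SlotAllocator:
    """Track over-allocated classification slots: the model is built with a
    multiple (here 2x) of the initial user count, so new users can be added
    by incremental retraining until the slots fill, at which point a full
    retrain grows the allocation again."""

    def __init__(self, initial_people, multiplier=2):
        self.enrolled = initial_people
        self.capacity = initial_people * multiplier   # e.g., 50 users -> 100 classes

    def enroll_one(self):
        self.enrolled += 1
        if self.enrolled > self.capacity:
            self.capacity = self.enrolled * 2   # full retrain with fresh headroom
            return "full_retrain"
        return "incremental_retrain"

slots = SlotAllocator(50)   # a fifty user base generated as a 100-user model
```

Fifty more users can then be enrolled with only incremental retrains before the first full retrain is triggered.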
  • the system can be configured to operate with multiple copies of the classification model.
  • One copy may be live and used for authentication or identification.
  • a second copy may be an updated version that is taken offline (e.g., locked from access) to accomplish retraining while permitting identification operations to continue with a live model.
  • the updated model can be made live and the other model locked and updated as well. Multiple instances of both live and locked models can be used to increase concurrency.
  • system 300 can receive feature vectors instead of original biometrics and processing original biometrics can occur on different systems - in these cases system 300 may not include, for example, 308, 310, 312, and instead receive feature vectors from other systems, components or processes.
  • Figs. 4A-D illustrate example embodiments of a classifier network.
  • the embodiments show a fully connected neural network for classifying feature vectors for training and for prediction.
  • Other embodiments implement different neural networks, including for example, neural networks that are not fully connected.
  • Each of the networks accepts Euclidean measurable feature vectors and returns a label or unknown result for prediction or binds the feature vectors to a label during training.
  • Figs. 5A-D illustrate examples of processing that can be performed on input biometrics (e.g., facial image) using a neural network.
  • Feature vectors can be extracted from such neural networks and used by a classifier (e.g., Figs. 4A-D) during training or prediction operations.
  • the system implements a first pre-trained neural network for generating Euclidean measurable feature vectors that are used as inputs for a second classification neural network.
  • other neural networks are used to process biometrics in the first instance.
  • multiple neural networks can be used to generate Euclidean measurable feature vectors from unencrypted biometric inputs; each may feed the feature vectors to a respective classifier.
  • each generator neural network can be tailored to a respective classifier neural network, where each pair (or multiples of each) is configured to process a biometric data type (e.g., facial image, iris images, voice, health data, etc.).
  • Various embodiments of the privacy-enabled biometric system and/or methods provide enhancement over conventional implementation (e.g., in security, scalability, and/or management functions).
  • Various embodiments enable scalability (e.g., via“encrypted search”) and fully encrypt the reference biometric (e.g.,“encrypted match”).
  • the system is configured to provide an “identity” that is no longer tied independently to each application and further enables a single, global “Identity Trust Store” that can service any identity request for any application.
  • a deep neural network (“DNN”) is used to process a reference biometric to compute a one-way, homomorphic encryption of the biometric’s feature vector before transmitting or storing any data.
  • the plaintext data can then be discarded and the resultant homomorphic encryption is then transmitted and stored in a datastore.
  • This example allows for computations and comparisons on cipher texts without decryption and ensures that only the Euclidean measurable, homomorphic encrypted biometric is available to execute subsequent matches in the encrypted space.
  • Encrypted Search: using the techniques described herein, encrypted search is done in polynomial time according to various embodiments. This allows for comparisons of biometrics, achieving values that indicate the “closeness” of two biometrics to one another in the encrypted space (e.g., a biometric to a reference biometric), while at the same time providing for the highest level of privacy.
  • the approach is biometric agnostic, allowing the same processing irrespective of the biometric or the biometric type (e.g., face, voice, iris, etc.).
  • Each biometric can be processed with a different, fully trained, neural network to create the biometric feature vector.
  • an issue with current biometric schemes is that they require a mechanism for: (1) acquiring the biometric, (2) plaintext biometric match, (3) encrypting the biometric, (4) performing a Euclidean measurable match, and (5) searching using the second neural network prediction call.
  • To execute steps 1 through 5 for every biometric is time consuming, error prone and frequently nearly impossible to do before the biometric becomes deprecated.
  • One goal with various embodiments is to develop a scheme, techniques and technologies that allow the system to work with biometrics in a privacy protected and polynomial-time based way that is also biometric agnostic.
  • Various embodiments employ machine learning to solve issues with (2)-(5).
  • devices such as cameras or sensors acquire the biometrics to be analyzed (which thus arrive as plaintext).
  • because that data is encrypted immediately, and the system only processes the biometric information as cipher text thereafter, the system provides the maximum practical level of privacy.
  • a one-way encryption of the biometric, meaning that given the cipher text there is no mechanism to get to the original plaintext, reduces/eliminates the complexity of key management of various conventional approaches.
  • some capture devices can encrypt the biometric via a one way encryption and provide feature vectors directly to the system. This enables some embodiments, to forgo biometric processing components, training generation components, and feature vector generation components, or alternatively to not use these elements for already encrypted feature vectors.
  • the system is evaluated on different numbers of images per person to establish ranges of operating parameters and thresholds.
  • the num-epochs establishes the number of iterations, which can be varied on the system (e.g., between embodiments, between examples, and between executions, among other options).
  • the LFW dataset is taken from the known labeled faces in the wild dataset. “Eleven people” is a custom set of images, and faces94 is from the known source faces94.
  • the epochs are the number of new images that are morphed from the original images. So if the epochs are 25 and there are 10 enrollment images, then training uses 250 images. The morphing of the images changed the lighting, angles, and the like to increase the accuracy in training.
  • the neural network model is generated initially to accommodate incremental additions of new individuals to identify (e.g., 2*num_people is an example of a model initially trained for 100 people given an initial 50 individuals of biometric information).
  • the multiplier, or training headroom provided, can be tailored to the specific implementation. For example, where additions to the identifiable users are anticipated to be small, additional incremental training options can include any number within the range of 1% to 200%. In other embodiments, larger percentages can be implemented as well.
  • various embodiments provide an API that can be integrated and/or called by various programs, applications, systems, system components, etc., and can be requested locally or remotely.
  • the privacy-enabled biometric API includes the following specifications:
  • Further embodiments can be configured to handle new people (e.g., labels or classes in the model) in multiple ways.
  • the current model can be retrained every time a certain number of new people are introduced (e.g., a threshold number).
  • the benefit is improved accuracy - the system can guarantee a level of accuracy even with new people.
  • full retraining is a slow, time-consuming, and computation-heavy process. This can be mitigated with live and offline copies of the model, so the retraining occurs offline and the newly retrained model is swapped for the live version.
  • in one example, training time exceeded 20 minutes. With more data the training time increases.
  • the model is initialized with slots for new people.
  • the expanded model is configured to support incremental training (e.g., the network structure is not changed when adding new people).
  • the time to add new people is significantly reduced (even over other embodiments of the privacy-enabled biometric system). It is realized that there may be some reduction in accuracy with incremental training, and as more and more people are added the model can trend towards overfitting on the new people (i.e., becoming less accurate with old people).
  • various implementations have been tested to operate at the same accuracy even under incremental retraining.
  • Yet another embodiment implements both incremental retraining and full retraining at a threshold level (e.g., build the initial model with a multiple of the people needed, e.g., 2 times: 100 labels for an initial 50 people, 50 labels for an initial 25 people, etc.).
  • the system can be configured to execute a full retrain on the model, while building in the additional slots for new users.
  • the system will execute a full retrain for 150 labels and now 100 actual people. This provides for 50 additional users and incremental retraining before a full retrain is executed.
  • the system in various embodiments is configured to retrain the whole network from the beginning for every N-people step.
  • An example implementation of the API includes the following code:
  • num_window = 50  # for every num_window people: rebuild the model network and retrain fully
  • num_step = 1  # train incrementally for every num_step new people
  • incremental training can trigger concurrency problems (e.g., a multi-thread problem with the same model); thus the system can be configured to avoid retraining incrementally at the same time for two different people (data can be lost if retraining occurs concurrently).
  • the system implements a lock or a semaphore to resolve this.
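The retrain scheduling and locking described above can be sketched as follows (a minimal illustration using the `num_window`/`num_step` parameters from the example code; the function name and a `threading.Lock` as the serialization mechanism are assumptions):

```python
import threading

retrain_lock = threading.Lock()   # only one retrain may run at a time
num_window = 50   # full rebuild of the network every num_window people
num_step = 1      # incremental retrain every num_step new people

def on_new_person(total_people):
    """Pick the retrain action for a newly enrolled person; the lock
    prevents two threads from incrementally retraining the same model
    concurrently (which could lose data)."""
    with retrain_lock:
        if total_people % num_window == 0:
            return "full_retrain"
        if total_people % num_step == 0:
            return "incremental_retrain"
        return "no_retrain"
```

A semaphore or per-model lock could serve the same purpose when multiple model copies are live.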
  • multiple models can be running simultaneously, and reconciliation can be executed between the models in stages.
• the system can monitor models to ensure only one retrain is executed across multiple live models, and in yet other embodiments locks are used on the models to ensure singular updates via incremental retraining.
  • Reconciliation can be executed after an update between models.
  • the system can cache feature vectors for subsequent access in the reconciliation.
• the system design resolves a data pipeline problem: in some examples, the data pipeline supports running only one time due to queue and thread characteristics. Other embodiments avoid this issue by extracting the embeddings. In examples that do not include that functionality, the system can still run multiple times by saving the embeddings to file and loading the embeddings from file. This approach can be used where the extracted embedding is unavailable via other approaches.
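A minimal sketch of the save-to-file/load-from-file workaround follows. The JSON file format, function names, and field names here are assumptions for illustration; the disclosure does not specify a serialization format.

```python
import json
import os
import tempfile

# Persisting embeddings lets later stages run repeatedly without re-running
# a one-shot, queue-based extraction pipeline. Format is illustrative.

def save_embedding(path, label, vector):
    with open(path, "w") as f:
        json.dump({"label": label, "embedding": vector}, f)

def load_embedding(path):
    with open(path) as f:
        record = json.load(f)
    return record["label"], record["embedding"]

path = os.path.join(tempfile.mkdtemp(), "person1.json")
save_embedding(path, "person1", [0.12, -0.34, 0.56])
label, vec = load_embedding(path)
# The reloaded label and vector match what was saved, so classification
# can be rerun any number of times from the file.
```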
• Various embodiments can employ different options for operating with embeddings. When supplying a value to a TensorFlow graph, there are several ways: feed_dict (a speed trade-off in exchange for easier access); and a queue (faster via multiple threads, but can only run one time, as the queue ends after it has been looped).
• Table VIII and Table IX show execution timing during operation and accuracy percentages for the respective examples.
• the system can be described broadly to include any one or more, or any combination, of the following elements and associated functions:
• Preprocessing, where the system takes in an unprocessed biometric, which can include cropping and aligning, and either continues processing or returns an indication that the biometric cannot be processed.
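The preprocessing gate can be sketched as follows. This shows only the control flow (continue, or signal that the biometric cannot be processed); the detector is a stub standing in for a real face-detection library, and nothing here is the patent's actual code.

```python
from typing import Optional, Tuple

def detect_face(image: bytes) -> Optional[Tuple[int, int, int, int]]:
    # Stub detector: "finds" a face whenever the image is non-empty.
    return (0, 0, 8, 8) if image else None

def preprocess(image: bytes) -> Optional[bytes]:
    box = detect_face(image)
    if box is None:
        return None          # signal that the biometric cannot be processed
    x, y, w, h = box
    return image[:w * h]     # stand-in for crop-and-align
```

An unprocessable input yields `None`, which the caller reports; any other input continues down the pipeline to feature extraction.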
• Neural network 1: pre-trained. Takes in unencrypted biometrics and returns biometric feature vectors that are one-way encrypted and Euclidean measurable. Regardless of the biometric type being processed, NN1 generates Euclidean-measurable encrypted feature vectors.
• Neural network 2: not pre-trained. It is a deep learning neural network that performs classification. It includes incremental training: it takes a set of (label, feature vector) pairs as input and returns nothing during training; the trained network is then used for matching or prediction on newly input biometric information. It performs prediction, which takes a feature vector as input and returns an array of values. These values, based on their position and the value itself, determine the label or unknown.
• Voting functions can be executed with neural network 2, e.g., during prediction.
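Interpreting NN2's output array and voting over several predictions can be sketched as below. The label set and the 0.5 unknown threshold are assumptions for illustration; the disclosure only states that the values, by position and magnitude, determine the label or unknown.

```python
from collections import Counter

LABELS = ["alice", "bob", "carol"]
UNKNOWN_THRESHOLD = 0.5  # assumed cutoff, not taken from the disclosure

def predict_label(scores):
    """scores: the array of values NN2 returns for one feature vector."""
    best = max(range(len(scores)), key=lambda i: scores[i])
    # Position selects the candidate label; the value decides label vs unknown.
    return LABELS[best] if scores[best] >= UNKNOWN_THRESHOLD else "UNKNOWN"

def vote(score_arrays):
    """Majority vote over prediction arrays from multiple input samples."""
    votes = Counter(predict_label(s) for s in score_arrays)
    return votes.most_common(1)[0][0]

result = vote([[0.9, 0.05, 0.05], [0.8, 0.1, 0.1], [0.2, 0.3, 0.4]])
# The first two arrays vote "alice"; the third falls below the threshold
# and votes "UNKNOWN", so the majority label is "alice".
```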
• The system may have more than one neural network 1 for different biometrics; each would generate Euclidean-measurable encrypted feature vectors based on unencrypted input. The system may have multiple neural network 2(s), one for each biometric type.
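The per-modality pairing of an NN1 extractor with an NN2 classifier can be sketched as a routing table. The lambdas below are placeholders for real networks and the modality names are illustrative; only the routing structure reflects the description above.

```python
# One pre-trained extractor (NN1) and one classifier (NN2) per modality.
extractors = {
    "face":  lambda raw: [len(raw) * 0.1],  # placeholder NN1 for faces
    "voice": lambda raw: [len(raw) * 0.2],  # placeholder NN1 for voice
}
classifiers = {
    "face":  lambda vec: "alice",           # placeholder NN2 for faces
    "voice": lambda vec: "alice",           # placeholder NN2 for voice
}

def identify(biometric_type, raw_biometric):
    vector = extractors[biometric_type](raw_biometric)  # NN1 for this type
    return classifiers[biometric_type](vector)          # NN2 for this type
```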
• An illustrative implementation of a computer system 800 that may be used in connection with any of the embodiments of the disclosure provided herein is shown in FIG. 8.
  • the computer system 800 may include one or more processors 810 and one or more articles of manufacture that comprise non-transitory computer-readable storage media (e.g., memory 820 and one or more non-volatile storage media 830).
  • the processor 810 may control writing data to and reading data from the memory 820 and the non-volatile storage device 830 in any suitable manner.
• the processor 810 may execute one or more processor-executable instructions stored in one or more non-transitory computer-readable storage media (e.g., the memory 820), which may serve as non-transitory computer-readable storage media storing processor-executable instructions for execution by the processor 810.
• The terms "program" or "software" are used herein in a generic sense to refer to any type of computer code or set of processor-executable instructions that can be employed to program a computer or other processor to implement various aspects of embodiments as discussed above. Additionally, it should be appreciated that according to one aspect, one or more computer programs that when executed perform methods of the disclosure provided herein need not reside on a single computer or processor, but may be distributed in a modular fashion among different computers or processors to implement various aspects of the disclosure provided herein.
  • Processor-executable instructions may be in many forms, such as program modules, executed by one or more computers or other devices.
  • program modules include routines, programs, objects, components, data structures, etc. that perform particular tasks or implement particular abstract data types.
  • functionality of the program modules may be combined or distributed as desired in various embodiments.
  • data structures may be stored in one or more non-transitory computer-readable storage media in any suitable form.
  • data structures may be shown to have fields that are related through location in the data structure. Such relationships may likewise be achieved by assigning storage for the fields with locations in a non-transitory computer-readable medium that convey relationship between the fields.
  • any suitable mechanism may be used to establish relationships among information in fields of a data structure, including through the use of pointers, tags or other mechanisms that establish relationships among data elements.
• inventive concepts may be embodied as one or more processes, of which examples (e.g., the processes described with reference to FIGs. 1, 2A-2B, 9, 10, etc.) have been provided.
  • the acts performed as part of each process may be ordered in any suitable way. Accordingly, embodiments may be constructed in which acts are performed in an order different than illustrated, which may include performing some acts simultaneously, even though shown as sequential acts in illustrative embodiments.
• the phrase "at least one," in reference to a list of one or more elements, should be understood to mean at least one element selected from any one or more of the elements in the list of elements, but not necessarily including at least one of each and every element specifically listed within the list of elements, and not excluding any combinations of elements in the list of elements.
• This definition also allows that elements may optionally be present other than the elements specifically identified within the list of elements to which the phrase "at least one" refers, whether related or unrelated to those elements specifically identified.
• "at least one of A and B" can refer, in one embodiment, to at least one, optionally including more than one, A, with no B present (and optionally including elements other than B); in another embodiment, to at least one, optionally including more than one, B, with no A present (and optionally including elements other than A); in yet another embodiment, to at least one, optionally including more than one, A, and at least one, optionally including more than one, B (and optionally including other elements); etc.
• a reference to "A and/or B", when used in conjunction with open-ended language such as "comprising", can refer, in one embodiment, to A only (optionally including elements other than B); in another embodiment, to B only (optionally including elements other than A); in yet another embodiment, to both A and B (optionally including other elements); etc.

Abstract

In one embodiment, a set of feature vectors can be derived from biometric data. An authentication system can then determine matches or execute searches on encrypted data using a deep neural network ("DNN") on those one-way homomorphic encryptions (i.e., on each biometric feature vector). Each biometric feature vector can then be stored and/or used in association with respective classifications for use in subsequent comparisons, without concern for compromising the original biometric data. In various embodiments, the original biometric data is discarded in response to generating the encrypted values. In another embodiment, the homomorphic encryption enables computations and comparisons on ciphertext without decryption of the encrypted feature vectors. The security of such privacy-enabled biometric data can be increased by applying an assurance factor (e.g., liveness) to establish that the submitted biometric data has been neither spoofed nor forged.
PCT/US2019/021100 2018-03-07 2019-03-07 Systèmes et procédés de traitement biométrique respectant la confidentialité WO2019173562A1 (fr)

Priority Applications (3)

Application Number Priority Date Filing Date Title
CA3092941A CA3092941A1 (fr) 2018-03-07 2019-03-07 Systemes et procedes de traitement biometrique respectant la confidentialite
EP19712657.6A EP3762867A1 (fr) 2018-03-07 2019-03-07 Systèmes et procédés de traitement biométrique respectant la confidentialité
AU2019230043A AU2019230043A1 (en) 2018-03-07 2019-03-07 Systems and methods for privacy-enabled biometric processing

Applications Claiming Priority (10)

Application Number Priority Date Filing Date Title
US15/914,969 2018-03-07
US15/914,436 US10419221B1 (en) 2018-03-07 2018-03-07 Systems and methods for privacy-enabled biometric processing
US15/914,942 US10721070B2 (en) 2018-03-07 2018-03-07 Systems and methods for privacy-enabled biometric processing
US15/914,969 US11138333B2 (en) 2018-03-07 2018-03-07 Systems and methods for privacy-enabled biometric processing
US15/914,436 2018-03-07
US15/914,562 US11392802B2 (en) 2018-03-07 2018-03-07 Systems and methods for privacy-enabled biometric processing
US15/914,942 2018-03-07
US15/914,562 2018-03-07
US16/218,139 US11210375B2 (en) 2018-03-07 2018-12-12 Systems and methods for biometric processing with liveness
US16/218,139 2018-12-12

Publications (1)

Publication Number Publication Date
WO2019173562A1 true WO2019173562A1 (fr) 2019-09-12

Family

ID=65861720

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/US2019/021100 WO2019173562A1 (fr) 2018-03-07 2019-03-07 Systèmes et procédés de traitement biométrique respectant la confidentialité

Country Status (4)

Country Link
EP (1) EP3762867A1 (fr)
AU (1) AU2019230043A1 (fr)
CA (1) CA3092941A1 (fr)
WO (1) WO2019173562A1 (fr)


Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113792733B (zh) * 2021-09-17 2023-07-21 平安科技(深圳)有限公司 车辆部件检测方法、系统、电子设备及存储介质

Non-Patent Citations (2)

Title
SCOTT STREIT ET AL: "Privacy-Enabled Biometric Search", 16 August 2017 (2017-08-16), XP055595233, Retrieved from the Internet <URL:https://arxiv.org/ftp/arxiv/papers/1708/1708.04726.pdf> [retrieved on 20190611] *
XUE-WEN CHEN ET AL: "Learning Multi-channel Deep Feature Representations for Face Recognition", JMLR: WORKSHOP AND CONFERENCE PROCEEDINGS, 1 January 2015 (2015-01-01), pages 60 - 71, XP055595297, Retrieved from the Internet <URL:http://proceedings.mlr.press/v44/chen15learning.pdf> [retrieved on 20190611] *

Cited By (12)

Publication number Priority date Publication date Assignee Title
US11943364B2 (en) 2018-03-07 2024-03-26 Private Identity Llc Systems and methods for privacy-enabled biometric processing
EP4032015A4 (fr) * 2019-09-17 2023-10-04 Private Identity LLC Systèmes et procédés de traitement biométrique respectant la confidentialité
US11899765B2 (en) 2019-12-23 2024-02-13 Dts Inc. Dual-factor identification system and method with adaptive enrollment
US20210211291A1 (en) * 2020-01-08 2021-07-08 Tata Consultancy Services Limited Registration and verification of biometric modalities using encryption techniques in a deep neural network
EP3848790A1 (fr) * 2020-01-08 2021-07-14 Tata Consultancy Services Limited Enregistrement et vérification de modalités biométriques à l'aide de techniques de cryptage dans un réseau neuronal profond
US11615176B2 (en) * 2020-01-08 2023-03-28 Tata Consultancy Services Limited Registration and verification of biometric modalities using encryption techniques in a deep neural network
CN111723395A (zh) * 2020-05-11 2020-09-29 华南理工大学 一种人像生物特征隐私保护与解密方法
CN111723395B (zh) * 2020-05-11 2022-11-18 华南理工大学 一种人像生物特征隐私保护与解密方法
CN112000940A (zh) * 2020-09-11 2020-11-27 支付宝(杭州)信息技术有限公司 一种隐私保护下的用户识别方法、装置以及设备
WO2022084039A1 (fr) * 2020-10-23 2022-04-28 Dormakaba Schweiz Ag Procédé et système de mise à jour d'un système d'identification d'utilisateur
CN115083413A (zh) * 2022-08-17 2022-09-20 广州小鹏汽车科技有限公司 语音交互方法、服务器和存储介质
CN115083413B (zh) * 2022-08-17 2022-12-13 广州小鹏汽车科技有限公司 语音交互方法、服务器和存储介质

Also Published As

Publication number Publication date
CA3092941A1 (fr) 2019-09-12
EP3762867A1 (fr) 2021-01-13
AU2019230043A1 (en) 2020-09-24

Similar Documents

Publication Publication Date Title
US11762967B2 (en) Systems and methods for biometric processing with liveness
US11943364B2 (en) Systems and methods for privacy-enabled biometric processing
US11362831B2 (en) Systems and methods for privacy-enabled biometric processing
US11394552B2 (en) Systems and methods for privacy-enabled biometric processing
US11502841B2 (en) Systems and methods for privacy-enabled biometric processing
US11640452B2 (en) Systems and methods for privacy-enabled biometric processing
US10419221B1 (en) Systems and methods for privacy-enabled biometric processing
US11392802B2 (en) Systems and methods for privacy-enabled biometric processing
EP3762867A1 (fr) Systèmes et procédés de traitement biométrique respectant la confidentialité
US11789699B2 (en) Systems and methods for private authentication with helper networks
US10027663B2 (en) Anonymizing biometric data for use in a security system
EP4032015A1 (fr) Systèmes et procédés de traitement biométrique respectant la confidentialité
AU2020328023A1 (en) Systems and methods for privacy-enabled biometric processing
EP4196890A1 (fr) Systèmes et procédés d'authentification privée avec des réseaux auxiliaires

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 19712657

Country of ref document: EP

Kind code of ref document: A1

ENP Entry into the national phase

Ref document number: 3092941

Country of ref document: CA

NENP Non-entry into the national phase

Ref country code: DE

ENP Entry into the national phase

Ref document number: 2019230043

Country of ref document: AU

Date of ref document: 20190307

Kind code of ref document: A

ENP Entry into the national phase

Ref document number: 2019712657

Country of ref document: EP

Effective date: 20201007