WO2021184898A1 - Facial feature extraction method, apparatus and device - Google Patents

Facial feature extraction method, apparatus and device

Info

Publication number
WO2021184898A1
Authority
WO
WIPO (PCT)
Prior art keywords
feature extraction
user
extraction model
vector
face
Prior art date
Application number
PCT/CN2020/140574
Other languages
English (en)
Chinese (zh)
Inventor
徐崴
Original Assignee
支付宝(杭州)信息技术有限公司
Priority date
Filing date
Publication date
Application filed by 支付宝(杭州)信息技术有限公司 filed Critical 支付宝(杭州)信息技术有限公司
Publication of WO2021184898A1

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 18/00 Pattern recognition
    • G06F 18/20 Analysing
    • G06F 18/21 Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F 18/214 Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 21/00 Security arrangements for protecting computers, components thereof, programs or data against unauthorised activity
    • G06F 21/60 Protecting data
    • G06F 21/62 Protecting access to data via a platform, e.g. using keys or access control rules
    • G06F 21/6218 Protecting access to data via a platform, e.g. using keys or access control rules, to a system of files or objects, e.g. local or distributed file system or database
    • G06F 21/6245 Protecting personal data, e.g. for financial or medical purposes
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 Computing arrangements based on biological models
    • G06N 3/02 Neural networks
    • G06N 3/04 Architecture, e.g. interconnection topology
    • G06N 3/045 Combinations of networks
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 Computing arrangements based on biological models
    • G06N 3/02 Neural networks
    • G06N 3/08 Learning methods

Definitions

  • One or more embodiments of this specification relate to the field of computer technology, and in particular to a facial feature extraction method, apparatus, and device.
  • One or more embodiments of this specification provide a facial feature extraction method, apparatus, and device, used to extract a user's facial features while ensuring the privacy of the user's facial information.
  • An embodiment of this specification provides a face feature extraction method that uses a user feature extraction model for privacy protection.
  • The user feature extraction model includes an encoder and a face feature extraction model.
  • The face feature extraction model is obtained by locking together a decoder and a feature extraction model based on a convolutional neural network, where the encoder and the decoder constitute an autoencoder. The encoder is connected to the decoder in the face feature extraction model, and the decoder is connected to the feature extraction model. The method includes: inputting the face image of a user to be identified into the encoder to obtain the encoding vector of the face image output by the encoder, where the encoding vector is vector data obtained after performing characterization processing on the face image; and, after the decoder in the face feature extraction model receives the encoding vector, having it output reconstructed face image data to the feature extraction model, so that the feature extraction model performs characterization processing on the reconstructed face image data and then outputs the face feature vector of the user to be identified.
  • An embodiment of this specification provides a training method for a user feature extraction model for privacy protection. The method includes: obtaining a first training sample set, in which the training samples are face images; training an initial autoencoder with the first training sample set to obtain a trained autoencoder; obtaining a second training sample set, in which the training samples are encoding vectors, an encoding vector being the vector data obtained after a face image is characterized by the encoder in the trained autoencoder; inputting the training samples in the second training sample set into the decoder of an initial face feature extraction model, so as to use the reconstructed face image data output by the decoder to train the initial feature extraction model based on a convolutional neural network in the initial face feature extraction model, thereby obtaining a trained face feature extraction model, where the initial face feature extraction model is obtained by locking the decoder and the initial feature extraction model, the decoder being the decoder in the trained autoencoder; and generating a user feature extraction model for privacy protection according to the encoder and the trained face feature extraction model.
  • An embodiment of this specification provides a face feature extraction device that uses a user feature extraction model for privacy protection, the user feature extraction model including an encoder and a face feature extraction model.
  • The face feature extraction model is obtained by locking a decoder and a feature extraction model based on a convolutional neural network, where the encoder and the decoder constitute an autoencoder; the encoder is connected to the decoder in the face feature extraction model, and the decoder is connected to the feature extraction model.
  • The device includes: an input module for inputting the face image of a user to be identified into the encoder to obtain the encoding vector of the face image output by the encoder, where the encoding vector is vector data obtained after characterizing the face image; and a face feature vector generation module for having the decoder in the face feature extraction model, after receiving the encoding vector, output reconstructed face image data to the feature extraction model, so that the feature extraction model performs characterization processing on the reconstructed face image data and then outputs the face feature vector of the user to be identified.
  • An embodiment of this specification provides a training device for a user feature extraction model for privacy protection.
  • The device includes: a first acquisition module configured to acquire a first training sample set, in which the training samples are face images;
  • a first training module configured to train an initial autoencoder using the first training sample set to obtain a trained autoencoder;
  • a second acquisition module configured to acquire a second training sample set, in which the training samples are encoding vectors, an encoding vector being the vector data obtained by characterizing a face image with the encoder in the trained autoencoder;
  • and a second training module configured to input the training samples in the second training sample set into the decoder of an initial face feature extraction model, so as to use the reconstructed face image data output by the decoder to train the initial feature extraction model based on a convolutional neural network in the initial face feature extraction model, thereby obtaining a trained face feature extraction model, where the initial face feature extraction model is obtained by locking the decoder and the initial feature extraction model, the decoder being the decoder in the trained autoencoder.
  • An embodiment of this specification provides a client device, including: at least one processor; and a memory communicatively connected with the at least one processor. The memory stores an image encoder and instructions executable by the at least one processor, the image encoder being the encoder in an autoencoder. The instructions are executed by the at least one processor so that the at least one processor can: input the face image of a user to be identified into the image encoder to obtain the encoding vector of the face image output by the image encoder, where the encoding vector is vector data obtained after characterizing the face image; and send the encoding vector to a server device, so that the server device uses a face feature extraction model to generate the face feature vector of the user to be identified according to the encoding vector, the face feature extraction model being a model obtained by locking a decoder and a feature extraction model based on a convolutional neural network.
  • An embodiment of this specification provides a server device, including: at least one processor; and a memory communicatively connected with the at least one processor. The memory stores a face feature extraction model, which is a model obtained by locking a decoder in an autoencoder and a feature extraction model based on a convolutional neural network.
  • The memory also stores instructions executable by the at least one processor. The instructions are executed by the at least one processor so that the at least one processor can: obtain the encoding vector of the face image of a user to be identified, the encoding vector being vector data obtained by characterizing the face image with the encoder in the autoencoder; input the encoding vector into the decoder in the face feature extraction model, so that the decoder outputs reconstructed face image data to the feature extraction model; and have the feature extraction model perform characterization processing on the reconstructed face image data and then output the face feature vector of the user to be identified.
  • An embodiment of this specification provides a training device for a facial feature extraction model for privacy protection, including: at least one processor; and a memory communicatively connected to the at least one processor. The memory stores instructions executable by the at least one processor, and the instructions are executed by the at least one processor so that the at least one processor can: obtain a first training sample set, in which the training samples are face images; train an initial autoencoder using the first training sample set to obtain a trained autoencoder; obtain a second training sample set, in which the training samples are encoding vectors, an encoding vector being the vector data obtained after a face image is characterized by the encoder in the trained autoencoder; input the training samples in the second training sample set into the decoder of an initial face feature extraction model, so as to use the reconstructed face image data output by the decoder to train the initial feature extraction model based on a convolutional neural network in the initial face feature extraction model, thereby obtaining a trained face feature extraction model; and generate a user feature extraction model for privacy protection according to the encoder and the trained face feature extraction model.
  • An embodiment of this specification achieves the following beneficial effects: because transmitting, storing, or using the encoding vector of a face image generated by the encoder in the autoencoder does not affect the privacy and security of the user's face information, the service provider can obtain and process the encoding vector of the face image of the user to be identified to generate that user's face feature vector without having to obtain the original face image. The user's face feature vector is thus extracted while the privacy and security of the user's facial information are ensured.
  • In addition, the face feature extraction model used to extract the face feature vector is a model obtained by locking the decoder in the autoencoder and the feature extraction model based on the convolutional neural network, so the reconstructed face image data generated by the decoder will not be leaked while the model extracts the user's face feature vector, further ensuring the privacy and security of the user's face information.
  • FIG. 1 is a schematic flowchart of a method for extracting facial features according to an embodiment of this specification
  • FIG. 2 is a schematic structural diagram of a face feature extraction model for privacy protection provided by an embodiment of this specification
  • FIG. 3 is a schematic flowchart of a training method for a facial feature extraction model for privacy protection provided by an embodiment of this specification
  • FIG. 4 is a schematic structural diagram of a face feature extraction device corresponding to FIG. 1 provided by an embodiment of this specification;
  • FIG. 5 is a schematic structural diagram of a training device for a facial feature extraction model for privacy protection, corresponding to FIG. 3, provided by an embodiment of this specification.
  • At present, the user's face image is usually preprocessed before the face feature vector is extracted.
  • For example, principal component information is first extracted from the user's face picture, and part of the detailed information is discarded, so that the facial feature vector is generated based on the principal component information.
  • The face feature vector generated in this way therefore suffers from loss of face feature information. It can be seen that the accuracy of currently extracted face feature vectors is poor.
  • FIG. 1 is a schematic flowchart of a method for extracting facial features provided by an embodiment of this specification.
  • the method uses a face feature extraction model for privacy protection to extract face feature vectors.
  • FIG. 2 is a schematic structural diagram of a face feature extraction model for privacy protection provided by an embodiment of this specification.
  • As shown in FIG. 2, the user feature extraction model 201 for privacy protection includes an encoder 202 and a face feature extraction model 203. The face feature extraction model is obtained by locking the decoder 204 and the feature extraction model 205 based on a convolutional neural network, where the encoder 202 and the decoder 204 form an autoencoder.
  • The encoder 202 is connected to the decoder 204 in the face feature extraction model 203, and the decoder 204 is connected to the feature extraction model 205.
  • The execution subject of the process shown in FIG. 1 may be a user facial feature extraction system or a program carried on such a system.
  • The user facial feature extraction system may include a client device and a server device.
  • The client device may carry the encoder of the user feature extraction model for privacy protection, and the server device may carry the face feature extraction model of the user feature extraction model for privacy protection.
  • the process may include step 102 to step 104.
  • Step 102: Input the face image of the user to be identified into the encoder to obtain the encoding vector of the face image output by the encoder, where the encoding vector is the vector data obtained after characterizing the face image.
  • In practical applications, when a user uses various applications, the user usually needs to register an account with each application.
  • When the user logs in to or unlocks the registered account, or uses the registered account to make payments, it is usually necessary to perform user identification on the operating user of the registered account (that is, the user to be identified) and to confirm that the user to be identified is the authenticated user (that is, the designated user) of the registered account before the user to be identified is allowed to perform subsequent operations. Similarly, when a user needs to pass through an access control system, the user usually needs to be identified, and the user to be identified must be determined to be a whitelisted user (that is, a designated user) of the access control system before being allowed through.
  • When performing user identification on the user to be identified based on face recognition technology, the client device usually needs to collect the face image of the user to be identified and extract the encoding vector of the face image using the encoder carried on it.
  • The client device may then send the encoding vector to the server device, so that the server device generates the face feature vector of the user to be identified according to the encoding vector and then performs user identification based on the generated face feature vector. A sketch of this client-side flow follows.
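  • The following is a minimal sketch of this client-side flow, written in Python with PyTorch. The module name `encoder`, the tensor shapes, and the `send_to_server` transport helper are illustrative assumptions; the specification does not prescribe a framework or a transport mechanism.

```python
import torch

@torch.no_grad()
def encode_face_on_client(encoder: torch.nn.Module,
                          face_image: torch.Tensor) -> torch.Tensor:
    """Run the on-device encoder and return only the encoding vector.

    `face_image` is assumed to be a preprocessed tensor of shape
    (1, C, H, W); the raw image never leaves the client device.
    """
    encoder.eval()
    return encoder(face_image)  # (1, D) bottleneck output

# Hypothetical usage: only the encoding vector is transmitted, so the
# server can extract features without ever seeing the face image.
# encoding = encode_face_on_client(client_encoder, face_tensor)
# send_to_server(encoding.numpy().tobytes())  # transport is app-specific
```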
  • The encoder in step 102 may be the encoder in an autoencoder (AE).
  • The autoencoder is a network model structure in deep learning. Its defining characteristic is that the input image itself serves as the supervision information: the network is trained with the goal of reconstructing the input image, thereby learning to encode it. Since the autoencoder needs no information other than the input image as supervision during training, its training cost is low, making it economical and practical.
  • The autoencoder usually includes two parts: an encoder and a decoder.
  • The encoder in the autoencoder can encode a face image to obtain the encoding vector of the face image, and the decoder can reconstruct the face image from the encoding vector to obtain a reconstructed face image.
  • Since the encoding vector of the face image cannot directly reveal the user's appearance, the service provider can transmit, store, and process it without affecting the security and privacy of the face information of the user to be identified.
  • Moreover, the autoencoder is an artificial neural network that learns from input data through unsupervised learning and can represent the input data efficiently and accurately. The face feature information contained in the encoding vector generated by the encoder is therefore comprehensive and carries little noise, so when the encoding vector is used to extract the face feature vector, the accuracy of the resulting face feature vector can be improved, which in turn helps improve the accuracy of user recognition results generated from it.
  • In an embodiment of this specification, the face image of the user to be identified may be a multi-channel face image.
  • In this case, the single-channel image data of the user to be identified can be determined first, and the multi-channel face image of the user to be identified can then be generated based on the single-channel image data, with the image data of each channel identical to the single-channel image data. A minimal sketch of this step follows.
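  • A minimal sketch of this channel-replication step, assuming NumPy arrays and a three-channel target; the channel count is an illustrative assumption, not a value given in the specification.

```python
import numpy as np

def to_multichannel(single_channel: np.ndarray, channels: int = 3) -> np.ndarray:
    """Stack one grayscale plane into an identical multi-channel image.

    `single_channel` has shape (H, W); the result has shape
    (channels, H, W), with every channel equal to the input plane.
    """
    return np.repeat(single_channel[np.newaxis, :, :], channels, axis=0)
```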
  • Step 104: After the decoder in the face feature extraction model receives the encoding vector, it outputs reconstructed face image data to the feature extraction model, so that the feature extraction model performs characterization processing on the reconstructed face image data and then outputs the face feature vector of the user to be identified.
  • Since the training goal of the autoencoder is to minimize the difference between the reconstructed face image and the original face image, it is not trained to classify users' faces. Therefore, if the encoding vector extracted by the encoder in the autoencoder were used directly as the face feature vector of the user to be identified for user identification, the accuracy of the identification result would be poor.
  • In an embodiment of this specification, a face feature extraction model obtained by locking a decoder in the autoencoder and a feature extraction model based on a convolutional neural network can therefore be deployed on the server device. The decoder can generate reconstructed face image data from the encoding vector of the face image of the user to be identified, and the feature extraction model based on the convolutional neural network can classify the reconstructed face image data. The output vector of the feature extraction model can thus serve as the face feature vector of the user to be identified, improving the accuracy of the user recognition result generated from it.
  • The feature extraction model based on the convolutional neural network in the face feature extraction model is used to extract the face feature vector from the reconstructed face image, and it can be implemented with existing convolutional-neural-network-based face recognition models such as DeepFace, FaceNet, MTCNN, or RetinaFace. The compatibility of the face feature extraction model is therefore good.
  • Since the decoder in the face feature extraction model decodes the encoding vector of the face image of the user to be identified, the reconstructed face image data obtained after decoding is highly similar to the face image of the user to be identified, so the face feature vector extracted by the feature extraction model based on the convolutional neural network is highly accurate.
  • In practice, encryption software can be used to lock the decoder in the autoencoder together with the feature extraction model based on the convolutional neural network, or the decoder and the feature extraction model can be stored in a secure hardware module of the device, so that users cannot read the reconstructed face image data output by the decoder, thereby ensuring the privacy of users' face information.
  • There are many ways to lock the decoder in the autoencoder and the feature extraction model, and they are not specifically limited here; it is only necessary to ensure the security of the reconstructed face image data output by the decoder.
  • In addition, after the service provider or another user obtains read permission for the reconstructed face image data of the user to be identified, they can also read the reconstructed face image data output by the decoder in the face feature extraction model based on that permission, which helps improve data utilization.
  • Since the service provider can extract the face feature vector from the encoding vector of the face image of the user to be identified, there is no need to obtain the face image itself. This avoids the transmission, storage, and use of the face image by the service provider, ensuring the privacy and security of the facial information of users to be identified.
  • Meanwhile, because the feature extraction model based on the convolutional neural network extracts the face feature vector from the reconstructed face image, the accuracy of the extracted face feature vector of the user to be identified is good.
  • In an embodiment of this specification, the encoder may include the input layer, the first hidden layer, and the bottleneck layer of the autoencoder, and the decoder may include the second hidden layer and the output layer of the autoencoder. The input layer of the encoder is connected to the first hidden layer, the first hidden layer is connected to the bottleneck layer, the bottleneck layer of the encoder is connected to the second hidden layer of the decoder, the second hidden layer is connected to the output layer, and the output layer is connected to the feature extraction model.
  • The input layer may be used to receive the face image of the user to be identified.
  • The first hidden layer may be used to encode the face image to obtain a first feature vector.
  • The bottleneck layer may be used to perform dimensionality reduction on the first feature vector to obtain the encoding vector of the face image, where the number of dimensions of the encoding vector is smaller than that of the first feature vector.
  • The second hidden layer may be used to decode the encoding vector to obtain a second feature vector.
  • The output layer may be used to generate reconstructed face image data according to the second feature vector.
  • The first hidden layer and the second hidden layer may each include multiple convolutional layers, and they may also include pooling layers and fully connected layers.
  • The bottleneck layer is used to reduce the feature dimension: the dimension of the feature vector output by the hidden layer connected to the bottleneck layer is higher than the dimension of the feature vector output by the bottleneck layer. One plausible rendering of this layout is sketched below.
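  • The sketch below is one plausible PyTorch reading of this layer layout: convolutional hidden layers around a low-dimensional bottleneck, mirrored by a deconvolutional decoder. The 64x64 three-channel input, the layer widths, and the 128-dimensional bottleneck are assumptions, not values given in the specification.

```python
import torch
import torch.nn as nn

class Encoder(nn.Module):
    """Input layer -> first hidden layer (convolutions) -> bottleneck layer."""
    def __init__(self, bottleneck_dim: int = 128):
        super().__init__()
        self.hidden = nn.Sequential(                   # first hidden layer
            nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
        )
        self.bottleneck = nn.Linear(64 * 16 * 16, bottleneck_dim)

    def forward(self, x: torch.Tensor) -> torch.Tensor:  # x: (N, 3, 64, 64)
        first_feature_vector = self.hidden(x).flatten(1)
        return self.bottleneck(first_feature_vector)      # encoding vector

class Decoder(nn.Module):
    """Second hidden layer -> output layer reconstructing the face image."""
    def __init__(self, bottleneck_dim: int = 128):
        super().__init__()
        self.expand = nn.Linear(bottleneck_dim, 64 * 16 * 16)
        self.output = nn.Sequential(     # second hidden layer + output layer
            nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(32, 3, 4, stride=2, padding=1), nn.Sigmoid(),
        )

    def forward(self, z: torch.Tensor) -> torch.Tensor:
        second_feature_vector = self.expand(z).view(-1, 64, 16, 16)
        return self.output(second_feature_vector)  # reconstructed face image
```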
  • In an embodiment of this specification, the feature extraction model based on the convolutional neural network may include an input layer, convolutional layers, fully connected layers, and an output layer, where the input layer is connected to the output of the decoder, the input layer is also connected to the convolutional layers, the convolutional layers are connected to the fully connected layers, and the fully connected layers are connected to the output layer.
  • The input layer may be used to receive the reconstructed face image data output by the decoder; the convolutional layers may be used to perform local feature extraction on the reconstructed face image data to obtain the local face feature vector of the user to be identified; and the fully connected layers may be used to generate the face feature vector of the user to be identified according to the local face feature vector.
  • The output layer may be used to generate a face classification result according to the face feature vector of the user to be identified output by the fully connected layer.
  • The face feature vector of the user to be identified may be the output vector of the fully connected layer adjacent to the output layer; or, when the feature extraction model based on the convolutional neural network contains multiple fully connected layers, it may be the output vector of a fully connected layer separated from the output layer by N network layers. This is not specifically limited. A sketch of such a model follows.
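  • A hedged sketch of such a feature extraction model: convolutional layers for local features, a fully connected layer whose output serves as the face feature vector, and a classification output layer. The layer widths, the 256-dimensional feature vector, and the number of identity classes are illustrative assumptions.

```python
import torch
import torch.nn as nn

class FeatureExtractor(nn.Module):
    """Input layer -> convolutional layers -> fully connected -> output layer."""
    def __init__(self, num_identities: int = 1000, feature_dim: int = 256):
        super().__init__()
        self.conv = nn.Sequential(                   # local feature extraction
            nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(4),
        )
        self.fc = nn.Linear(64 * 4 * 4, feature_dim)       # face feature vector
        self.output = nn.Linear(feature_dim, num_identities)  # classification

    def forward(self, reconstructed: torch.Tensor):
        local = self.conv(reconstructed).flatten(1)  # local face feature vector
        feature_vector = self.fc(local)              # used for recognition
        logits = self.output(feature_vector)         # face classification result
        return feature_vector, logits
```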
  • In an embodiment of this specification, the face feature extraction model may further include a user matching model, whose input may be connected with the output of the feature extraction model based on the convolutional neural network in the face feature extraction model.
  • After step 104, the method may further include: having the user matching model receive the face feature vector of the user to be identified and the face feature vector of a designated user, and generate, according to the vector distance between the two, output information indicating whether the user to be identified is the designated user, where the face feature vector of the designated user is obtained by processing the face image of the designated user with the encoder and the face feature extraction model.
  • In an embodiment of this specification, the vector distance between the face feature vector of the user to be identified and that of the designated user can indicate the similarity between the two. Specifically, when the vector distance is less than or equal to a threshold, it can be determined that the user to be identified and the designated user are the same user; when the vector distance is greater than the threshold, it can be determined that they are different users.
  • The threshold can be determined according to actual needs and is not specifically limited here.
  • In practical applications, the method in FIG. 1 may be used to generate both the face feature vector of the user to be identified and that of the designated user. Since the accuracy of face feature vectors generated by the method in FIG. 1 is good, this helps improve the accuracy of the user recognition result. The matching step is sketched below.
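  • A minimal sketch of the matching step, assuming Euclidean distance over the two feature vectors; the specification leaves the distance metric and the threshold value to the implementer.

```python
import torch

def is_same_user(candidate: torch.Tensor,
                 designated: torch.Tensor,
                 threshold: float) -> bool:
    """Return True when the vector distance is within the threshold."""
    distance = torch.norm(candidate - designated)  # Euclidean vector distance
    return bool(distance <= threshold)
```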
  • FIG. 3 is a schematic flowchart of a training method for a facial feature extraction model for privacy protection provided by an embodiment of this specification. From a program perspective, the process may be executed by a server or a program carried on a server. As shown in FIG. 3, the process may include steps 302 to 310.
  • Step 302: Obtain a first training sample set, in which the training samples are face images.
  • To protect user privacy, the training samples in the first training sample set are face images for which use rights have been obtained, for example, public face images in face databases or face pictures authorized by users, ensuring that the training process does not affect the privacy of users' face information.
  • In an embodiment of this specification, the training samples in the first training sample set may be multi-channel face images.
  • In this case, the single-channel image data of a face image may be determined first, and a multi-channel image may be generated from the single-channel image data to serve as a training sample in the first training sample set, with the image data of each channel identical to the single-channel image data, thereby ensuring the consistency of the training samples in the first training sample set.
  • Step 304: Use the first training sample set to train the initial autoencoder to obtain the trained autoencoder.
  • Step 304 may specifically include: for each training sample in the first training sample set, inputting the training sample into the initial autoencoder to obtain reconstructed face image data, and optimizing the model parameters of the initial autoencoder with the goal of minimizing the image reconstruction loss to obtain the trained autoencoder, where the image reconstruction loss is the difference between the reconstructed face image data and the training sample.
  • The input layer, the first hidden layer, and the bottleneck layer of the autoencoder constitute the encoder, and the second hidden layer and the output layer constitute the decoder.
  • The encoder can be used to encode a face image to obtain the encoding vector of the face image, and the decoder can decode the encoding vector generated by the encoder to obtain a reconstructed face image.
  • The function of each layer of the autoencoder may be the same as described for the embodiment of the method in FIG. 1, and is not repeated here. A sketch of this training step follows.
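  • A sketch of step 304, assuming a PyTorch DataLoader of face-image tensors and mean squared error as the image reconstruction loss; the loss function, optimizer, and learning rate are illustrative choices, and the `Encoder`/`Decoder` modules follow the earlier sketch.

```python
import torch
import torch.nn as nn

def train_autoencoder(encoder, decoder, loader, epochs: int = 10):
    """Minimize the difference between reconstructions and training samples."""
    params = list(encoder.parameters()) + list(decoder.parameters())
    optimizer = torch.optim.Adam(params, lr=1e-3)
    reconstruction_loss = nn.MSELoss()             # image reconstruction loss
    for _ in range(epochs):
        for faces in loader:                       # each batch: (N, C, H, W)
            optimizer.zero_grad()
            reconstructed = decoder(encoder(faces))
            loss = reconstruction_loss(reconstructed, faces)
            loss.backward()
            optimizer.step()
    return encoder, decoder
```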
  • Step 306: Obtain a second training sample set, in which the training samples are encoding vectors, an encoding vector being the vector data obtained by characterizing a face image with the encoder in the trained autoencoder.
  • In an embodiment of this specification, the training samples in the second training sample set may be vector data obtained by using the encoder in the trained autoencoder to characterize the face images of users who need privacy protection.
  • The users who need privacy protection can be determined according to actual needs, for example, the operating users and authenticated users of registered accounts in an application, or the users to be identified and the whitelisted users of an access control system based on face recognition technology.
  • In practical applications, the encoder in the trained autoencoder can be used in advance to generate and store the training samples in the second training sample set.
  • In step 306, it is then only necessary to extract the pre-generated training samples of the second training sample set from the database. Since these stored training samples are encoding vectors of users' face images, and an encoding vector cannot reflect the appearance of the user, the service provider's transmission, storage, and processing of the training samples will not affect the privacy of users' face information.
  • Step 308: Input the training samples in the second training sample set into the decoder of the initial face feature extraction model, so that the reconstructed face image data output by the decoder can be used to train the initial feature extraction model based on the convolutional neural network in the initial face feature extraction model, thereby obtaining the trained face feature extraction model. The initial face feature extraction model is obtained by locking the decoder and the initial feature extraction model, the decoder being the decoder in the trained autoencoder.
  • During this training, the model parameters of the decoder in the initial face feature extraction model are not optimized; only the model parameters of the initial feature extraction model based on the convolutional neural network are optimized.
  • Using the reconstructed face image data output by the decoder to train the initial feature extraction model may specifically include: using the initial feature extraction model to classify the reconstructed face image data to obtain the predicted value of the category label of the reconstructed face image data; obtaining the preset value of the category label for the reconstructed face image data; and optimizing the model parameters of the initial feature extraction model with the goal of minimizing the classification loss, where the classification loss is the difference between the predicted value and the preset value of the category label. A sketch of this step follows.
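  • The sketch below illustrates step 308 under the same assumptions, reusing the `Decoder` and `FeatureExtractor` sketches above: the trained decoder is frozen so its parameters are not optimized, and cross-entropy stands in for the classification loss between predicted and preset category labels.

```python
import torch
import torch.nn as nn

def train_feature_extractor(decoder, extractor, loader, epochs: int = 10):
    """Train only the CNN feature extractor behind a frozen decoder."""
    decoder.eval()
    for p in decoder.parameters():
        p.requires_grad_(False)                    # decoder stays locked/fixed
    optimizer = torch.optim.Adam(extractor.parameters(), lr=1e-3)
    classification_loss = nn.CrossEntropyLoss()
    for _ in range(epochs):
        for encoding, label in loader:             # encoding vectors + labels
            optimizer.zero_grad()
            reconstructed = decoder(encoding)      # reconstructed face data
            _, logits = extractor(reconstructed)   # predicted category label
            loss = classification_loss(logits, label)
            loss.backward()
            optimizer.step()
    return extractor
```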
  • Step 310: Generate a user feature extraction model for privacy protection according to the encoder and the trained face feature extraction model.
  • In the generated model, the input of the encoder receives the face image of the user to be identified, the output of the encoder is connected to the input of the decoder in the trained face feature extraction model, the output of the decoder is connected to the input of the feature extraction model based on the convolutional neural network in the trained face feature extraction model, and the output of that feature extraction model is the face feature vector of the user to be identified.
  • In an embodiment of this specification, the autoencoder and the initial face feature extraction model are trained separately, and the face feature extraction model for privacy protection is then built from the trained autoencoder and the trained face feature extraction model. Since the autoencoder needs no information other than the input image as supervision during training, the training cost of the face feature extraction model for privacy protection is reduced, making it economical and practical. The composition of the full model is sketched below.
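  • As a closing sketch, step 310 can be read as simple composition: the encoder runs on the client, and the locked decoder plus trained extractor form the server-side face feature extraction model. Module names follow the earlier sketches and remain assumptions.

```python
import torch

@torch.no_grad()
def extract_face_feature(encoder, decoder, extractor,
                         face_image: torch.Tensor) -> torch.Tensor:
    """End-to-end privacy-protecting feature extraction for one face image."""
    encoding = encoder(face_image)                 # client side: encode only
    reconstructed = decoder(encoding)              # server side, kept locked
    feature_vector, _ = extractor(reconstructed)   # face feature vector
    return feature_vector
```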
  • In practical applications, the user feature extraction model generated by the method in FIG. 3 can be applied to user recognition scenarios. After the model extracts a user's face feature vector, it is usually necessary to compare face feature vectors to generate the final user recognition result.
  • Before generating the user feature extraction model for privacy protection in step 310, the method may therefore further include: establishing a user matching model, which is used to generate, according to the vector distance between a first face feature vector of a user to be identified and a second face feature vector of a designated user, an output result indicating whether the user to be identified is the designated user. The first face feature vector is obtained by processing the face image of the user to be identified with the encoder and the trained face feature extraction model, and the second face feature vector is obtained by processing the face image of the designated user with the encoder and the trained face feature extraction model.
  • Step 310 may then specifically include: generating a user feature extraction model for privacy protection composed of the encoder, the trained face feature extraction model, and the user matching model.
  • FIG. 4 is a schematic structural diagram of a facial feature extraction device, corresponding to FIG. 1, provided by an embodiment of this specification.
  • The device uses a user feature extraction model for privacy protection.
  • The user feature extraction model may include an encoder and a face feature extraction model, where the face feature extraction model is a model obtained by locking a decoder and a feature extraction model based on a convolutional neural network, the encoder and the decoder form an autoencoder, the encoder is connected to the decoder in the face feature extraction model, and the decoder is connected to the feature extraction model.
  • The device may include: an input module 402, which may be used to input the face image of the user to be identified into the encoder to obtain the encoding vector of the face image output by the encoder, where the encoding vector is vector data obtained after characterizing the face image; and a face feature vector generation module 404, which may be used to have the decoder in the face feature extraction model, after receiving the encoding vector, output reconstructed face image data to the feature extraction model, so that the feature extraction model performs characterization processing on the reconstructed face image data and then outputs the face feature vector of the user to be identified.
  • The encoder may include the input layer, the first hidden layer, and the bottleneck layer of the autoencoder, and the decoder may include the second hidden layer and the output layer of the autoencoder.
  • The input layer of the encoder is connected to the first hidden layer, the first hidden layer is connected to the bottleneck layer, the bottleneck layer of the encoder is connected to the second hidden layer of the decoder, the second hidden layer is connected to the output layer, and the output layer is connected to the feature extraction model.
  • The input layer of the autoencoder may be used to receive the face image of the user to be identified; the first hidden layer may be used to encode the face image to obtain a first feature vector; the bottleneck layer may be used to perform dimensionality reduction on the first feature vector to obtain the encoding vector of the face image, where the number of dimensions of the encoding vector is less than that of the first feature vector; the second hidden layer may be used to decode the encoding vector to obtain a second feature vector; and the output layer may be used to generate reconstructed face image data according to the second feature vector.
  • The feature extraction model based on the convolutional neural network may include an input layer, a convolutional layer, and a fully connected layer, where the input layer is connected to the output of the decoder, the input layer is also connected to the convolutional layer, and the convolutional layer is connected to the fully connected layer.
  • The input layer of the feature extraction model may be used to receive the reconstructed face image data output by the decoder; the convolutional layer may be used to perform local feature extraction on the reconstructed face image data to obtain the local face feature vector of the user to be identified; and the fully connected layer may be used to generate the face feature vector of the user to be identified based on the local face feature vector.
  • The user feature extraction model may further include a user matching model connected to the feature extraction model. In that case, the device may further include:
  • a user matching module, configured to have the user matching model receive the face feature vector of the user to be identified and the face feature vector of a designated user, and generate, according to the vector distance between the two face feature vectors, output information indicating whether the user to be identified is the designated user, where the face feature vector of the designated user is obtained by processing the face image of the designated user with the encoder and the face feature extraction model.
  • FIG. 5 is a schematic structural diagram of a training device for a facial feature extraction model for privacy protection, corresponding to FIG. 3, provided by an embodiment of this specification. As shown in FIG. 5, the device may include the following modules.
  • the first obtaining module 502 is configured to obtain a first training sample set, and the training samples in the first training sample set are face images.
  • the first training module 504 is configured to use the first training sample set to train the initial autoencoder to obtain the trained autoencoder.
  • The second acquisition module 506 is configured to acquire a second training sample set, in which the training samples are encoding vectors, an encoding vector being the vector data obtained after a face image is characterized by the encoder in the trained autoencoder.
  • The second training module 508 is configured to input the training samples in the second training sample set into the decoder of the initial face feature extraction model, so as to use the reconstructed face image data output by the decoder to train the initial feature extraction model based on the convolutional neural network in the initial face feature extraction model, thereby obtaining the trained face feature extraction model; the initial face feature extraction model is obtained by locking the decoder and the initial feature extraction model, the decoder being the decoder in the trained autoencoder.
  • the user feature extraction model generation module 510 is configured to generate a user feature extraction model for privacy protection according to the encoder and the trained face feature extraction model.
  • Optionally, the first training module 504 may be specifically configured to: for each training sample in the first training sample set, input the training sample into the initial autoencoder to obtain reconstructed face image data, and optimize the model parameters of the initial autoencoder with the goal of minimizing the image reconstruction loss to obtain the trained autoencoder, where the image reconstruction loss is the difference between the reconstructed face image data and the training sample.
  • Optionally, using the reconstructed face image data output by the decoder to train the initial feature extraction model based on the convolutional neural network in the initial face feature extraction model may specifically include: using the initial feature extraction model to classify the reconstructed face image data to obtain the predicted value of the category label of the reconstructed face image data; obtaining the preset value of the category label for the reconstructed face image data; and optimizing the model parameters of the initial feature extraction model with the goal of minimizing the classification loss, where the classification loss is the difference between the predicted value and the preset value of the category label.
  • Optionally, the apparatus in FIG. 5 may further include: a user matching model establishment module, configured to establish a user matching model, which is used to generate, according to the vector distance between a first face feature vector of a user to be identified and a second face feature vector of a designated user, an output result indicating whether the user to be identified is the designated user. The first face feature vector is obtained by processing the face image of the user to be identified with the encoder and the trained face feature extraction model, and the second face feature vector is obtained by processing the face image of the designated user with the encoder and the trained face feature extraction model.
  • the user feature extraction model generation module 510 may be specifically used to generate a user feature extraction model for privacy protection composed of the encoder, the trained facial feature extraction model, and the user matching model.
  • The embodiment of this specification also provides a client device corresponding to the above method.
  • The client device may include: at least one processor; and a memory communicatively connected with the at least one processor, where the memory stores an image encoder and instructions executable by the at least one processor.
  • The image encoder is the encoder in an autoencoder, and the instructions are executed by the at least one processor so that the at least one processor can: input the face image of the user to be identified into the image encoder to obtain the encoding vector of the face image output by the image encoder, where the encoding vector is vector data obtained after characterizing the face image; and send the encoding vector to a server device, so that the server device uses a face feature extraction model to generate the face feature vector of the user to be identified according to the encoding vector, the face feature extraction model being a model obtained by locking the decoder in the autoencoder and a feature extraction model based on a convolutional neural network.
  • The client device can thus use the encoder of the autoencoder carried on it to generate the encoding vector of the face image of the user to be identified, and send that encoding vector to the server device for user identification without sending the face image itself. This avoids transmitting the face image of the user to be identified and ensures the privacy and security of that user's face information.
  • The embodiment of this specification also provides a server device corresponding to the above method.
  • The server device may include: at least one processor; and a memory communicatively connected with the at least one processor, where the memory stores a face feature extraction model, which is a model obtained by locking the decoder in an autoencoder and a feature extraction model based on a convolutional neural network. The memory also stores instructions executable by the at least one processor, and the instructions are executed by the at least one processor.
  • The server device can generate the face feature vector of the user to be identified from the encoding vector of that user's face image using the face feature extraction model carried on it, so the server device can perform user identification without obtaining the face image of the user to be identified. This not only avoids transmitting the face image but also prevents the server device from storing and processing it, improving the privacy and security of the face information of the user to be identified.
  • The embodiment of this specification also provides a training device for the facial feature extraction model for privacy protection corresponding to the method in FIG. 3.
  • The device may include: at least one processor; and a memory communicatively connected with the at least one processor, where the memory stores instructions executable by the at least one processor. The instructions are executed by the at least one processor so that the at least one processor can: obtain a first training sample set, in which the training samples are face images;
  • train the initial autoencoder using the first training sample set to obtain the trained autoencoder;
  • obtain a second training sample set, in which the training samples are encoding vectors, an encoding vector being the vector data obtained by characterizing a face image with the encoder in the trained autoencoder;
  • input the training samples in the second training sample set into the decoder of the initial face feature extraction model, so that the reconstructed face image data output by the decoder can be used to train the initial feature extraction model based on the convolutional neural network in the initial face feature extraction model, thereby obtaining a trained face feature extraction model, where the initial face feature extraction model is obtained by locking the decoder and the initial feature extraction model, the decoder being the decoder in the trained autoencoder;
  • and generate a user feature extraction model for privacy protection according to the encoder and the trained face feature extraction model.
  • An improvement of a technology can be clearly distinguished as a hardware improvement (for example, an improvement of a circuit structure such as a diode, a transistor, or a switch) or a software improvement (an improvement of a method flow).
  • The improvement of many of today's method flows can be regarded as a direct improvement of the hardware circuit structure: designers almost always obtain the corresponding hardware circuit structure by programming the improved method flow into the hardware circuit. Therefore, it cannot be said that an improvement of a method flow cannot be realized by a hardware entity module.
  • For example, such improvements can target a programmable logic device (PLD), such as a Field Programmable Gate Array (FPGA), programmed in a hardware description language (HDL). There are many HDLs, such as ABEL (Advanced Boolean Expression Language), AHDL (Altera Hardware Description Language), HDCal, JHDL, Lava, Lola, MyHDL, PALASM, and RHDL, with VHDL (Very-High-Speed Integrated Circuit Hardware Description Language) and Verilog among the most commonly used.
  • The controller can be implemented in any suitable manner.
  • For example, the controller can take the form of a microprocessor or processor together with a computer-readable medium storing computer-readable program code (such as software or firmware) executable by the (micro)processor, or the form of logic gates, switches, application-specific integrated circuits (ASICs), programmable logic controllers, and embedded microcontrollers.
  • Examples of controllers include, but are not limited to, the following microcontrollers: ARC625D, Atmel AT91SAM, Microchip PIC18F26K20, and Silicon Labs C8051F320; a memory controller can also be implemented as part of the control logic of a memory.
  • In addition to implementing the controller purely as computer-readable program code, it is entirely possible to program the method steps so that the controller realizes the same function in the form of logic gates, switches, application-specific integrated circuits, programmable logic controllers, embedded microcontrollers, and the like. Therefore, such a controller can be regarded as a hardware component, and the devices included in it for realizing various functions can also be regarded as structures within the hardware component. Further, a device for realizing various functions can be regarded both as a software module implementing the method and as a structure within a hardware component.
  • a typical implementation device is a computer.
  • The computer may be, for example, a personal computer, a laptop computer, a cell phone, a camera phone, a smart phone, a personal digital assistant, a media player, a navigation device, an email device, a game console, a tablet computer, a wearable device, or any combination of these devices.
  • One or more embodiments of this specification can be provided as a method, a system, or a computer program product. Therefore, one or more embodiments of this specification may take the form of a complete hardware embodiment, a complete software embodiment, or an embodiment combining software and hardware. Moreover, one or more embodiments of this specification may take the form of a computer program product implemented on one or more computer-usable storage media (including but not limited to disk storage, CD-ROM, optical storage, etc.) containing computer-usable program code.
  • These computer program instructions can be provided to the processor of a general-purpose computer, a special-purpose computer, an embedded processor, or other programmable data processing equipment to produce a machine, so that the instructions executed by the processor of the computer or other programmable data processing equipment produce a device that realizes the functions specified in one or more processes of a flowchart and/or one or more blocks of a block diagram.
  • These computer program instructions can also be stored in a computer-readable memory that can direct a computer or other programmable data processing equipment to work in a specific manner, so that the instructions stored in the computer-readable memory produce an article of manufacture including an instruction device that implements the functions specified in one or more processes of a flowchart and/or one or more blocks of a block diagram.
  • These computer program instructions can also be loaded onto a computer or other programmable data processing equipment, so that a series of operation steps are executed on the computer or other programmable equipment to produce computer-implemented processing, such that the instructions executed on the computer or other programmable equipment provide steps for implementing the functions specified in one or more processes of a flowchart and/or one or more blocks of a block diagram.
  • the computing device includes one or more processors (CPUs), input/output interfaces, network interfaces, and memory.
  • The memory may include non-permanent memory in a computer-readable medium, random access memory (RAM), and/or non-volatile memory, such as read-only memory (ROM) or flash memory (flash RAM). Memory is an example of a computer-readable medium.
  • Computer-readable media include permanent and non-permanent, removable and non-removable media, and information storage can be realized by any method or technology.
  • the information can be computer-readable instructions, data structures, program modules, or other data.
  • Examples of computer storage media include, but are not limited to, phase-change memory (PRAM), static random access memory (SRAM), dynamic random access memory (DRAM), other types of random access memory (RAM), read-only memory (ROM), electrically erasable programmable read-only memory (EEPROM), flash memory or other memory technology, CD-ROM, digital versatile discs (DVD) or other optical storage, magnetic cassettes, magnetic tape storage or other magnetic storage devices, or any other non-transmission media that can be used to store information accessible by a computing device. As defined herein, computer-readable media do not include transitory media, such as modulated data signals and carrier waves.
  • One or more embodiments of this specification may be described in the general context of computer-executable instructions, such as program modules, executed by a computer.
  • Program modules include routines, programs, objects, components, data structures, etc. that perform specific tasks or implement specific abstract data types.
  • One or more embodiments of this specification can also be practiced in distributed computing environments, where tasks are performed by remote processing devices connected through a communication network. In a distributed computing environment, program modules can be located in local and remote computer storage media, including storage devices.

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • Health & Medical Sciences (AREA)
  • Data Mining & Analysis (AREA)
  • General Health & Medical Sciences (AREA)
  • General Physics & Mathematics (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Artificial Intelligence (AREA)
  • Software Systems (AREA)
  • Evolutionary Computation (AREA)
  • Computational Linguistics (AREA)
  • Computing Systems (AREA)
  • Molecular Biology (AREA)
  • Mathematical Physics (AREA)
  • Biophysics (AREA)
  • Biomedical Technology (AREA)
  • Bioethics (AREA)
  • Evolutionary Biology (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Medical Informatics (AREA)
  • Databases & Information Systems (AREA)
  • Computer Hardware Design (AREA)
  • Computer Security & Cryptography (AREA)
  • Image Analysis (AREA)
  • Image Processing (AREA)

Abstract

Disclosed are a method, an apparatus and a device for extracting facial features with privacy protection. The method comprises the steps of: inputting a face image of a user to be identified into an encoder to obtain an encoding vector of the face image output by the encoder, the encoding vector being vector data obtained after feature processing is performed on the face image (102); and, after a decoder in a face feature extraction model receives the encoding vector, outputting reconstructed face image data to a feature extraction model within the face feature extraction model, such that after the feature extraction model performs feature processing on the reconstructed face image data, a face feature vector of the user to be identified is output (104).
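For orientation, the following is a minimal sketch, in PyTorch, of the pipeline the abstract describes: a user-side encoder maps a face image to an encoding vector, and a face feature extraction model (a decoder followed by a convolutional feature extractor) maps that vector to a face feature vector, so that only the encoding vector, not the raw face image, needs to leave the user's side. All class names, layer choices, and dimensions (the 64x64 input, latent_dim, feature_dim) are illustrative assumptions and are not specified by the patent.

    # Hypothetical sketch of the encoder / face-feature-extraction-model split;
    # architecture details are assumptions, not taken from the patent.
    import torch
    import torch.nn as nn

    class Encoder(nn.Module):
        """Maps a face image to an encoding vector (step 102)."""
        def __init__(self, latent_dim: int = 128):
            super().__init__()
            self.net = nn.Sequential(
                nn.Conv2d(3, 32, kernel_size=4, stride=2, padding=1),   # 64x64 -> 32x32
                nn.ReLU(),
                nn.Conv2d(32, 64, kernel_size=4, stride=2, padding=1),  # 32x32 -> 16x16
                nn.ReLU(),
                nn.Flatten(),
                nn.Linear(64 * 16 * 16, latent_dim),
            )

        def forward(self, x: torch.Tensor) -> torch.Tensor:
            return self.net(x)

    class Decoder(nn.Module):
        """Reconstructs face image data from the encoding vector."""
        def __init__(self, latent_dim: int = 128):
            super().__init__()
            self.fc = nn.Linear(latent_dim, 64 * 16 * 16)
            self.net = nn.Sequential(
                nn.ConvTranspose2d(64, 32, kernel_size=4, stride=2, padding=1),  # 16 -> 32
                nn.ReLU(),
                nn.ConvTranspose2d(32, 3, kernel_size=4, stride=2, padding=1),   # 32 -> 64
                nn.Sigmoid(),
            )

        def forward(self, z: torch.Tensor) -> torch.Tensor:
            h = self.fc(z).view(-1, 64, 16, 16)
            return self.net(h)

    class FaceFeatureExtractionModel(nn.Module):
        """Decoder plus CNN feature extractor, kept together away from the user side."""
        def __init__(self, latent_dim: int = 128, feature_dim: int = 256):
            super().__init__()
            self.decoder = Decoder(latent_dim)
            self.extractor = nn.Sequential(
                nn.Conv2d(3, 64, kernel_size=3, stride=2, padding=1),
                nn.ReLU(),
                nn.AdaptiveAvgPool2d(1),
                nn.Flatten(),
                nn.Linear(64, feature_dim),
            )

        def forward(self, z: torch.Tensor) -> torch.Tensor:
            reconstructed = self.decoder(z)       # reconstructed face image data
            return self.extractor(reconstructed)  # face feature vector (step 104)

    encoder = Encoder()
    model = FaceFeatureExtractionModel()
    face_image = torch.rand(1, 3, 64, 64)          # stand-in for a face image of a user
    encoding_vector = encoder(face_image)          # only this vector is transmitted
    feature_vector = model(encoding_vector)
    print(feature_vector.shape)                    # torch.Size([1, 256])

In such a split, the reconstructed image never has to be exposed outside the face feature extraction model, which is consistent with the privacy aim the abstract states: the raw face image stays with the user, and downstream recognition works from the face feature vector.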
PCT/CN2020/140574 2020-03-19 2020-12-29 Method, apparatus and device for extracting facial features WO2021184898A1 (fr)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202010197694.8A CN111401272B (zh) 2020-03-19 2020-03-19 Face feature extraction method, apparatus and device
CN202010197694.8 2020-03-19

Publications (1)

Publication Number Publication Date
WO2021184898A1 (fr)

Family

ID=71432637

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2020/140574 WO2021184898A1 (fr) Method, apparatus and device for extracting facial features

Country Status (2)

Country Link
CN (2) CN111401272B (fr)
WO (1) WO2021184898A1 (fr)

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114821751A (zh) * 2022-06-27 2022-07-29 北京瑞莱智慧科技有限公司 Image recognition method, apparatus and system, and storage medium
CN114842544A (zh) * 2022-07-04 2022-08-02 江苏布罗信息技术有限公司 Intelligent face recognition method and system suitable for patients with facial paralysis
CN116844217A (zh) * 2023-08-30 2023-10-03 成都睿瞳科技有限责任公司 Image processing system and method for generating face data

Families Citing this family (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111401272B (zh) * 2020-03-19 2021-08-24 支付宝(杭州)信息技术有限公司 Face feature extraction method, apparatus and device
CN111401273B (zh) * 2020-03-19 2022-04-29 支付宝(杭州)信息技术有限公司 User feature extraction system and device for privacy protection
CN111783965A (zh) * 2020-08-14 2020-10-16 支付宝(杭州)信息技术有限公司 Method, apparatus and system for biometric recognition, and electronic device
CN112016480B (zh) * 2020-08-31 2024-05-28 中移(杭州)信息技术有限公司 Face feature representation method and system, electronic device, and storage medium
CN112949545B (zh) * 2021-03-17 2022-12-30 中国工商银行股份有限公司 Face image recognition method and apparatus, computing device, and medium
CN113657350A (zh) * 2021-05-12 2021-11-16 支付宝(杭州)信息技术有限公司 Face image processing method and apparatus
CN113657498B (zh) * 2021-08-17 2023-02-10 展讯通信(上海)有限公司 Biometric feature extraction method, training method, authentication method, apparatus, and device
CN113946858B (zh) * 2021-12-20 2022-03-18 湖南丰汇银佳科技股份有限公司 Identity security authentication method and system based on privacy-preserving data computation
CN115190217B (zh) * 2022-07-07 2024-03-26 国家计算机网络与信息安全管理中心 Data security encryption method and apparatus incorporating an autoencoder network

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109769080A (zh) * 2018-12-06 2019-05-17 西北大学 Encrypted image cracking method and system based on deep learning
US20190171908A1 (en) * 2017-12-01 2019-06-06 The University Of Chicago Image Transformation with a Hybrid Autoencoder and Generative Adversarial Network Machine Learning Architecture
CN110598580A (zh) * 2019-08-25 2019-12-20 南京理工大学 Face liveness detection method
CN111401272A (zh) * 2020-03-19 2020-07-10 支付宝(杭州)信息技术有限公司 Face feature extraction method, apparatus and device

Family Cites Families (14)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104866900B (zh) * 2015-01-29 2018-01-19 北京工业大学 Deconvolutional neural network training method
JP6318211B2 (ja) * 2016-10-03 2018-04-25 株式会社Preferred Networks Data compression device, data reproduction device, data compression method, data reproduction method, and data transfer method
CN107220594B (zh) * 2017-05-08 2020-06-12 桂林电子科技大学 Face pose reconstruction and recognition method based on similarity-preserving stacked autoencoders
US11171977B2 (en) * 2018-02-19 2021-11-09 Nec Corporation Unsupervised spoofing detection from traffic data in mobile networks
CN108537120A (zh) * 2018-03-06 2018-09-14 安徽电科恒钛智能科技有限公司 Face recognition method and system based on deep learning
CN108664967B (zh) * 2018-04-17 2020-08-25 上海媒智科技有限公司 Visual saliency prediction method and system for multimedia pages
EP3834129A1 (fr) * 2018-08-10 2021-06-16 Leidos Security Detection & Automation, Inc. Systems and methods for image processing
CN109117801A (zh) * 2018-08-20 2019-01-01 深圳壹账通智能科技有限公司 Face recognition method, apparatus and terminal, and computer-readable storage medium
CN109495476B (zh) * 2018-11-19 2020-11-20 中南大学 Differential privacy protection method and system for data streams based on edge computing
CN110147721B (zh) * 2019-04-11 2023-04-18 创新先进技术有限公司 Three-dimensional face recognition method, model training method, and apparatus
CN110321777B (zh) * 2019-04-25 2023-03-28 重庆理工大学 Face recognition method based on stacked convolutional sparse denoising autoencoders
CN110310351B (zh) * 2019-07-04 2023-07-21 北京信息科技大学 Sketch-based automatic generation method for three-dimensional human skeleton animation
CN110766048A (zh) * 2019-09-18 2020-02-07 平安科技(深圳)有限公司 Image content recognition method and apparatus, computer device, and storage medium
CN110826056B (zh) * 2019-11-11 2024-01-30 南京工业大学 Recommender system attack detection method based on attention convolutional autoencoders

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20190171908A1 (en) * 2017-12-01 2019-06-06 The University Of Chicago Image Transformation with a Hybrid Autoencoder and Generative Adversarial Network Machine Learning Architecture
CN109769080A (zh) * 2018-12-06 2019-05-17 西北大学 Encrypted image cracking method and system based on deep learning
CN110598580A (zh) * 2019-08-25 2019-12-20 南京理工大学 Face liveness detection method
CN111401272A (zh) * 2020-03-19 2020-07-10 支付宝(杭州)信息技术有限公司 Face feature extraction method, apparatus and device

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
LIU JINGJING: "Image Encryption Based on Variational Auto-encoder Generative Models", CHINESE MASTER'S THESES FULL-TEXT DATABASE, TIANJIN POLYTECHNIC UNIVERSITY, CN, 15 January 2019 (2019-01-15), XP055852584, ISSN: 1674-0246 *

Cited By (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114821751A (zh) * 2022-06-27 2022-07-29 北京瑞莱智慧科技有限公司 Image recognition method, apparatus and system, and storage medium
CN114842544A (zh) * 2022-07-04 2022-08-02 江苏布罗信息技术有限公司 Intelligent face recognition method and system suitable for patients with facial paralysis
CN114842544B (zh) * 2022-07-04 2022-09-06 江苏布罗信息技术有限公司 Intelligent face recognition method and system suitable for patients with facial paralysis
CN116844217A (zh) * 2023-08-30 2023-10-03 成都睿瞳科技有限责任公司 Image processing system and method for generating face data
CN116844217B (zh) * 2023-08-30 2023-11-14 成都睿瞳科技有限责任公司 Image processing system and method for generating face data

Also Published As

Publication number Publication date
CN111401272A (zh) 2020-07-10
CN111401272B (zh) 2021-08-24
CN113657352A (zh) 2021-11-16

Similar Documents

Publication Publication Date Title
WO2021184898A1 (fr) Method, apparatus and device for extracting facial features
WO2021184976A1 (fr) User feature extraction system and device for privacy protection
CN108509915B (zh) Method and apparatus for generating a face recognition model
WO2021238956A1 (fr) Identity verification method, apparatus and device based on privacy protection
US20220172518A1 (en) Image recognition method and apparatus, computer-readable storage medium, and electronic device
US10984225B1 (en) Masked face recognition
CN112000940B (zh) User identification method, apparatus and device under privacy protection
CN111368795B (zh) Face feature extraction method, apparatus and device
CN112084476A (zh) Biometric identity verification method, client, server, device, and system
CN115359219A (zh) Avatar processing method and apparatus for a virtual world
WO2020220212A1 (fr) Biometric feature recognition method and electronic device
CN116994188A (zh) Action recognition method and apparatus, electronic device, and storage medium
CN111242105A (zh) User identification method, apparatus and device
Wang et al. Multi-format speech biohashing based on energy to zero ratio and improved lp-mmse parameter fusion
CN112395448A (zh) Face retrieval method and apparatus
CN113239852B (zh) Privacy-preserving image processing method, apparatus and device
CN115618375A (zh) Service execution method and apparatus, storage medium, and electronic device
CN111860212B (zh) Super-resolution method for face images, apparatus, device, and storage medium
CN115048661A (zh) Model processing method, apparatus and device
CN114662144A (zh) Biometric detection method, apparatus and device
An et al. Verifiable speech retrieval algorithm based on KNN secure hashing
CN117874706B (zh) Multimodal knowledge distillation learning method and apparatus
CN117612269A (zh) Biometric attack detection method, apparatus and device
CN114882290A (zh) Authentication method, training method, apparatus and device
CN116052287A (zh) Liveness detection method and apparatus, storage medium, and electronic device

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 20926282

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 20926282

Country of ref document: EP

Kind code of ref document: A1