WO2021184976A1 - User feature extraction system and device for privacy protection - Google Patents
User feature extraction system and device for privacy protection
- Publication number
- WO2021184976A1 (PCT application PCT/CN2021/074246)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- layer
- user
- vector
- face image
- encoder
- Prior art date
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data
- G06V40/10—Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
- G06V40/16—Human faces, e.g. facial parts, sketches or expressions
- G06V40/168—Feature extraction; Face representation
- G06V40/171—Local features and components; Facial parts; Occluding parts, e.g. glasses; Geometrical relationships
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F21/00—Security arrangements for protecting computers, components thereof, programs or data against unauthorised activity
- G06F21/60—Protecting data
- G06F21/62—Protecting access to data via a platform, e.g. using keys or access control rules
- G06F21/6218—Protecting access to data via a platform, e.g. using keys or access control rules to a system of files or objects, e.g. local or distributed file system or database
- G06F21/6245—Protecting personal data, e.g. for financial or medical purposes
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/40—Extraction of image or video features
- G06V10/44—Local feature extraction by analysis of parts of the pattern, e.g. by detecting edges, contours, loops, corners, strokes or intersections; Connectivity analysis, e.g. of connected components
- G06V10/443—Local feature extraction by analysis of parts of the pattern, e.g. by detecting edges, contours, loops, corners, strokes or intersections; Connectivity analysis, e.g. of connected components by matching or filtering
- G06V10/449—Biologically inspired filters, e.g. difference of Gaussians [DoG] or Gabor filters
- G06V10/451—Biologically inspired filters, e.g. difference of Gaussians [DoG] or Gabor filters with interaction between the filter responses, e.g. cortical complex cells
- G06V10/454—Integrating the filters into a hierarchical structure, e.g. convolutional neural networks [CNN]
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data
- G06V40/10—Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data
- G06V40/50—Maintenance of biometric data or enrolment thereof
- G06V40/53—Measures to keep reference information secret, e.g. cancellable biometrics
Abstract
The embodiments of this specification disclose a user feature extraction system and device for privacy protection. The solution includes: a first device carrying the encoder of an autoencoder, and a second device carrying a face feature extraction model for privacy protection. The encoder is connected to the face feature extraction model; the input of the encoder is the face image of a user to be identified, and its output is an encoding vector of the face image, the encoding vector being vector data obtained by performing characterization processing on the face image. After receiving the encoding vector, the face feature extraction model outputs the face feature vector of the user to be identified.
Description
One or more embodiments of this specification relate to the field of computer technology, and in particular to a user feature extraction system and device for privacy protection.
With the development of computer technology and optical imaging technology, user identification based on face recognition is becoming increasingly common. At present, the face image of a user to be identified collected by a client device usually needs to be sent to a server device, so that the server device can extract a face feature vector from the face image and generate a user identification result based on that feature vector. Because the face image of the user to be identified is sensitive user information, this approach of sending the face image to another device for user feature extraction carries a risk of leaking sensitive user information.
Given this, how to extract a user's facial features while guaranteeing the privacy of the user's face information has become a technical problem demanding an urgent solution.
Summary of the Invention
In view of this, one or more embodiments of this specification provide a user feature extraction method and device for privacy protection, used to extract a user's facial features while guaranteeing the privacy of the user's face information.
An embodiment of this specification provides a user feature extraction system for privacy protection, including a first device and a second device. The first device carries the encoder of an autoencoder, and the second device carries a face feature extraction model for privacy protection. The encoder is connected to the face feature extraction model; the input of the encoder is the face image of a user to be identified, and its output is an encoding vector of the face image, the encoding vector being vector data obtained by performing characterization processing on the face image. After receiving the encoding vector, the face feature extraction model outputs the face feature vector of the user to be identified.
An embodiment of this specification provides a client device carrying the encoder of an autoencoder. The client device includes: at least one processor; and a memory communicatively connected to the at least one processor. The memory stores instructions executable by the at least one processor, and the instructions are executed by the at least one processor to enable the at least one processor to: after receiving the face image of a user to be identified with the encoder, output an encoding vector of the face image of the user to be identified, the encoding vector being vector data obtained by performing characterization processing on the face image; and send the encoding vector to a server device.
An embodiment of this specification provides a server device carrying the encoder of an autoencoder. The server device includes: at least one processor; and a memory communicatively connected to the at least one processor. The memory stores instructions executable by the at least one processor, and the instructions are executed by the at least one processor to enable the at least one processor to: acquire the face image of a user to be identified collected by a terminal; after receiving the face image with the encoder, output an encoding vector of the face image of the user to be identified, the encoding vector being vector data obtained by performing characterization processing on the face image; and send the encoding vector to another server device.
An embodiment of this specification provides a server device carrying a face feature extraction model for privacy protection. The server device includes at least one processor and a memory communicatively connected to the at least one processor. The memory stores instructions executable by the at least one processor, and the instructions are executed by the at least one processor to enable the at least one processor to: acquire an encoding vector of the face image of a user to be identified, the encoding vector being vector data obtained by performing characterization processing on the face image with the encoder of an autoencoder; and input the encoding vector into the face feature extraction model to obtain the face feature vector of the user to be identified output by the face feature extraction model.
An embodiment of this specification achieves the following beneficial effect: transmitting, storing, or using the encoding vector that the encoder of the autoencoder generates from a face image does not affect the privacy or security of the user's face information. The second device can therefore obtain the encoding vector of the face image of the user to be identified from the first device and generate the user's face feature vector from that encoding vector, without ever acquiring the face image itself, so the user's face feature vector can be extracted while guaranteeing the privacy and security of the user's face information.
The accompanying drawings described here are provided for a further understanding of one or more embodiments of this specification and constitute a part of them; the illustrative embodiments of this specification and their descriptions serve to explain one or more embodiments of this specification and do not unduly limit them. In the drawings:
Figure 1 is a schematic structural diagram of a user feature extraction system for privacy protection provided by an embodiment of this specification;
Figure 2 is a schematic structural diagram of the models carried by a user feature extraction system provided by an embodiment of this specification;
Figure 3 is a schematic structural diagram of a client device provided by an embodiment of this specification;
Figure 4 is a schematic structural diagram of a server device provided by an embodiment of this specification;
Figure 5 is a schematic structural diagram of a server device provided by an embodiment of this specification.
To make the objectives, technical solutions, and advantages of one or more embodiments of this specification clearer, the technical solutions are described below clearly and completely with reference to specific embodiments of this specification and the corresponding drawings. Obviously, the described embodiments are only some, not all, of the embodiments of this specification. All other embodiments obtained by a person of ordinary skill in the art based on the embodiments in this specification without creative effort fall within the protection scope of one or more embodiments of this specification.
The technical solutions provided by the embodiments of this specification are described in detail below with reference to the accompanying drawings.
In the prior art, when performing user identification based on face recognition, the face image of the user to be identified usually has to be sent to a service provider so that the provider can extract a face feature vector from it and identify the user based on that vector. Because this approach requires the service provider to acquire, store, or process the user's face image, it easily compromises the privacy and security of the user's face information.
Moreover, when extracting a face feature vector from a user's face image today, the image is usually preprocessed before the feature vector is extracted. For example, in face recognition methods based on principal component analysis (PCA), principal component information is first extracted from the face picture and part of the detail information is discarded, and the face feature vector is then generated from the principal component information. Face feature vectors generated this way suffer from a loss of facial feature information, so the accuracy of the feature vectors extracted at present is also poor.
To remedy these deficiencies of the prior art, this solution provides the following embodiments:
Figure 1 is a schematic structural diagram of a user feature extraction system for privacy protection provided by an embodiment of this specification. As shown in Figure 1, the user feature extraction system 101 for privacy protection may include a first device 102 and a second device 103, with a communication connection between them. The first device 102 may carry the encoder 104 of an autoencoder, and the second device may carry a face feature extraction model 105 for privacy protection.
In the embodiments of this specification, the encoder of the autoencoder is connected to the face feature extraction model. The input of the encoder of the autoencoder is the face image of the user to be identified, and its output is an encoding vector of the face image, the encoding vector being vector data obtained by performing characterization processing on the face image. After receiving the encoding vector, the face feature extraction model outputs the face feature vector of the user to be identified.
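For illustration only, the minimal sketch below (PyTorch) shows this division of labor, with only the encoding vector crossing the boundary between the two devices. The 112×112 input size, the layer widths, and the 128-dimensional encoding vector are assumptions for the sketch; the embodiments do not prescribe a particular implementation, and the feature extraction model is shown as an opaque stand-in module.

```python
import torch
from torch import nn

# Hypothetical encoder on the first device: face image in, encoding vector out.
encoder = nn.Sequential(
    nn.Flatten(),                    # 3x112x112 face image -> 37632-dim vector
    nn.Linear(3 * 112 * 112, 512),
    nn.ReLU(),
    nn.Linear(512, 128),             # bottleneck: 128-dim encoding vector
)

# Hypothetical face feature extraction model on the second device.
feature_model = nn.Sequential(
    nn.Linear(128, 256),
    nn.ReLU(),
    nn.Linear(256, 128),             # 128-dim face feature vector
)

face_image = torch.rand(1, 3, 112, 112)   # stand-in for a captured face image
encoding_vector = encoder(face_image)      # only this crosses the device boundary
face_feature_vector = feature_model(encoding_vector)
print(face_feature_vector.shape)           # torch.Size([1, 128])
```

The face image itself never leaves the first device; the second device works exclusively on the low-dimensional encoding vector.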
In the embodiments of this specification, when using various applications, a user usually needs to register an account with each application. When the user logs in to or unlocks the registered account, or uses it to make payments, the operating user of the account (i.e., the user to be identified) usually must be identified, and only after the user to be identified is confirmed to be the authenticated user of the account (i.e., the designated user) is that user allowed to perform subsequent operations. Likewise, in the scenario where a user needs to pass through an access control system, the user about to pass usually must be identified, and only after that user (i.e., the user to be identified) is confirmed to be a whitelisted user of the access control system (i.e., a designated user) is the user allowed through. When identifying the user to be identified based on face recognition, the user's face feature vector usually has to be extracted.
In the embodiments of this specification, the first device may be a client device (for example, a terminal device carrying a designated application, an access control device, etc.), and the second device may be the server device corresponding to that client device (for example, the server of the designated application, the server of the access control device, etc.).
Alternatively, when the client device cannot communicate directly with its corresponding server device, the first device may be another server device. In that case, the first device may obtain the face image of the user to be identified from the client device, generate the encoding vector of the face image with the encoder it carries, and send the encoding vector to the server device corresponding to the client device (i.e., the second device). The second device may then generate the face feature vector of the user to be identified from the encoding vector obtained from the first device and perform the user identification operation.
In the embodiments of this specification, the country where the first device is located may be regarded as a first country, and the country where the second device is located as a second country; alternatively, the country where the company owning the second device is registered may be the second country. The first country and the second country may be the same country or different countries. With the system in Figure 1, the second device can extract the user's face feature vector from the encoding vector of the face image sent by the first device, without acquiring the face image of the user to be identified, thereby ensuring the privacy and security of that user's face information. At the same time, this solves the problem of a first country that does not allow the domestic or cross-border transmission of users' face images but still needs to identify users.
In the embodiments of this specification, the face feature extraction model can be implemented in several ways.
First implementation: the face feature extraction model may be a model obtained by locking the decoder of the autoencoder together with a feature extraction model based on a convolutional neural network.
Figure 2 is a schematic structural diagram of the models carried by a user feature extraction system provided by an embodiment of this specification. As shown in Figure 2, the encoder 104 of the autoencoder in the first device is connected to the decoder 201 of the autoencoder in the second device. After receiving the encoding vector output by the encoder, the decoder 201 outputs reconstructed face image data. The feature extraction model 202 based on a convolutional neural network in the second device is connected to the decoder 201; after receiving the reconstructed face image data, it outputs the face feature vector of the user to be identified.
In the embodiments of this specification, encryption software can be used to lock the decoder of the autoencoder and the CNN-based feature extraction model; alternatively, the decoder and the CNN-based feature extraction model can be stored in a secure hardware module of the device, so that users cannot read the reconstructed face image data output by the decoder, thereby protecting the privacy of users' face information. There are many ways to lock the decoder of the autoencoder and the feature extraction model, and no specific limitation is placed on them; it is only necessary to ensure that the reconstructed face image data output by the decoder of the autoencoder is used securely.
In practice, after a service provider or another user has been granted permission to read the reconstructed face image data of the user to be identified, they may use that permission to obtain the reconstructed face image data output by the decoder in the face feature extraction model, which helps improve data utilization.
In the embodiments of this specification, because the reconstructed face image data output by the decoder in the face feature extraction model is highly similar to the face image of the user to be identified, the CNN-based feature extraction model in that model can extract the user's face feature vector from the reconstructed face image with good accuracy.
Moreover, because the CNN-based feature extraction model in the face feature extraction model is used to extract a face feature vector from the reconstructed face image, it can be implemented with existing CNN-based face recognition models such as DeepFace, FaceNet, MTCNN, and RetinaFace. The face feature extraction model therefore has good compatibility.
The CNN-based feature extraction model may include an input layer, a convolutional layer, a fully connected layer, and an output layer. The input layer is connected to the output of the decoder and also to the convolutional layer; the convolutional layer is connected to the fully connected layer; and the fully connected layer is connected to the output layer.
The input layer of the CNN-based feature extraction model may be used to receive the reconstructed face image data output by the decoder.
The convolutional layer may be used to perform local feature extraction on the reconstructed face image data to obtain local facial feature vectors of the user to be identified.
The fully connected layer may be used to generate the face feature vector of the user to be identified from the local facial feature vectors.
The output layer may be used to generate a face classification result from the face feature vector of the user to be identified output by the fully connected layer.
In practice, the face feature vector of the user to be identified may be the output vector of the fully connected layer adjacent to the output layer. When the CNN-based feature extraction model contains multiple fully connected layers, the face feature vector of the user to be identified may also be the output vector of a fully connected layer separated from the output layer by N network layers; no specific limitation is placed on this.
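For illustration only, a minimal sketch of such a CNN-based extractor follows (PyTorch). The channel counts, kernel sizes, number of classes, and 128-dimensional feature width are assumptions; in practice an existing model such as FaceNet could fill this role. The face feature vector is read from the fully connected layer adjacent to the classification output layer:

```python
import torch
from torch import nn

class CNNFeatureExtractor(nn.Module):
    """Hypothetical CNN extractor: reconstructed face image -> face feature vector."""

    def __init__(self, num_classes: int = 1000):
        super().__init__()
        self.conv = nn.Sequential(   # convolutional layers: local feature extraction
            nn.Conv2d(3, 32, kernel_size=3, stride=2, padding=1),
            nn.ReLU(),
            nn.Conv2d(32, 64, kernel_size=3, stride=2, padding=1),
            nn.ReLU(),
            nn.AdaptiveAvgPool2d(4),
        )
        self.fc = nn.Linear(64 * 4 * 4, 128)        # fully connected layer: feature vector
        self.output = nn.Linear(128, num_classes)   # output layer: face classification,
                                                    # used during training, not called below

    def forward(self, reconstructed_image: torch.Tensor) -> torch.Tensor:
        local_features = self.conv(reconstructed_image).flatten(1)
        face_feature_vector = self.fc(local_features)
        return face_feature_vector   # output of the FC layer adjacent to the output layer

model = CNNFeatureExtractor()
features = model(torch.rand(1, 3, 112, 112))   # stand-in reconstructed face image
print(features.shape)                          # torch.Size([1, 128])
```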
In the embodiments of this specification, the encoder in the first device and the decoder in the second device may together form an autoencoder. The encoder may include the input layer, first hidden layer, and bottleneck layer of the autoencoder; the decoder may include the second hidden layer and output layer of the autoencoder.
The input layer of the encoder is connected to the first hidden layer, the first hidden layer is connected to the bottleneck layer, the bottleneck layer of the encoder is connected to the second hidden layer of the decoder, the second hidden layer is connected to the output layer, and the output layer is connected to the input of the CNN-based feature extraction model.
The input layer may be used to receive the face image of the user to be identified.
The first hidden layer may be used to encode the face image to obtain a first feature vector.
The bottleneck layer may be used to reduce the dimensionality of the first feature vector to obtain the encoding vector of the face image, the number of dimensions of the encoding vector being smaller than that of the first feature vector.
The second hidden layer may be used to decode the encoding vector to obtain a second feature vector.
The output layer may be used to generate reconstructed face image data from the second feature vector.
In the embodiments of this specification, because the encoder of the autoencoder must encode the image and the decoder must generate the reconstructed face image, the first hidden layer and the second hidden layer may each include multiple convolutional layers to guarantee the quality of encoding and decoding, and may also include pooling layers and fully connected layers. The bottleneck layer can be used to reduce the feature dimensionality: the feature vectors output by the hidden layers connected to the bottleneck layer all have a higher dimensionality than the feature vector output by the bottleneck layer.
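For illustration only, the sketch below shows one way to split such an autoencoder into the encoder (input layer, first hidden layer, bottleneck layer) deployed on the first device and the decoder (second hidden layer, output layer) deployed on the second device. All layer widths, the 112×112 image size, and the 128-dimensional bottleneck are assumptions:

```python
import torch
from torch import nn

class Encoder(nn.Module):
    """First device: face image -> low-dimensional encoding vector."""
    def __init__(self, dim: int = 128):
        super().__init__()
        self.hidden = nn.Sequential(                    # first hidden layer (convolutional)
            nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
        )
        self.bottleneck = nn.Linear(64 * 28 * 28, dim)  # bottleneck: dimensionality reduction

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.bottleneck(self.hidden(x).flatten(1))

class Decoder(nn.Module):
    """Second device: encoding vector -> reconstructed face image."""
    def __init__(self, dim: int = 128):
        super().__init__()
        self.hidden = nn.Linear(dim, 64 * 28 * 28)      # second hidden layer
        self.output = nn.Sequential(                    # output layer (deconvolutional)
            nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(32, 3, 4, stride=2, padding=1), nn.Sigmoid(),
        )

    def forward(self, z: torch.Tensor) -> torch.Tensor:
        return self.output(self.hidden(z).view(-1, 64, 28, 28))

# Training minimizes the reconstruction error between input and output images:
encoder, decoder = Encoder(), Decoder()
image = torch.rand(1, 3, 112, 112)
reconstruction = decoder(encoder(image))
loss = nn.functional.mse_loss(reconstruction, image)
```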
Second implementation: the face feature extraction model carried by the second device may be a deep neural network (DNN) model.
In the embodiments of this specification, because the training objective of the autoencoder is to minimize the difference between the reconstructed face image and the original face image, rather than to classify users' faces, directly using the encoding vector extracted by the encoder as the face feature vector of the user to be identified for user identification would hurt the accuracy of the identification result.
A deep neural network model, by contrast, can be used for classification. Therefore, when the encoding vector of the face image of the user to be identified is fed into the deep neural network model, the model's output vector can serve as the user's face feature vector. Performing user identification with the face feature vector output by the deep neural network model can improve the accuracy of the identification result.
In the embodiments of this specification, the deep neural network model carried by the second device may be either fully connected or not fully connected. In a fully connected deep neural network model, every neuron in layer i is connected to every neuron in layer i+1; in a non-fully-connected model, a neuron in layer i may be connected to only some of the neurons in layer i+1. Compared with a non-fully-connected model, a fully connected model can extract more facial feature information, but its computation cost is also higher, which easily affects efficiency. The deep neural network model carried by the second device can therefore be chosen according to actual needs.
In the embodiments of this specification, the fully connected deep neural network model may include an input layer, multiple fully connected layers, and an output layer; the input layer is connected to the output of the encoder and also to the fully connected layer, and the fully connected layer is connected to the output layer.
The input layer may be used to receive the encoding vector output by the encoder.
The fully connected layer may be used to perform feature extraction on the encoding vector to obtain the face feature vector of the user to be identified.
The output layer may be used to generate a face classification result from the face feature vector of the user to be identified output by the fully connected layer.
In the embodiments of this specification, fully connected (FC) layers can act as a "classifier". The number of fully connected layers in a deep neural network model is proportional to the model's capacity for nonlinear expression. Therefore, when the deep neural network model contains multiple fully connected layers, the accuracy of the facial features generated for the user to be identified based on the model can be improved. In practice, the face feature vector of the user to be identified may be the output vector of the fully connected layer adjacent to the output layer, or the output vector of a fully connected layer separated from the output layer by N network layers; no specific limitation is placed on this.
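For illustration only, a minimal sketch of such a fully connected model follows (layer widths and the number of classes are assumptions). The face feature vector is taken from the fully connected layer adjacent to the output layer:

```python
import torch
from torch import nn

class FullyConnectedDNN(nn.Module):
    """Hypothetical DNN: encoding vector -> face feature vector (-> class scores)."""
    def __init__(self, encoding_dim: int = 128, num_classes: int = 1000):
        super().__init__()
        self.fc = nn.Sequential(                   # multiple fully connected layers
            nn.Linear(encoding_dim, 256), nn.ReLU(),
            nn.Linear(256, 256), nn.ReLU(),
            nn.Linear(256, 128),
        )
        self.output = nn.Linear(128, num_classes)  # output layer: classification,
                                                   # used during training only

    def forward(self, encoding_vector: torch.Tensor) -> torch.Tensor:
        return self.fc(encoding_vector)            # feature vector adjacent to output layer

dnn = FullyConnectedDNN()
print(dnn(torch.rand(1, 128)).shape)   # torch.Size([1, 128])
```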
In the embodiments of this specification, the first kind of face feature extraction model is more versatile than the second and extracts more accurate user facial features, but its model structure is more complex and its computation takes longer. The model structure of the face feature extraction model can therefore be chosen according to actual needs.
In the embodiments of this specification, the face feature vector of the user to be identified generated by the face feature extraction model in the second device can be used in user identification scenarios. The second device may therefore also carry a user matching model connected to the face feature extraction model. After receiving the face feature vector of the user to be identified and the face feature vector of a designated user, the user matching model can generate, from the vector distance between the two face feature vectors, output information indicating whether the user to be identified is the designated user, the designated user's face feature vector having been obtained by processing the designated user's face image with the encoder and the face feature extraction model.
In the embodiments of this specification, the vector distance between the face feature vector of the user to be identified and that of the designated user can express the similarity between the two feature vectors. Specifically, when the vector distance is less than or equal to a threshold, the user to be identified and the designated user can be determined to be the same user; when the vector distance is greater than the threshold, they can be determined to be different users. The threshold can be set according to actual needs and is not specifically limited here.
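For illustration only, this matching rule might be sketched as follows. The Euclidean distance and the threshold value 1.0 are assumptions; the embodiments leave both the distance metric and the threshold open:

```python
import torch

def is_same_user(candidate: torch.Tensor, designated: torch.Tensor,
                 threshold: float = 1.0) -> bool:
    """Compare two face feature vectors by their vector distance."""
    distance = torch.dist(candidate, designated)   # Euclidean (L2) distance
    return bool(distance <= threshold)             # <= threshold: same user
```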
Based on the same idea, an embodiment of this specification further provides a client device carrying the encoder of an autoencoder. Figure 3 is a schematic structural diagram of a client device provided by an embodiment of this specification. As shown in Figure 3, the client device 300 may include: at least one processor 310; and a memory 330 communicatively connected to the at least one processor, the memory 330 storing instructions 320 executable by the at least one processor 310. When executed by the at least one processor 310, the instructions enable the at least one processor 310 to: after receiving the face image of the user to be identified with the encoder, output the encoding vector of the face image of the user to be identified, the encoding vector being vector data obtained by performing characterization processing on the face image; and send the encoding vector to a server device.
In the embodiments of this specification, by letting the client device generate the encoding vector of the face image of the user to be identified with the encoder of the autoencoder it carries, the client device can send that encoding vector to the server device for user identification instead of sending the face image itself. This avoids transmitting the face image of the user to be identified and safeguards the privacy and security of that user's face information.
Based on the client device in Figure 3, the embodiments of this specification also provide some specific implementations of that device, described below.
Optionally, the encoder in the client device may include the input layer, first hidden layer, and bottleneck layer of the autoencoder; the input layer is connected to the first hidden layer, and the first hidden layer is connected to the bottleneck layer.
The input layer may be used to receive the face image of the user to be identified.
The first hidden layer may be used to encode the face image to obtain a first feature vector.
The bottleneck layer may be used to reduce the dimensionality of the first feature vector to obtain the encoding vector of the face image of the user to be identified, the number of dimensions of the encoding vector being smaller than that of the first feature vector.
Based on the same idea, an embodiment of this specification further provides a server device carrying the encoder of an autoencoder. Figure 4 is a schematic structural diagram of a server device provided by an embodiment of this specification. As shown in Figure 4, the device 400 may include: at least one processor 410; and a memory 430 communicatively connected to the at least one processor, the memory 430 storing instructions 420 executable by the at least one processor 410. When executed by the at least one processor 410, the instructions enable the at least one processor 410 to: acquire the face image of the user to be identified collected by a terminal.
After receiving the face image of the user to be identified with the encoder, output the encoding vector of the face image of the user to be identified; the encoding vector is vector data obtained by performing characterization processing on the face image.
Send the encoding vector to another server device.
In the embodiments of this specification, the server device provided in Figure 4 and the terminal device may be devices of the same country, while the server device provided in Figure 4 and the other server device may be devices of different countries. When the country of the server device provided in Figure 4 does not allow the cross-border transmission of users' face images, that server device can send the encoding vector of the face image of the user to be identified to another server device in another country, so that the other server device can perform user identification based on that encoding vector.
Because the server device provided in Figure 4 does not need to send the face image of the user to be identified to another server device in another country, transmission of the face image is avoided, realizing a user identification scheme for privacy protection. And because the encoder of the autoencoder is deployed on the server device provided in Figure 4, users' terminal devices need not be modified, which reduces the retrofit cost of terminal devices when implementing the privacy-protecting user identification scheme.
Based on the server device in Figure 4, the embodiments of this specification also provide some specific implementations of that device, described below.
Optionally, the encoder carried by the server device provided in Figure 4 may include the input layer, first hidden layer, and bottleneck layer of the autoencoder; the input layer is connected to the first hidden layer, and the first hidden layer is connected to the bottleneck layer.
The input layer of the autoencoder may be used to receive the face image of the user to be identified; the first hidden layer may be used to encode the face image to obtain a first feature vector; and the bottleneck layer may be used to reduce the dimensionality of the first feature vector to obtain the encoding vector of the face image of the user to be identified, the number of dimensions of the encoding vector being smaller than that of the first feature vector.
Based on the same idea, an embodiment of this specification further provides a server device carrying a face feature extraction model for privacy protection. Figure 5 is a schematic structural diagram of a server device provided by an embodiment of this specification. As shown in Figure 5, the device 500 may include: at least one processor 510; and a memory 530 communicatively connected to the at least one processor, the memory 530 storing instructions 520 executable by the at least one processor 510. When executed by the at least one processor 510, the instructions enable the at least one processor 510 to: acquire the encoding vector of the face image of the user to be identified, the encoding vector being vector data obtained by performing characterization processing on the face image with the encoder of an autoencoder.
Input the encoding vector into the face feature extraction model to obtain the face feature vector of the user to be identified output by the face feature extraction model.
In the embodiments of this specification, by letting the server device use the privacy-protecting face feature extraction model it carries to generate the face feature vector of the user to be identified from the encoding vector of that user's face image, the server device can perform user identification without acquiring the face image. This not only avoids the operation of transmitting the face image of the user to be identified but also prevents the server device from storing or processing it, improving the privacy and security of the face information of the user to be identified.
Based on the server device in Figure 5, the embodiments of this specification also provide some specific implementations of that device, described below.
Optionally, acquiring the encoding vector of the face image of the user to be identified may specifically include: acquiring the encoding vector of the face image of the user to be identified from a client device, the encoding vector having been obtained by processing the face image collected by the client device with the encoder carried on that client device.
Alternatively, when the server device is a server device of a first country, acquiring the encoding vector of the face image of the user to be identified may specifically include: acquiring the encoding vector of the user's face image from a server device of a second country, where the encoding vector was obtained by processing the face image acquired from a client device with the encoder carried on the second country's server device; and/or where the encoding vector was obtained by the second country's server device from a client device, having been produced by processing the face image collected by the client device with the encoder carried on that client device.
Optionally, the face feature extraction model carried by the server device provided in Figure 5 may be a model obtained by locking the decoder of an autoencoder together with a CNN-based feature extraction model; the decoder in this face feature extraction model and the encoder that generated the encoding vector of the face image of the user to be identified may form an autoencoder.
Inputting the encoding vector into the face feature extraction model to obtain the face feature vector of the user to be identified output by the model may specifically include:
inputting the encoding vector into the decoder of the face feature extraction model, the decoder outputting reconstructed face image data; and the CNN-based feature extraction model, after receiving the reconstructed face image data, outputting the face feature vector of the user to be identified.
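For illustration only, and reusing the hypothetical Decoder and CNNFeatureExtractor sketches given earlier in this description, the server-side inference described here might be composed as follows:

```python
import torch

# Hypothetical locked pipeline on the Figure 5 server device: the decoder's
# reconstructed image stays internal and is never exposed to callers.
decoder = Decoder()                  # from the autoencoder sketch above
extractor = CNNFeatureExtractor()    # from the CNN sketch above

@torch.no_grad()
def extract_face_features(encoding_vector: torch.Tensor) -> torch.Tensor:
    reconstructed = decoder(encoding_vector)   # internal intermediate only
    return extractor(reconstructed)            # face feature vector out

features = extract_face_features(torch.rand(1, 128))   # stand-in encoding vector
```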
In the embodiments of this specification, the decoder of the autoencoder and the CNN-based feature extraction model are locked so that users cannot read the reconstructed face image data output by the decoder, thereby protecting the privacy of users' face information. There are many ways to implement this locking, and no specific limitation is placed on them; it is only necessary to ensure that the reconstructed face image data output by the decoder of the autoencoder is used securely.
Optionally, the CNN-based feature extraction model may include an input layer, a convolutional layer, a fully connected layer, and an output layer; the input layer is connected to the output of the decoder and also to the convolutional layer, the convolutional layer is connected to the fully connected layer, and the fully connected layer is connected to the output layer.
The input layer may be used to receive the reconstructed face image data output by the decoder.
The convolutional layer may be used to perform local feature extraction on the reconstructed face image data to obtain local facial feature vectors of the user to be identified.
The fully connected layer may be used to generate the face feature vector of the user to be identified from the local facial feature vectors.
The output layer may be used to generate a face classification result from the face feature vector of the user to be identified output by the fully connected layer.
In practice, the face feature vector of the user to be identified may be the output vector of the fully connected layer adjacent to the output layer, or the output vector of a fully connected layer separated from the output layer by N network layers; no specific limitation is placed on this.
Optionally, the face feature extraction model carried by the server device provided in Figure 5 may also be a fully connected deep neural network model.
Specific embodiments of this specification have been described above. Other embodiments fall within the scope of the appended claims. In some cases, the actions or steps recited in the claims can be performed in an order different from that in the embodiments and still achieve the desired results. In addition, the processes depicted in the drawings do not necessarily require the particular order shown, or a sequential order, to achieve the desired results. Multitasking and parallel processing are also possible, or may be advantageous, in certain implementations.
In the 1990s, an improvement to a technology could clearly be distinguished as a hardware improvement (for example, an improvement to circuit structures such as diodes, transistors, and switches) or a software improvement (an improvement to a method flow). With the development of technology, however, improvements to many of today's method flows can be regarded as direct improvements to hardware circuit structures. Designers almost always obtain the corresponding hardware circuit structure by programming the improved method flow into a hardware circuit. Therefore, it cannot be said that an improvement to a method flow cannot be realized with a hardware entity module. For example, a programmable logic device (PLD) (for example, a field programmable gate array (FPGA)) is such an integrated circuit whose logic functions are determined by the user programming the device. Designers program a digital system onto a single PLD by themselves, without needing to ask a chip manufacturer to design and fabricate a dedicated integrated circuit chip. Moreover, instead of manually fabricating integrated circuit chips, this programming is nowadays mostly implemented with "logic compiler" software, which is similar to the software compilers used in program development; the source code to be compiled must be written in a specific programming language called a hardware description language (HDL). There is not just one HDL but many, such as ABEL (Advanced Boolean Expression Language), AHDL (Altera Hardware Description Language), Confluence, CUPL (Cornell University Programming Language), HDCal, JHDL (Java Hardware Description Language), Lava, Lola, MyHDL, PALASM, and RHDL (Ruby Hardware Description Language); the most commonly used at present are VHDL (Very-High-Speed Integrated Circuit Hardware Description Language) and Verilog. Those skilled in the art should also understand that a hardware circuit implementing a logical method flow can easily be obtained merely by slightly logically programming the method flow in one of the above hardware description languages and programming it into an integrated circuit.
The controller can be implemented in any suitable manner. For example, the controller can take the form of a microprocessor or processor together with a computer-readable medium storing computer-readable program code (such as software or firmware) executable by the (micro)processor, logic gates, switches, an application-specific integrated circuit (ASIC), a programmable logic controller, or an embedded microcontroller. Examples of controllers include but are not limited to the following microcontrollers: ARC 625D, Atmel AT91SAM, Microchip PIC18F26K20, and Silicon Labs C8051F320; a memory controller can also be implemented as part of the memory's control logic. Those skilled in the art also know that, in addition to implementing a controller purely as computer-readable program code, the method steps can be logically programmed so that the controller realizes the same functions in the form of logic gates, switches, application-specific integrated circuits, programmable logic controllers, embedded microcontrollers, and the like. Such a controller can therefore be regarded as a hardware component, and the devices included in it for realizing various functions can also be regarded as structures within the hardware component. Or the devices for realizing various functions can even be regarded both as software modules for implementing the method and as structures within the hardware component.
The systems, devices, modules, or units set forth in the above embodiments can be implemented by a computer chip or entity, or by a product having a certain function. A typical implementation device is a computer. Specifically, the computer may be, for example, a personal computer, a laptop computer, a cellular phone, a camera phone, a smartphone, a personal digital assistant, a media player, a navigation device, an email device, a game console, a tablet computer, a wearable device, or a combination of any of these devices.
For convenience of description, the above devices are described as various units divided by function. Of course, when implementing one or more embodiments of this specification, the functions of the units may be implemented in one or more pieces of software and/or hardware.
Those skilled in the art should understand that one or more embodiments of this specification can be provided as a method, a system, or a computer program product. Therefore, one or more embodiments of this specification may take the form of an entirely hardware embodiment, an entirely software embodiment, or an embodiment combining software and hardware aspects. Moreover, one or more embodiments of this specification may take the form of a computer program product implemented on one or more computer-usable storage media (including but not limited to disk storage, CD-ROM, and optical storage) containing computer-usable program code.
One or more embodiments of this specification are described with reference to flowcharts and/or block diagrams of methods, devices (systems), and computer program products according to one or more embodiments of this specification. It should be understood that every process and/or block in the flowcharts and/or block diagrams, and combinations of processes and/or blocks in them, can be implemented by computer program instructions. These computer program instructions can be provided to the processor of a general-purpose computer, a special-purpose computer, an embedded processor, or other programmable data processing equipment to produce a machine, so that the instructions executed by the processor of the computer or other programmable data processing equipment produce a device for realizing the functions specified in one or more processes of the flowcharts and/or one or more blocks of the block diagrams.
These computer program instructions can also be stored in a computer-readable memory capable of directing a computer or other programmable data processing equipment to work in a specific manner, so that the instructions stored in the computer-readable memory produce an article of manufacture including an instruction device that realizes the functions specified in one or more processes of the flowcharts and/or one or more blocks of the block diagrams.
These computer program instructions can also be loaded onto a computer or other programmable data processing equipment, so that a series of operational steps are executed on the computer or other programmable equipment to produce computer-implemented processing, whereby the instructions executed on the computer or other programmable equipment provide steps for realizing the functions specified in one or more processes of the flowcharts and/or one or more blocks of the block diagrams.
In a typical configuration, a computing device includes one or more processors (CPUs), input/output interfaces, network interfaces, and memory.
The memory may include non-permanent memory in a computer-readable medium, random access memory (RAM), and/or non-volatile memory such as read-only memory (ROM) or flash memory (flash RAM). Memory is an example of a computer-readable medium.
Computer-readable media include permanent and non-permanent, removable and non-removable media, and can store information by any method or technology. The information can be computer-readable instructions, data structures, program modules, or other data. Examples of computer storage media include, but are not limited to, phase-change memory (PRAM), static random access memory (SRAM), dynamic random access memory (DRAM), other types of random access memory (RAM), read-only memory (ROM), electrically erasable programmable read-only memory (EEPROM), flash memory or other memory technologies, compact disc read-only memory (CD-ROM), digital versatile discs (DVD) or other optical storage, magnetic cassettes, magnetic tape or disk storage or other magnetic storage devices, or any other non-transmission media, which can be used to store information accessible by computing devices. As defined herein, computer-readable media do not include transitory media such as modulated data signals and carrier waves.
It should also be noted that the terms "include", "comprise", or any other variant thereof are intended to cover non-exclusive inclusion, so that a process, method, commodity, or device that includes a series of elements includes not only those elements but also other elements not explicitly listed, or elements inherent to such a process, method, commodity, or device. Without further limitation, an element qualified by the phrase "including a ..." does not exclude the existence of other identical elements in the process, method, commodity, or device that includes it.
本说明书一个或多个实施例可以在由计算机执行的计算机可执行指令的一般上下文中描述,例如程序模块。一般地,程序模块包括执行特定任务或实现特定抽象数据类型的例程、程序、对象、组件、数据结构等等。也可以在分布式计算环境中实践本说明书一个或多个实施例,在这些分布式计算环境中,由通过通信网络而被连接的远程处理设备来执行任务。在分布式计算环境中,程序模块可以位于包括存储设备在内的本地和远程计算机存储介质中。
The embodiments in this specification are described in a progressive manner; for identical or similar parts between the embodiments, reference may be made to each other, and each embodiment focuses on its differences from the other embodiments. In particular, since the system embodiments are substantially similar to the method embodiments, their description is relatively brief; for relevant parts, reference may be made to the description of the method embodiments.
The above descriptions are merely embodiments of this specification and are not intended to limit one or more embodiments of this specification. For those skilled in the art, one or more embodiments of this specification may have various modifications and variations. Any modification, equivalent replacement, improvement, and the like made within the spirit and principles of one or more embodiments of this specification shall be included within the scope of the claims of one or more embodiments of this specification.
Claims (23)
- A user feature extraction system for privacy protection, comprising a first device and a second device, wherein the first device carries the encoder of an autoencoder and the second device carries a face feature extraction model for privacy protection; the encoder is connected to the face feature extraction model; the input of the encoder is a face image of a user to be identified, and the output of the encoder is an encoding vector of the face image, the encoding vector being vector data obtained by performing featurization processing on the face image; and after receiving the encoding vector, the face feature extraction model outputs a facial feature vector of the user to be identified.
- The system according to claim 1, wherein the face feature extraction model is a model obtained by locking the decoder of the autoencoder together with a convolutional-neural-network-based feature extraction model; the decoder is connected to the encoder and, after receiving the encoding vector output by the encoder, outputs reconstructed face image data; and the convolutional-neural-network-based feature extraction model is connected to the decoder and, after receiving the reconstructed face image data, outputs the facial feature vector of the user to be identified.
- The system according to claim 2, wherein the convolutional-neural-network-based feature extraction model comprises an input layer, a convolutional layer, and a fully connected layer, the input layer being connected to the output of the decoder, the input layer further being connected to the convolutional layer, and the convolutional layer being connected to the fully connected layer; the input layer is configured to receive the reconstructed face image data output by the decoder; the convolutional layer is configured to perform local feature extraction on the reconstructed face image data to obtain a local facial feature vector of the user to be identified; and the fully connected layer is configured to generate the facial feature vector of the user to be identified according to the local facial feature vector.
- The system according to claim 3, wherein the convolutional-neural-network-based feature extraction model further comprises an output layer connected to the fully connected layer; the output layer is configured to generate a face classification result according to the facial feature vector of the user to be identified output by the fully connected layer; and the facial feature vector of the user to be identified is the output vector of the fully connected layer adjacent to the output layer.
- The system according to claim 2, wherein the encoder comprises an input layer, a first hidden layer, and a bottleneck layer, and the decoder comprises a second hidden layer and an output layer; the input layer of the encoder is connected to the first hidden layer, the first hidden layer is connected to the bottleneck layer, the bottleneck layer of the encoder is connected to the second hidden layer of the decoder, the second hidden layer is connected to the output layer, and the output layer is connected to the input of the convolutional-neural-network-based feature extraction model; the input layer is configured to receive the face image of the user to be identified; the first hidden layer is configured to encode the face image to obtain a first feature vector; the bottleneck layer is configured to perform dimensionality reduction on the first feature vector to obtain the encoding vector of the face image, the number of dimensions of the encoding vector being smaller than the number of dimensions of the first feature vector; the second hidden layer is configured to decode the encoding vector to obtain a second feature vector; and the output layer is configured to generate the reconstructed face image data according to the second feature vector.
- The system according to claim 1, wherein the face feature extraction model is a fully connected deep neural network model comprising an input layer and a plurality of fully connected layers, the input layer being connected to the output of the encoder and further connected to the fully connected layers; the input layer is configured to receive the encoding vector output by the encoder; and the fully connected layers are configured to perform feature extraction on the encoding vector to obtain the facial feature vector of the user to be identified.
- The system according to claim 6, wherein the fully connected deep neural network model further comprises an output layer connected to the fully connected layers; the output layer is configured to generate a face classification result according to the facial feature vector of the user to be identified output by the fully connected layers; and the facial feature vector of the user to be identified is the output vector of the fully connected layer adjacent to the output layer.
- The system according to claim 1, wherein the second device further carries a user matching model connected to the face feature extraction model; after receiving the facial feature vector of the user to be identified and a facial feature vector of a specified user, the user matching model generates, according to the vector distance between the two facial feature vectors, output information indicating whether the user to be identified is the specified user, the facial feature vector of the specified user being obtained by processing a face image of the specified user with the encoder and the face feature extraction model.
- The system according to any one of claims 1 to 8, wherein the first device is a client device and the second device is a server device, the first device further comprising an image acquisition apparatus configured to acquire the face image of the user to be identified.
- The system according to claim 9, wherein the first device is a client device in a first country and the second device is a server device in a second country.
- The system according to any one of claims 1 to 8, wherein the first device is a server device in a first country and the second device is a server device in a second country.
- A client device carrying the encoder of an autoencoder, the client device comprising at least one processor and a memory communicatively connected to the at least one processor, wherein the memory stores instructions executable by the at least one processor, and the instructions are executed by the at least one processor to enable the at least one processor to: after receiving a face image of a user to be identified with the encoder, output an encoding vector of the face image of the user to be identified, the encoding vector being vector data obtained by performing featurization processing on the face image; and send the encoding vector to a server device.
- The client device according to claim 12, wherein the encoder comprises the input layer, first hidden layer, and bottleneck layer of the autoencoder, the input layer being connected to the first hidden layer and the first hidden layer being connected to the bottleneck layer; the input layer is configured to receive the face image of the user to be identified; the first hidden layer is configured to encode the face image to obtain a first feature vector; and the bottleneck layer is configured to perform dimensionality reduction on the first feature vector to obtain the encoding vector of the face image of the user to be identified, the number of dimensions of the encoding vector being smaller than the number of dimensions of the first feature vector.
- A server device carrying the encoder of an autoencoder, the server device comprising at least one processor and a memory communicatively connected to the at least one processor, wherein the memory stores instructions executable by the at least one processor, and the instructions are executed by the at least one processor to enable the at least one processor to: acquire a face image of a user to be identified captured by a terminal; after receiving the face image of the user to be identified with the encoder, output an encoding vector of the face image of the user to be identified, the encoding vector being vector data obtained by performing featurization processing on the face image; and send the encoding vector to another server device.
- The server device according to claim 14, wherein the encoder comprises the input layer, first hidden layer, and bottleneck layer of the autoencoder, the input layer being connected to the first hidden layer and the first hidden layer being connected to the bottleneck layer; the input layer is configured to receive the face image of the user to be identified; the first hidden layer is configured to encode the face image to obtain a first feature vector; and the bottleneck layer is configured to perform dimensionality reduction on the first feature vector to obtain the encoding vector of the face image of the user to be identified, the number of dimensions of the encoding vector being smaller than the number of dimensions of the first feature vector.
- The server device according to claim 14, wherein the server device and the other server device are server devices in different countries.
- A server device carrying a face feature extraction model for privacy protection, the server device comprising at least one processor and a memory communicatively connected to the at least one processor, wherein the memory stores instructions executable by the at least one processor, and the instructions are executed by the at least one processor to enable the at least one processor to: acquire an encoding vector of a face image of a user to be identified, the encoding vector being vector data obtained by performing featurization processing on the face image with the encoder of an autoencoder; and input the encoding vector into the face feature extraction model to obtain the facial feature vector of the user to be identified output by the face feature extraction model.
- The server device according to claim 17, wherein the face feature extraction model is a model obtained by locking the decoder of the autoencoder together with a convolutional-neural-network-based feature extraction model; and inputting the encoding vector into the face feature extraction model to obtain the facial feature vector of the user to be identified output by the face feature extraction model specifically comprises: inputting the encoding vector into the decoder of the face feature extraction model, the decoder outputting reconstructed face image data, and the convolutional-neural-network-based feature extraction model, after receiving the reconstructed face image data, outputting the facial feature vector of the user to be identified.
- The server device according to claim 18, wherein the convolutional-neural-network-based feature extraction model comprises an input layer, a convolutional layer, and a fully connected layer, the input layer being connected to the output of the decoder, the input layer further being connected to the convolutional layer, and the convolutional layer being connected to the fully connected layer; the input layer is configured to receive the reconstructed face image data output by the decoder; the convolutional layer is configured to perform local feature extraction on the reconstructed face image data to obtain a local facial feature vector of the user to be identified; and the fully connected layer is configured to generate the facial feature vector of the user to be identified according to the local facial feature vector.
- The server device according to claim 19, wherein the convolutional-neural-network-based feature extraction model further comprises an output layer connected to the fully connected layer; the output layer is configured to generate a face classification result according to the facial feature vector of the user to be identified output by the fully connected layer; and the facial feature vector of the user to be identified is the output vector of the fully connected layer adjacent to the output layer.
- The server device according to claim 17, wherein the face feature extraction model is a fully connected deep neural network model.
- The server device according to claim 17, wherein acquiring the encoding vector of the face image of the user to be identified specifically comprises: acquiring, from a client device, the encoding vector of the face image of the user to be identified, the encoding vector being obtained by processing the face image of the user to be identified, captured by the client device, with the encoder carried on the client device.
- The server device according to claim 17, wherein the server device is a server device in a first country, and acquiring the encoding vector of the face image of the user to be identified specifically comprises: acquiring, from a server device in a second country, the encoding vector of the face image of the user to be identified, wherein the encoding vector is obtained by processing the face image acquired from a client device with the encoder carried on the server device in the second country; and/or the encoding vector is an encoding vector that the server device in the second country acquired from a client device, the encoding vector being obtained by processing the face image captured by the client device with the encoder carried on the client device.
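To make the division of labor in claims 1, 5, 8, and 13 concrete, here is a hedged end-to-end sketch in PyTorch. It is an interpretation for illustration only, not the patented implementation: the layer widths, the flattened 112x112x3 input, and the cosine-similarity threshold in `is_same_user` are all assumed values.

```python
# Illustrative sketch of the claimed pipeline: the first device runs only the
# encoder (input layer -> first hidden layer -> bottleneck layer) and ships
# the low-dimensional encoding vector; the second device runs the locked
# decoder (second hidden layer -> output layer) plus a feature extractor,
# then matches users by vector distance. All sizes are assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F

class Encoder(nn.Module):  # carried by the first device
    def __init__(self, in_dim=112 * 112 * 3, hidden_dim=512, code_dim=64):
        super().__init__()
        self.hidden = nn.Linear(in_dim, hidden_dim)        # first hidden layer
        self.bottleneck = nn.Linear(hidden_dim, code_dim)  # dimensionality reduction

    def forward(self, face_image: torch.Tensor) -> torch.Tensor:
        first = torch.relu(self.hidden(face_image.flatten(1)))
        return self.bottleneck(first)  # encoding vector (fewer dims than `first`)

class Decoder(nn.Module):  # part of the locked model on the second device
    def __init__(self, code_dim=64, hidden_dim=512, out_dim=112 * 112 * 3):
        super().__init__()
        self.hidden = nn.Linear(code_dim, hidden_dim)  # second hidden layer
        self.out = nn.Linear(hidden_dim, out_dim)      # reconstructed image data

    def forward(self, code: torch.Tensor) -> torch.Tensor:
        return self.out(torch.relu(self.hidden(code)))

def is_same_user(feat_a: torch.Tensor, feat_b: torch.Tensor,
                 threshold: float = 0.8) -> bool:
    """User matching by vector distance (cosine similarity, assumed metric)."""
    return F.cosine_similarity(feat_a, feat_b, dim=-1).item() >= threshold
```

In this reading, only the encoding vector ever leaves the first device; the raw face image stays where it was captured, and the second device feeds the decoder's reconstruction into a feature extractor such as the `CnnFeatureExtractor` sketched earlier before comparing feature vectors with `is_same_user`.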
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202010198335.4A CN111401273B (zh) | 2020-03-19 | 2020-03-19 | User feature extraction system and device for privacy protection
CN202010198335.4 | 2020-03-19 |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2021184976A1 true WO2021184976A1 (zh) | 2021-09-23 |
Family
ID=71428977
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/CN2021/074246 WO2021184976A1 (zh) | 2021-01-28 | User feature extraction system and device for privacy protection
Country Status (2)
Country | Link |
---|---|
CN (1) | CN111401273B (zh) |
WO (1) | WO2021184976A1 (zh) |
Families Citing this family (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN111401273B (zh) * | 2020-03-19 | 2022-04-29 | 支付宝(杭州)信息技术有限公司 | User feature extraction system and device for privacy protection
CN111970509B (zh) * | 2020-08-10 | 2022-12-23 | 杭州海康威视数字技术股份有限公司 | Video image processing method, apparatus, and system
CN112699408B (zh) * | 2020-12-31 | 2024-06-21 | 重庆大学 | Autoencoder-based privacy protection method for wearable device data
CN113807253A (zh) * | 2021-09-17 | 2021-12-17 | 上海商汤智能科技有限公司 | Face recognition method and apparatus, electronic device, and storage medium
CN113935462A (zh) * | 2021-09-29 | 2022-01-14 | 光大科技有限公司 | Stacked-autoencoder-based federated learning method, apparatus, and system
CN114598874B (zh) * | 2022-01-20 | 2022-12-06 | 中国科学院自动化研究所 | Video quantization encoding and decoding method, apparatus, device, and storage medium
Family Cites Families (10)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN101610408A (zh) * | 2008-06-16 | 2009-12-23 | 北京智安邦科技有限公司 | Video protection scrambling method and structure
CN102880870B (zh) * | 2012-08-31 | 2016-05-11 | 电子科技大学 | Face feature extraction method and system
CN104268531A (zh) * | 2014-09-30 | 2015-01-07 | 江苏中佑石油机械科技有限责任公司 | Face feature data acquisition system
CN105975931B (zh) * | 2016-05-04 | 2019-06-14 | 浙江大学 | Convolutional neural network face recognition method based on multi-scale pooling
CN107196765B (zh) * | 2017-07-19 | 2019-08-02 | 武汉大学 | Remote biometric identity authentication method with enhanced privacy protection
US11163269B2 (en) * | 2017-09-11 | 2021-11-02 | International Business Machines Corporation | Adaptive control of negative learning for limited reconstruction capability auto encoder
CN108446680B (zh) * | 2018-05-07 | 2021-12-21 | 西安电子科技大学 | Privacy protection method and system in an edge-computing-based face authentication system
CN109753921A (zh) * | 2018-12-29 | 2019-05-14 | 上海交通大学 | Privacy-preserving recognition method for face feature vectors
CN110133610B (zh) * | 2019-05-14 | 2020-12-15 | 浙江大学 | Ultra-wideband radar action recognition method based on time-varying range-Doppler maps
CN110139147B (zh) * | 2019-05-20 | 2021-11-19 | 深圳先进技术研究院 | Video processing method, system, mobile terminal, server, and storage medium
- 2020
  - 2020-03-19 CN CN202010198335.4A patent/CN111401273B/zh active Active
- 2021
  - 2021-01-28 WO PCT/CN2021/074246 patent/WO2021184976A1/zh active Application Filing
Patent Citations (7)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP2011221847A (ja) * | 2010-04-12 | 2011-11-04 | Sharp Corp | Image forming apparatus and document management system including the same
CN108491785A (zh) * | 2018-03-19 | 2018-09-04 | 网御安全技术(深圳)有限公司 | Artificial intelligence image recognition attack defense system
CN108512651A (zh) * | 2018-03-19 | 2018-09-07 | 网御安全技术(深圳)有限公司 | Artificial intelligence image recognition attack defense method, system, and storage medium
CN110750801A (zh) * | 2019-10-11 | 2020-02-04 | 矩阵元技术(深圳)有限公司 | Data processing method, apparatus, computer device, and storage medium
CN111368795A (zh) * | 2020-03-19 | 2020-07-03 | 支付宝(杭州)信息技术有限公司 | Face feature extraction method, apparatus, and device
CN111401272A (zh) * | 2020-03-19 | 2020-07-10 | 支付宝(杭州)信息技术有限公司 | Face feature extraction method, apparatus, and device
CN111401273A (zh) * | 2020-03-19 | 2020-07-10 | 支付宝(杭州)信息技术有限公司 | User feature extraction system and device for privacy protection
Non-Patent Citations (1)
Title |
---|
FANG YAN TAN JI SHU [DIALECT TALKING TECHNOLOGY]: "How to Lock Your Hard Drive and Protect Your Privacy by Using Windows BitLocker", 16 September 2015 (2015-09-16), CN, XP009530873, Retrieved from Baidu Jingyan: <URL:https://jingyan.baidu.com/article/6d704a132fd06c28db51ca89.html> *
Cited By (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN114866345A (zh) * | 2022-07-05 | 2022-08-05 | 支付宝(杭州)信息技术有限公司 | Biometric identification processing method, apparatus, and device
CN114866345B (zh) * | 2022-07-05 | 2022-12-09 | 支付宝(杭州)信息技术有限公司 | Biometric identification processing method, apparatus, and device
Also Published As
Publication number | Publication date |
---|---|
CN111401273B (zh) | 2022-04-29 |
CN111401273A (zh) | 2020-07-10 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
WO2021184898A1 (zh) | Face feature extraction method, apparatus, and device | |
WO2021184976A1 (zh) | User feature extraction system and device for privacy protection | |
Ning et al. | Multi‐view frontal face image generation: a survey | |
WO2021238956A1 (zh) | Privacy-protection-based identity verification method, apparatus, and device | |
TWI753271B (zh) | Resource transfer method, apparatus, and system | |
US10984225B1 (en) | Masked face recognition | |
TWI743425B (zh) | Information identification method, server, client, and system | |
TWI736765B (zh) | Image processing method, apparatus, device, and storage medium | |
CN115359219B (zh) | Avatar processing method and apparatus for a virtual world | |
TW202211060A (zh) | User identification method, apparatus, and device under privacy protection | |
CN111368795B (zh) | Face feature extraction method, apparatus, and device | |
WO2020164331A1 (zh) | Claim settlement service processing method and apparatus | |
CN117612269A (zh) | Biometric attack detection method, apparatus, and device | |
CN113221717B (zh) | Privacy-protection-based model construction method, apparatus, and device | |
WO2024060909A1 (zh) | Expression recognition method, apparatus, device, and medium | |
CN116630480B (zh) | Interactive text-driven image editing method, apparatus, and electronic device | |
CN116071804A (zh) | Face recognition method, apparatus, and electronic device | |
CN113239852B (zh) | Privacy-protection-based private image processing method, apparatus, and device | |
CN115358777A (zh) | Advertisement placement processing method and apparatus for a virtual world | |
CN115048661A (zh) | Model processing method, apparatus, and device | |
CN115577336A (zh) | Biometric identification processing method, apparatus, and device | |
CN118211132B (zh) | Point-cloud-based 3D human body surface data generation method and apparatus | |
CN115905913B (zh) | Digital collectible detection method and apparatus | |
CN117994470B (zh) | Multimodal hierarchically adaptive digital mesh reconstruction method and apparatus | |
CN117874706B (zh) | Multimodal knowledge distillation learning method and apparatus | |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
121 | Ep: the epo has been informed by wipo that ep was designated in this application |
Ref document number: 21772545 Country of ref document: EP Kind code of ref document: A1 |
NENP | Non-entry into the national phase |
Ref country code: DE |
122 | Ep: pct application non-entry in european phase |
Ref document number: 21772545 Country of ref document: EP Kind code of ref document: A1 |