CN112766197A - Face recognition method and device based on privacy protection - Google Patents

Face recognition method and device based on privacy protection

Info

Publication number
CN112766197A
Authority
CN
China
Prior art keywords
face image
face
image
target
modal
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202110102508.2A
Other languages
Chinese (zh)
Other versions
CN112766197B (en)
Inventor
曹佳炯
丁菁汀
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Alipay Hangzhou Information Technology Co Ltd
Original Assignee
Alipay Hangzhou Information Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Alipay Hangzhou Information Technology Co Ltd filed Critical Alipay Hangzhou Information Technology Co Ltd
Priority to CN202110102508.2A
Publication of CN112766197A
Application granted
Publication of CN112766197B
Legal status: Active (current)
Anticipated expiration

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 - Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10 - Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16 - Human faces, e.g. facial parts, sketches or expressions
    • G06V40/168 - Feature extraction; Face representation
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F21/00 - Security arrangements for protecting computers, components thereof, programs or data against unauthorised activity
    • G06F21/60 - Protecting data
    • G06F21/62 - Protecting access to data via a platform, e.g. using keys or access control rules
    • G06F21/6218 - Protecting access to data via a platform, e.g. using keys or access control rules, to a system of files or objects, e.g. local or distributed file system or database
    • G06F21/6245 - Protecting personal data, e.g. for financial or medical purposes
    • G06F21/6254 - Protecting personal data by anonymising data, e.g. decorrelating personal data from the owner's identification
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 - Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10 - Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16 - Human faces, e.g. facial parts, sketches or expressions
    • G06V40/172 - Classification, e.g. identification

Landscapes

  • Engineering & Computer Science (AREA)
  • Health & Medical Sciences (AREA)
  • Theoretical Computer Science (AREA)
  • General Health & Medical Sciences (AREA)
  • General Physics & Mathematics (AREA)
  • Oral & Maxillofacial Surgery (AREA)
  • Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Human Computer Interaction (AREA)
  • Bioethics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Medical Informatics (AREA)
  • Databases & Information Systems (AREA)
  • Computer Hardware Design (AREA)
  • Computer Security & Cryptography (AREA)
  • Software Systems (AREA)
  • General Engineering & Computer Science (AREA)
  • Image Analysis (AREA)
  • Image Processing (AREA)

Abstract

One or more embodiments of this specification disclose a face recognition method and device based on privacy protection. The method includes: collecting a multi-modal face image of a target user, where the multi-modal face image includes a planar face image and a depth face image, and the planar face image contains privacy information of the target user; combining the planar face image and the depth face image according to a preset image combination mode to obtain a target face image corresponding to the target user, where the target face image includes planar face feature information and depth face feature information; and, based on the target face image, performing face recognition on the target user by using a pre-trained multi-modal face recognition model to obtain a face recognition result corresponding to the target user, where the multi-modal face recognition model is obtained by model training based on a plurality of target face image samples.

Description

Face recognition method and device based on privacy protection
Technical Field
The present disclosure relates to the field of privacy protection technologies, and in particular, to a face recognition method and apparatus based on privacy protection.
Background
In recent years, with the rise of deep learning, biometric recognition systems have been widely deployed. For example, face recognition systems have entered many aspects of daily production and life, including access control, payment, and travel. Because a biometric recognition system generally involves collecting, processing, uploading and storing the user's biometric information, and each of these steps touches the user's private information (the biometric information itself), that private information is at risk of being leaked and the user's biometric information is easy to steal.
Therefore, it is desirable to provide a biometric method capable of protecting private information of a user.
Disclosure of Invention
In one aspect, one or more embodiments of the present specification provide a face recognition method based on privacy protection, including: collecting a multi-modal face image of a target user, where the multi-modal face image includes a planar face image and a depth face image, and the planar face image contains privacy information of the target user; combining the planar face image and the depth face image according to a preset image combination mode to obtain a target face image corresponding to the target user, where the target face image includes planar face feature information and depth face feature information; and, based on the target face image, performing face recognition on the target user by using a pre-trained multi-modal face recognition model to obtain a face recognition result corresponding to the target user, where the multi-modal face recognition model is obtained by model training on a plurality of target face image samples, and the target face image samples are obtained by combining multi-modal face image samples according to the image combination mode.
In another aspect, one or more embodiments of the present specification provide a face recognition apparatus based on privacy protection, including: an acquisition module that acquires a multi-modal face image of a target user, where the multi-modal face image includes a planar face image and a depth face image, and the planar face image contains privacy information of the target user; a first combination module that combines the planar face image and the depth face image according to a preset image combination mode to obtain a target face image corresponding to the target user, where the target face image includes planar face feature information and depth face feature information; and a face recognition module that performs face recognition on the target user based on the target face image by using a pre-trained multi-modal face recognition model to obtain a face recognition result corresponding to the target user, where the multi-modal face recognition model is obtained by model training on a plurality of target face image samples, and the target face image samples are obtained by combining multi-modal face image samples according to the image combination mode.
In yet another aspect, one or more embodiments of the present specification provide a face recognition device based on privacy protection, including a processor and a memory electrically connected to the processor, the memory storing a computer program, the processor being configured to call and execute the computer program from the memory to implement: the method comprises the steps of collecting multi-modal face images of a target user, wherein the multi-modal face images comprise a plane face image and a depth face image, and the plane face image comprises privacy information of the target user. And combining the plane face image and the depth face image according to a preset image combination mode to obtain a target face image corresponding to the target user, wherein the target face image comprises plane face feature information and depth face feature information. And based on the target face image, performing face recognition on the target user by using a pre-trained multi-modal face recognition model to obtain a face recognition result corresponding to the target user, wherein the multi-modal face recognition model is obtained by performing model training on a plurality of target face image samples, and the target face image samples are obtained by combining the multi-modal face image samples according to the image combination mode.
In another aspect, the present specification provides a storage medium for storing a computer program, where the computer program is executable by a processor to implement the following processes: the method comprises the steps of collecting multi-modal face images of a target user, wherein the multi-modal face images comprise a plane face image and a depth face image, and the plane face image comprises privacy information of the target user. And combining the plane face image and the depth face image according to a preset image combination mode to obtain a target face image corresponding to the target user, wherein the target face image comprises plane face feature information and depth face feature information. And based on the target face image, performing face recognition on the target user by using a pre-trained multi-modal face recognition model to obtain a face recognition result corresponding to the target user, wherein the multi-modal face recognition model is obtained by performing model training on a plurality of target face image samples, and the target face image samples are obtained by combining the multi-modal face image samples according to the image combination mode.
Drawings
In order to more clearly illustrate one or more embodiments or technical solutions in the prior art in the present specification, the drawings used in the description of the embodiments or the prior art will be briefly described below, it is obvious that the drawings in the following description are only some embodiments described in one or more embodiments of the present specification, and other drawings can be obtained by those skilled in the art without inventive efforts.
FIG. 1 is a schematic flow chart diagram of a method for face recognition based on privacy protection according to an embodiment of the present description;
FIG. 2 is a schematic flow diagram of a method for training a multi-modal face recognition model in accordance with an embodiment of the present description;
FIG. 3 is a schematic swim lane diagram of a privacy preserving based face recognition method according to an embodiment of the present disclosure;
FIG. 4 is a schematic swimlane diagram of a privacy preserving based face recognition method according to another embodiment of the present description;
FIG. 5 is a schematic block diagram of a face recognition apparatus based on privacy protection according to an embodiment of the present specification;
FIG. 6 is a schematic block diagram of a face recognition device based on privacy protection according to an embodiment of the present specification.
Detailed Description
One or more embodiments of the present disclosure provide a face recognition method and apparatus based on privacy protection, so as to solve the problem of low security of an existing face recognition method.
In order to make those skilled in the art better understand the technical solutions in one or more embodiments of the present disclosure, the technical solutions in one or more embodiments of the present disclosure will be clearly and completely described below with reference to the drawings in one or more embodiments of the present disclosure, and it is obvious that the described embodiments are only a part of the embodiments of the present disclosure, and not all embodiments. All other embodiments that can be derived by a person skilled in the art from one or more of the embodiments of the present disclosure without making any creative effort shall fall within the protection scope of one or more of the embodiments of the present disclosure.
Fig. 1 is a schematic flow chart of a face recognition method based on privacy protection according to an embodiment of the present specification, as shown in fig. 1, the method includes:
s102, multi-modal face images of the target user are collected, wherein the multi-modal face images comprise a plane face image and a depth face image.
The planar face image includes privacy information of a target user, that is, biometric information (e.g., iris of human eye) included in a face. The planar face image comprises at least one of a near infrared face image (namely, an NIR face image), a color face image (namely, an RGB face image) and an ultrasonic face image. The depth face image can also be called a 3D face image, and can carry depth face feature information, namely depth information of a face.
In one embodiment, the acquired multi-modal facial image may be pre-processed, the pre-processing comprising: and detecting a privacy area containing biological information, namely a face area, in the collected multi-modal face image, and further removing images in other areas except the face area, so that only the face area for face recognition is reserved.
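As an illustrative, non-limiting sketch of this preprocessing step, the face region could be detected and everything outside it discarded roughly as follows; the choice of detector, cascade file and the "keep first detection" policy are assumptions for illustration only and are not prescribed by this specification:

import cv2
import numpy as np

def crop_to_face_region(image: np.ndarray) -> np.ndarray:
    # Detect the face (privacy) region and zero out everything outside it,
    # so that only the face area used for recognition is kept.
    detector = cv2.CascadeClassifier(
        cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
    gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
    faces = detector.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    if len(faces) == 0:
        return image  # no face detected; leave the frame unchanged
    x, y, w, h = faces[0]  # keep the first detection
    masked = np.zeros_like(image)
    masked[y:y + h, x:x + w] = image[y:y + h, x:x + w]
    return masked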
And S104, combining the plane face image and the depth face image according to a preset image combination mode to obtain a target face image corresponding to the target user.
The target face image obtained after combination includes planar face feature information and depth face feature information; that is, by combining the two images, the face feature information of both is carried by a single image (i.e., the target face image).
And S106, based on the target face image, carrying out face recognition on the target user by using a pre-trained multi-mode face recognition model to obtain a face recognition result corresponding to the target user.
The multi-modal face recognition model is obtained by performing model training on a plurality of target face image samples, and the target face image samples are obtained by combining the multi-modal face image samples according to an image combination mode. The specific training mode of the multi-modal face recognition model will be described in detail in the following embodiments.
By adopting the technical solution of one or more embodiments of the specification, the multi-modal face image of the target user is collected and combined into a target face image that contains both planar face feature information and depth face feature information, and the target user is then recognized based on the target face image using the pre-trained multi-modal face recognition model, so the face recognition process can rely on the planar face feature information and the depth face feature information at the same time, which improves recognition accuracy. Moreover, because a depth face image is essentially unrecognizable to the human visual system and therefore offers a very good privacy protection effect, combining the planar face image into the depth face image yields a target face image with a strong visual privacy protection characteristic, which prevents user privacy from being leaked during face recognition and improves the security of the user's face recognition.
The following first describes a specific training process of the multi-modal face recognition model.
In one embodiment, the multi-modal face recognition model may be trained according to steps S202-S208 shown in FIG. 2:
s202, obtaining multi-modal face image samples corresponding to a plurality of sample users respectively, wherein the multi-modal face image samples comprise a plane face image sample and a depth face image sample.
In one embodiment, the acquired multi-modal face image samples may be preprocessed, the preprocessing including: and detecting a privacy area containing biological information in the obtained multi-modal face image sample, namely a face area, and further removing images in other areas except the face area, so that only the face area is reserved as the multi-modal face image sample for subsequent model training.
And S204, desensitizing the planar face image sample to obtain a desensitized planar face image sample.
The desensitization processing method adopted for the plane face image sample can comprise at least one of the following steps: wavelet transformation of images, homomorphic encryption algorithm, row-column transformation algorithm and the like.
For example, assuming that the planar face image sample is an RGB face image with a resolution of H × W, a three-level wavelet transform may be applied to the RGB face image sample to obtain a wavelet-transformed image sample with a resolution of H × W.
For another example, assuming that the planar face image sample is represented in the form of an image matrix, random transformation may be performed on the designated row/column in the image matrix corresponding to the planar face image sample to obtain an image sample after row-column transformation.
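The two desensitization examples above can be sketched as follows; the wavelet family, the decomposition level and the specific rows being swapped are illustrative assumptions rather than requirements of the method:

import numpy as np
import pywt

def desensitize_wavelet(planar: np.ndarray, levels: int = 3):
    # Three-level 2D wavelet decomposition of a single-channel planar face image;
    # the returned coefficients replace the visually recognizable face image.
    return pywt.wavedec2(planar.astype(np.float32), wavelet="haar", level=levels)

def desensitize_row_swap(planar: np.ndarray, i: int, j: int) -> np.ndarray:
    # Swap rows i and j of the image matrix; the operation is reversible
    # by swapping the same two rows again.
    out = planar.copy()
    out[[i, j]] = out[[j, i]]
    return out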
And S206, combining the desensitized planar face image sample and the depth face image sample according to an image combination mode to obtain a target face image sample corresponding to the sample user.
Optionally, the image combination mode is that bitmap information corresponding to the planar face image sample is used to replace bitmap information on the designated number of bits corresponding to the depth face image sample, so as to obtain a target face image sample corresponding to the sample user. The designated bit number may be any position in bitmap information corresponding to the depth face image sample.
To let the combined target face image sample retain as much useful information as possible (namely, the depth information of the face), the designated bits can be chosen as the bits that store the less important image information in the bitmap, that is, the bitmap information in the depth face image sample other than the depth face feature information. For example, a planar face image sample usually stores image information in an 8-bit bitmap, while a depth face image sample usually stores image information in a 16-bit bitmap whose important information (including the depth information of the face) is generally stored in the first 8 bits, so the last 8 bits of the bitmap information corresponding to the depth face image sample can be selected as the designated bits. On this basis, the bitmap information corresponding to the planar face image sample replaces the bitmap information of the last 8 bits corresponding to the depth face image sample, thereby yielding the target face image sample. Visually, the target face image sample still looks like a depth face image sample, except that its last 8 bits of bitmap information have been replaced by the bitmap information corresponding to the desensitized planar face image sample.
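A minimal sketch of this bit-replacement combination, assuming an 8-bit planar bitmap and a 16-bit depth bitmap of the same size (the dtype requirements and helper name are illustrative):

import numpy as np

def combine_into_depth(planar_8bit: np.ndarray, depth_16bit: np.ndarray) -> np.ndarray:
    # Overwrite the low 8 bits of the 16-bit depth bitmap with the 8-bit planar
    # bitmap; the high 8 bits, which carry the depth information, stay untouched.
    assert planar_8bit.dtype == np.uint8 and depth_16bit.dtype == np.uint16
    assert planar_8bit.shape == depth_16bit.shape
    high_byte = depth_16bit & np.uint16(0xFF00)
    return high_byte | planar_8bit.astype(np.uint16)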
And S208, performing model training by taking the target face image sample as input data and taking the face feature information of the sample user as output data to obtain a multi-modal face recognition model.
The face feature information of the sample user can be determined by a neural network used for model training. Optionally, the target face image sample is input into the neural network, the neural network splits bitmap information corresponding to the target face image sample according to an image splitting mode corresponding to the image combining mode to obtain bitmap information corresponding to the planar face image sample and bitmap information corresponding to the depth face image sample, and then face feature information in each bitmap information is respectively extracted to obtain planar face feature information and depth face feature information of the sample user. And determining the face feature information of the sample user according to the plane face feature information and the depth face feature information of the sample user, wherein the face feature information comprises the plane face feature information and the depth face feature information.
In the above embodiment, the image combination mode may be that bitmap information corresponding to the planar face image sample is used to replace bitmap information on the specified number of bits corresponding to the depth face image sample, and then the image splitting mode corresponding to the image combination mode is: and extracting bitmap information on the designated digit corresponding to the target face image sample as bitmap information corresponding to the plane face image sample, wherein the rest un-extracted bitmap information is the bitmap information corresponding to the depth face image sample.
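The corresponding splitting step can be sketched as the inverse of the combination above, under the same assumptions:

import numpy as np

def split_target_image(target_16bit: np.ndarray):
    # Low byte: bitmap information of the (desensitized) planar face image.
    planar_8bit = (target_16bit & np.uint16(0x00FF)).astype(np.uint8)
    # Remaining bits: bitmap information of the depth face image.
    depth_16bit = target_16bit & np.uint16(0xFF00)
    return planar_8bit, depth_16bit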
In the embodiment, the multi-modal face recognition model is trained in advance based on the multi-modal face image samples corresponding to the sample users, so that the multi-modal face recognition model has the face recognition function according to the multi-modal face images, the face recognition effect based on the plane face feature information and the depth face feature information is realized, and compared with the conventional face recognition mode based on only single face feature information (such as plane face feature information or depth face feature information), the technical scheme further improves the accuracy of the face recognition result.
In one embodiment, after training the multi-modal face recognition model, the multi-modal face recognition model may be deployed in the cloud and/or the terminal, and the terminal may be any electronic device with a face recognition function, such as a mobile terminal, a tablet computer, a computer, and the like. The multi-modal face recognition model is deployed at the cloud and/or the terminal, so that the subsequent face recognition stage can be completed at the cloud and/or the terminal, and the multi-modal face recognition model can be selectively deployed at the cloud and/or the terminal based on the deployment mode required by the use scene, so that the use scene of the multi-modal face recognition model is more diversified.
The training and deployment process of the multi-modal face recognition model is described in detail above, and after the multi-modal face recognition model is successfully deployed, the multi-modal face recognition model can be used for face recognition.
In an embodiment, if the multi-modal face recognition model is deployed in the cloud, the terminal (i.e., the electronic device performing face recognition) may upload the target face image to the cloud when performing the above S106 (i.e., performing face recognition on the target user by using the pre-trained multi-modal face recognition model based on the target face image), and the cloud is configured to perform face recognition on the target user by using the multi-modal face recognition model according to the target face image and send a face recognition result corresponding to the target user to the terminal. And the terminal receives the face recognition result issued by the cloud and performs subsequent operation based on the face recognition result.
If the multi-modal face recognition model is deployed locally at the terminal, the terminal (i.e., the electronic device performing face recognition) can perform face recognition on the target user directly based on the target face image and by using the locally deployed multi-modal face recognition model when performing the above S106 (i.e., performing face recognition on the target user by using the pre-trained multi-modal face recognition model based on the target face image), so as to obtain a face recognition result.
Therefore, in the embodiment, the multi-modal face recognition model is deployed at the cloud and/or the terminal, so that the subsequent face recognition stage can be completed at the cloud and/or the terminal, and the multi-modal face recognition model can be selectively deployed at the cloud and/or the terminal based on the deployment mode required by the use scene, so that the use scene of the multi-modal face recognition model is more diversified.
In an embodiment, after acquiring the multi-modal face image of the target user (i.e., S102), desensitization processing may be performed on the planar face image according to a preset desensitization processing mode to obtain a desensitized planar face image, and then the desensitized planar face image and the depth face image are combined according to a preset image combination mode to obtain a target face image.
Wherein the preset desensitization treatment mode can comprise at least one of the following items: wavelet transformation of images, homomorphic encryption algorithm, row-column transformation algorithm and the like.
For example, if the planar face image is an RGB face image with a resolution of H × W, a three-level wavelet transform may be applied to the RGB face image to obtain a wavelet-transformed image with a resolution of H × W, and the wavelet-transformed image and the depth face image are then combined to obtain the target face image.
For another example, assuming that the planar face image is represented in the form of an image matrix, the specified rows/columns in the image matrix corresponding to the planar face image may be randomly transformed to obtain an image after row-column transformation, and then the image after row-column transformation and the depth face image are combined to obtain the target face image.
In the embodiment, from the perspective of human vision, the desensitized planar face image is not displayed as a face image in most cases, and therefore, by combining the desensitized planar face image into the deep face image, the combined target face image has a very strong privacy protection characteristic visually, so that the condition that user privacy is leaked in the face recognition process is avoided, and the safety of the face recognition of the user is improved. Moreover, because the face recognition process depends on the planar face feature information and the deep face feature information, even if the planar face image is subjected to desensitization processing, the accuracy of the final face recognition result cannot be influenced by the reduction of the face recognition performance possibly caused by the desensitization processing.
In one embodiment, when the planar face image and the depth face image are combined according to a preset image combination mode, the first bitmap information corresponding to the planar face image can be used for replacing the second bitmap information corresponding to the depth face image and on the designated digit, so that the target face image is obtained.
The bitmap lengths corresponding to the first bitmap information and the second bitmap information are equal, and the designated number of bits can be any position in the bitmap information corresponding to the depth face image sample.
To let the combined target face image retain as much useful information as possible (namely, the depth information of the face), the designated bits can be chosen as the bits that store the less important image information in the bitmap, that is, the bitmap information in the depth face image other than the depth face feature information. For example, a planar face image usually stores image information in an 8-bit bitmap, while a depth face image usually stores image information in a 16-bit bitmap whose important information (including the depth information of the face) is generally stored in the first 8 bits, so the last 8 bits of the bitmap information corresponding to the depth face image can be selected as the designated bits. On this basis, the bitmap information corresponding to the planar face image replaces the bitmap information of the last 8 bits corresponding to the depth face image, thereby yielding the target face image. Visually, the target face image still looks like a depth face image, except that its last 8 bits of bitmap information have been replaced by the bitmap information corresponding to the desensitized planar face image.
In one embodiment, the preset image combination manner may also be: and carrying out weighted average on the plane face image and the depth face image according to a certain weight. For example, weights corresponding to the planar face image and the depth face image respectively may be preset, and when the planar face image and the depth face image are combined, the planar face image and the depth face image may be weighted and combined according to the weights corresponding to the planar face image and the depth face image respectively, so as to obtain a target face image.
For example, denote the bitmap information of the planar face image as F1 and the bitmap information of the depth face image as F2. If the preset weight corresponding to the planar face image is a and the preset weight corresponding to the depth face image is b, the target face image obtained by the image combination mode of this embodiment can be represented as: F1 × a + F2 × b.
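A short sketch of this weighted combination; the weight values below are arbitrary examples, not values prescribed by the method:

import numpy as np

def combine_weighted(planar: np.ndarray, depth: np.ndarray,
                     a: float = 0.3, b: float = 0.7) -> np.ndarray:
    # Target face image = F1 * a + F2 * b, computed per pixel.
    return planar.astype(np.float32) * a + depth.astype(np.float32) * b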
It should be noted that the image combination mode used in the face recognition process is consistent with the image combination mode used in training the multi-modal face recognition model, so that the accuracy of the face recognition result can be ensured.
In one embodiment, S106 is executed, that is, when a pre-trained multi-modal face recognition model is used to recognize a target face image, the third bitmap information corresponding to the target face image may be firstly split according to an image splitting manner corresponding to an image combining manner, so as to obtain fourth bitmap information corresponding to a planar face image of a target user and fifth bitmap information corresponding to a deep face image; then, extracting plane face feature information corresponding to a target user from the fourth bitmap information, and extracting depth face feature information corresponding to the target user from the fifth bitmap information; and then carrying out face recognition on the target user according to the extracted planar face feature information and the extracted depth face image feature information to obtain a face recognition result.
In this embodiment, if the planar face image is a desensitized planar face image, when extracting planar face feature information corresponding to a target user from the fourth bitmap information, performing a sensitive information restoring operation on the fourth bitmap information according to a sensitive information restoring mode corresponding to the desensitization processing mode, so as to obtain restored fourth bitmap information; and further extracting the planar face feature information corresponding to the target user from the restored fourth bitmap information.
Wherein, the desensitization treatment mode can comprise at least one of the following: wavelet transformation of images, homomorphic encryption algorithm, row-column transformation algorithm and the like. The sensitive information restoration mode corresponding to the desensitization treatment mode may include at least one of the following: the image inverse wavelet transform, homomorphic decryption algorithm, row-column inverse transform algorithm and the like.
For example, assuming that the fourth bitmap information obtained after splitting is an RGB image with a resolution of H × W subjected to wavelet transform, the RGB image may be subjected to inverse wavelet transform, so as to restore an RGB face image with a resolution of H × W.
As another example, assume that the planar face image is characterized in the form of an image matrix and that the row-column transformation algorithm swaps the bitmap information of the i-th and j-th rows. The fourth bitmap information obtained after splitting is then a planar image whose i-th and j-th rows have been exchanged, so exchanging the bitmap information of the i-th and j-th rows of that planar image once more restores the planar face image.
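Both restoration examples can be sketched as follows, mirroring the desensitization sketches earlier (the wavelet family and row indices are again illustrative assumptions):

import numpy as np
import pywt

def restore_row_swap(desensitized: np.ndarray, i: int, j: int) -> np.ndarray:
    # Swapping rows i and j a second time restores the original planar image.
    out = desensitized.copy()
    out[[i, j]] = out[[j, i]]
    return out

def restore_wavelet(coeffs) -> np.ndarray:
    # Inverse 2D wavelet transform recovers the planar face image from its
    # wavelet coefficients.
    return pywt.waverec2(coeffs, wavelet="haar")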
Fig. 3 is a schematic swim lane diagram of a face recognition method based on privacy protection according to an embodiment of the present disclosure. In this embodiment, the planar face image is an RGB face image, and the depth face image is a 3D face image. As shown in fig. 3, the method is applied to the interaction between the user and the terminal, and includes the following steps:
and S3.1, the terminal acquires multi-modal face image samples respectively corresponding to a plurality of sample users, wherein the multi-modal face image samples comprise RGB face image samples and 3D face image samples.
And S3.2, desensitizing the RGB face image sample by the terminal, and recoding the desensitized RGB face image sample and the 3D face image sample to obtain a target face image sample.
The re-encoding is to combine the desensitized RGB face image samples and the 3D face image samples according to a preset image combination method, and the specific image combination method is described in detail in the above embodiments and is not described here again.
And S3.3, the terminal performs model training by taking the target face image sample as input data and taking the face characteristic information of the sample user as output data to obtain a multi-modal face recognition model.
And S3.4, the terminal deploys the multi-mode face recognition model in the local terminal.
And completing the training and deployment process of the multi-modal face recognition model through the S3.1-S3.4.
And S3.5, the user initiates a face recognition request to the terminal.
And S3.6, acquiring the multi-modal face image of the user by the terminal.
The multi-modal face image comprises an RGB face image and a 3D face image.
And S3.7, desensitizing the RGB face image by the terminal, and recoding the desensitized RGB face image and the 3D face image to obtain a target face image.
The re-encoding is to combine the desensitized RGB face image and the 3D face image according to a preset image combination method, and the specific image combination method is described in detail in the above embodiments, and is not described here again. The encoding scheme (i.e., the image combination scheme) for the re-encoding in S3.7 is identical to the encoding scheme for the re-encoding in S3.2.
And S3.8, extracting the face feature information in the target face image by the terminal, and carrying out face recognition on the user by using the locally deployed multi-mode face recognition model.
The face feature information in the target face image comprises RGB face feature information and 3D face feature information.
And S3.9, outputting the face recognition result to the user by the terminal.
In this embodiment, face recognition is performed on the user with the pre-trained multi-modal face recognition model, so the recognition process can rely on RGB face feature information and 3D face feature information at the same time, which improves recognition accuracy. Moreover, a 3D face image is essentially unrecognizable to the human visual system and therefore offers a very good privacy protection effect, so combining the RGB face image into the 3D face image yields a target face image with a strong visual privacy protection characteristic, which prevents user privacy from being leaked during face recognition and improves the security of the user's face recognition. In addition, this embodiment deploys the multi-modal face recognition model on the local terminal, so the face recognition process can be completed locally, which reduces the interaction with the cloud and thus improves face recognition efficiency.
FIG. 4 is a schematic swim lane diagram of a face recognition method based on privacy protection according to another embodiment of the present disclosure. In this embodiment, the planar face image is an RGB face image, and the depth face image is a 3D face image. As shown in fig. 4, the method is applied to the interaction between the user, the terminal and the cloud, and includes the following steps:
and S4.1, the terminal acquires multi-modal face image samples corresponding to a plurality of sample users respectively, wherein the multi-modal face image samples comprise RGB face image samples and 3D face image samples.
And S4.2, training a multi-modal face recognition model by the terminal based on the multi-modal face image sample.
In this step, the training process of the multi-modal face recognition model is similar to the model training process (i.e., S3.2 to S3.4) in the embodiment shown in fig. 3, and details are not repeated in this embodiment.
And S4.3, deploying the multi-mode face recognition model in a cloud terminal by the terminal.
And S4.4, the user initiates a face recognition request to the terminal.
And S4.5, acquiring the multi-modal face image of the user by the terminal.
The multi-modal face image comprises an RGB face image and a 3D face image.
And S4.6, desensitizing the RGB face image by the terminal, and recoding the desensitized RGB face image and the 3D face image to obtain a target face image.
The re-encoding is to combine the desensitized RGB face image and the 3D face image according to a preset image combination method, and the specific image combination method is described in detail in the above embodiments, and is not described here again.
And S4.7, the terminal sends the target face image to a cloud.
And S4.8, extracting the face feature information in the target face image by the cloud, and carrying out face recognition on the user by using a multi-mode face recognition model deployed by the cloud.
The face feature information in the target face image comprises RGB face feature information and 3D face feature information.
And S4.9, the cloud sends the face recognition result to the terminal.
And S4.10, outputting the face recognition result to the user by the terminal.
Optionally, after the terminal acquires the multi-modal face image of the user, the multi-modal face image can be directly uploaded to the cloud, and the cloud acquires face image feature information corresponding to the user based on the multi-modal face image. Or the terminal acquires a multi-modal face image of the user, acquires face image characteristic information corresponding to the user based on the multi-modal face image, and uploads the face image characteristic information to the cloud. That is to say, in the face recognition process shown in fig. 4, the interaction process between the cloud and the terminal can be adjusted as needed, that is, which step in the interaction process is completed by the cloud or the terminal is selected, so that the face recognition efficiency is improved to the greatest extent.
In this embodiment, face recognition is performed on the user with the pre-trained multi-modal face recognition model, so the recognition process can rely on RGB face feature information and 3D face feature information at the same time, which improves recognition accuracy. Moreover, a 3D face image is essentially unrecognizable to the human visual system and therefore offers a very good privacy protection effect, so combining the RGB face image into the 3D face image yields a target face image with a strong visual privacy protection characteristic, which prevents user privacy from being leaked during face recognition and improves the security of the user's face recognition. Furthermore, this embodiment deploys the multi-modal face recognition model in the cloud, so the face recognition process can be completed in the cloud; because the cloud processes data quickly and can bear greater data processing pressure, completing face recognition jointly through the interaction between the cloud and the terminal can improve recognition efficiency to a certain extent and relieve the data processing pressure on the terminal.
It should be noted that the face recognition method based on privacy protection provided in one or more embodiments of the present disclosure is not limited to face recognition scenarios, but may also be applied to other biometric scenarios, such as human body recognition and iris recognition; the privacy-protection-based recognition methods used in different biometric scenarios are similar, differing only in the biometric features relied on during recognition.
In summary, particular embodiments of the present subject matter have been described. Other embodiments are within the scope of the following claims. In some cases, the actions recited in the claims can be performed in a different order and still achieve desirable results. In addition, the processes depicted in the accompanying figures do not necessarily require the particular order shown, or sequential order, to achieve desirable results. In some embodiments, multitasking and parallel processing may be advantageous.
Based on the same idea, the face recognition method based on privacy protection provided in one or more embodiments of the present specification further provides a face recognition apparatus based on privacy protection.
Fig. 5 is a schematic block diagram of a face recognition device based on privacy protection according to an embodiment of the present specification. As shown in fig. 5, the apparatus includes:
an acquisition module 510, which acquires a multi-modal face image of a target user; the multi-modal facial image comprises a plane facial image and a depth facial image; the plane face image comprises privacy information of the target user;
the first combination module 520 combines the planar face image and the depth face image according to a preset image combination mode to obtain a target face image corresponding to the target user; the target face image comprises plane face feature information and depth face feature information;
a face recognition module 530, configured to perform face recognition on the target user by using a pre-trained multi-modal face recognition model based on the target face image, so as to obtain a face recognition result corresponding to the target user; the multi-modal face recognition model is obtained by performing model training based on a plurality of target face image samples; and the target face image sample is obtained by combining the multi-modal face image samples according to the image combination mode.
In one embodiment, the apparatus further comprises:
the first desensitization module is used for performing desensitization treatment on the planar face image according to a preset desensitization treatment mode after the multi-modal face image of the target user is acquired, so that the desensitized planar face image is obtained;
the first assembling module 520 includes:
and the first combination unit is used for combining the desensitized plane face image and the depth face image according to a preset image combination mode to obtain the target face image.
In one embodiment, the first assembly module 520 includes:
the replacing unit is used for replacing second bitmap information on a specified digit corresponding to the depth face image by utilizing first bitmap information corresponding to the plane face image to obtain the target face image; and the bitmap lengths corresponding to the first bitmap information and the second bitmap information are equal.
In one embodiment, the second bitmap information on the designated bits comprises: the bitmap information in the depth face image other than the depth face feature information.
In one embodiment, the first assembly module 520 includes:
the determining unit is used for determining weights corresponding to the plane face image and the depth face image respectively;
and the second combination unit is used for carrying out weighted combination on the plane face image and the depth face image according to the weights respectively corresponding to the plane face image and the depth face image to obtain the target face image.
In one embodiment, the apparatus further comprises:
the acquisition module is used for acquiring the multi-modal face image samples corresponding to a plurality of sample users respectively before acquiring the multi-modal face image of the target user; the multi-modal face image samples comprise a plane face image sample and a depth face image sample;
the second desensitization module is used for performing desensitization treatment on the plane face image sample to obtain the desensitized plane face image sample;
the second combination module is used for combining the desensitized planar face image sample and the depth face image sample according to the image combination mode to obtain the target face image sample corresponding to the sample user;
and the model training module is used for performing model training by taking the target face image sample as input data and taking the face characteristic information of the sample user as output data to obtain the multi-modal face recognition model.
In one embodiment, the apparatus further comprises:
the deployment module is used for deploying the multi-modal face recognition model in a cloud and/or a terminal after model training is carried out by taking the target face image sample as input data and taking the face characteristic information of the sample user as output data to obtain the multi-modal face recognition model;
the face recognition module 530 includes:
the first recognition unit is used for carrying out face recognition on the target user by utilizing the multi-mode face recognition model deployed on the terminal based on the target face image; and/or the presence of a gas in the gas,
the second identification unit uploads the target face image to the cloud; the cloud end is used for carrying out face recognition on the target user according to the target face image and by utilizing the multi-mode face recognition model, and sending a face recognition result corresponding to the target user to the terminal; and receiving the face recognition result issued by the cloud.
In one embodiment, the face recognition module 530 includes:
the splitting unit is used for splitting third bitmap information corresponding to the target face image according to an image splitting mode corresponding to the image combination mode to obtain fourth bitmap information corresponding to the plane face image of the target user and fifth bitmap information corresponding to the depth face image;
the extracting unit is used for extracting the plane face feature information corresponding to the target user from the fourth bitmap information and extracting the depth face feature information corresponding to the target user from the fifth bitmap information;
and the third identification unit is used for carrying out face identification on the target user according to the plane face characteristic information and the depth face image characteristic information to obtain the face identification result.
In one embodiment, the planar face image is a desensitized planar face image;
the extraction unit performs sensitive information reduction operation on the fourth bitmap information according to a sensitive information reduction mode corresponding to the desensitization processing mode to obtain the reduced fourth bitmap information; and extracting the plane face feature information corresponding to the target user from the restored fourth bitmap information.
In one embodiment, the desensitization treatment comprises at least one of: wavelet transformation, homomorphic encryption algorithm and row-column transformation algorithm of the image.
In one embodiment, the planar face image comprises a near-infrared face image and/or a color face image.
By adopting the device of one or more embodiments of the present specification, the multi-modal face image of the target user is collected, including the planar face image and the deep face image, and the collected planar face image and the collected deep face image are combined into the target face image simultaneously including the planar face feature information and the deep face feature information, so that the target user is subjected to face recognition based on the target face image and by using the pre-trained multi-modal face recognition model, the face recognition process of the target user can depend on the planar face feature information and the deep face feature information at the same time, and the accuracy of the face recognition is improved. And because the depth face image basically has no identification degree for a human visual system and has a very good privacy protection effect, the plane face image is combined into the depth face image, and the target face image obtained by combination has a strong privacy protection characteristic in vision, so that the condition that the user privacy is leaked in the face recognition process is avoided, and the safety of the user face recognition is improved.
It should be understood by those skilled in the art that the above-mentioned face recognition apparatus based on privacy protection can be used to implement the above-mentioned face recognition method based on privacy protection, and the detailed description thereof should be similar to that of the above-mentioned method, and in order to avoid complexity, no further description is provided herein.
Based on the same idea, one or more embodiments of the present specification further provide a face recognition device based on privacy protection, as shown in FIG. 6. Face recognition devices based on privacy protection may vary considerably in configuration and performance, and may include one or more processors 601 and a memory 602, in which one or more applications or data may be stored. The memory 602 may provide transient or persistent storage. The application program stored in the memory 602 may include one or more modules (not shown), and each module may include a series of computer-executable instructions for the privacy-protection-based face recognition device. Further, the processor 601 may be configured to communicate with the memory 602 and execute the series of computer-executable instructions in the memory 602 on the privacy-protection-based face recognition device. The privacy-protection-based face recognition device may also include one or more power supplies 603, one or more wired or wireless network interfaces 604, one or more input/output interfaces 605, and one or more keyboards 606.
In particular, in this embodiment, the privacy-based face recognition apparatus includes a memory, and one or more programs, where the one or more programs are stored in the memory, and the one or more programs may include one or more modules, and each module may include a series of computer-executable instructions for the privacy-based face recognition apparatus, and the one or more programs configured to be executed by the one or more processors include computer-executable instructions for:
acquiring a multi-modal face image of a target user; the multi-modal facial image comprises a plane facial image and a depth facial image; the plane face image comprises privacy information of the target user;
combining the plane face image and the depth face image according to a preset image combination mode to obtain a target face image corresponding to the target user; the target face image comprises plane face feature information and depth face feature information;
based on the target face image, performing face recognition on the target user by using a pre-trained multi-mode face recognition model to obtain a face recognition result corresponding to the target user; the multi-modal face recognition model is obtained by performing model training based on a plurality of target face image samples; and the target face image sample is obtained by combining the multi-modal face image samples according to the image combination mode.
One or more embodiments of the present specification further provide a storage medium, where the storage medium stores one or more computer programs, where the one or more computer programs include instructions, and when the instructions are executed by an electronic device including multiple application programs, the electronic device can execute each process of the above-mentioned embodiment of the face recognition method based on privacy protection, and can achieve the same technical effect, and details are not described here to avoid repetition.
The systems, devices, modules or units illustrated in the above embodiments may be implemented by a computer chip or an entity, or by a product with certain functions. One typical implementation device is a computer. In particular, the computer may be, for example, a personal computer, a laptop computer, a cellular telephone, a camera phone, a smartphone, a personal digital assistant, a media player, a navigation device, an email device, a game console, a tablet computer, a wearable device, or a combination of any of these devices.
For convenience of description, the above devices are described as being divided into various units by function, and are described separately. Of course, the functionality of the various elements may be implemented in the same one or more software and/or hardware implementations in implementing one or more embodiments of the present description.
One skilled in the art will recognize that one or more embodiments of the present description may be provided as a method, system, or computer program product. Accordingly, one or more embodiments of the present description may take the form of an entirely hardware embodiment, an entirely software embodiment or an embodiment combining software and hardware aspects. Furthermore, one or more embodiments of the present description may take the form of a computer program product embodied on one or more computer-usable storage media (including, but not limited to, disk storage, CD-ROM, optical storage, and the like) having computer-usable program code embodied therein.
One or more embodiments of the present specification are described with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems), and computer program products according to embodiments of the specification. It will be understood that each flow and/or block of the flow diagrams and/or block diagrams, and combinations of flows and/or blocks in the flow diagrams and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, embedded processor, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing apparatus to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means which implement the function specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be loaded onto a computer or other programmable data processing apparatus to cause a series of operational steps to be performed on the computer or other programmable apparatus to produce a computer implemented process such that the instructions which execute on the computer or other programmable apparatus provide steps for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
In a typical configuration, a computing device includes one or more processors (CPUs), input/output interfaces, network interfaces, and memory.
The memory may include volatile memory in a computer-readable medium, such as Random Access Memory (RAM), and/or non-volatile memory, such as Read Only Memory (ROM) or flash memory (flash RAM). Memory is an example of a computer-readable medium.
Computer-readable media, including both permanent and non-permanent, removable and non-removable media, may implement information storage by any method or technology. The information may be computer readable instructions, data structures, modules of a program, or other data. Examples of computer storage media include, but are not limited to, phase change memory (PRAM), Static Random Access Memory (SRAM), Dynamic Random Access Memory (DRAM), other types of Random Access Memory (RAM), Read Only Memory (ROM), Electrically Erasable Programmable Read Only Memory (EEPROM), flash memory or other memory technology, compact disc read only memory (CD-ROM), Digital Versatile Discs (DVD) or other optical storage, magnetic cassettes, magnetic tape, magnetic disk storage or other magnetic storage devices, or any other non-transmission medium that can be used to store information that can be accessed by a computing device. As defined herein, a computer-readable medium does not include transitory computer-readable media such as modulated data signals and carrier waves.
It should also be noted that the terms "comprises," "comprising," or any other variation thereof are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements includes not only those elements but may also include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising a …" does not exclude the presence of other like elements in a process, method, article, or apparatus that comprises the element.
One or more embodiments of the present description may be described in the general context of computer-executable instructions, such as program modules, being executed by a computer. Generally, program modules include routines, programs, objects, components, data structures, etc. that perform particular tasks or implement particular abstract data types. The specification may also be practiced in distributed computing environments where tasks are performed by remote processing devices that are linked through a communications network. In a distributed computing environment, program modules may be located in both local and remote computer storage media including memory storage devices.
The embodiments in the present specification are described in a progressive manner, and the same and similar parts among the embodiments are referred to each other, and each embodiment focuses on the differences from the other embodiments. In particular, for the system embodiment, since it is substantially similar to the method embodiment, the description is simple, and for the relevant points, reference may be made to the partial description of the method embodiment.
The above description is only one or more embodiments of the present disclosure, and is not intended to limit the present disclosure. Various modifications and alterations to one or more embodiments described herein will be apparent to those skilled in the art. Any modification, equivalent replacement, improvement, etc. made within the spirit and principle of one or more embodiments of the present specification should be included in the scope of claims of one or more embodiments of the present specification.

Claims (17)

1. A face recognition method based on privacy protection comprises the following steps:
acquiring a multi-modal face image of a target user; the multi-modal face image comprises a plane face image and a depth face image; the plane face image comprises privacy information of the target user;
combining the plane face image and the depth face image according to a preset image combination mode to obtain a target face image corresponding to the target user; the target face image comprises plane face feature information and depth face feature information;
based on the target face image, performing face recognition on the target user by using a pre-trained multi-modal face recognition model to obtain a face recognition result corresponding to the target user; the multi-modal face recognition model is obtained by performing model training based on a plurality of target face image samples; and the target face image sample is obtained by combining the multi-modal face image samples according to the image combination mode.
2. The method of claim 1, after the acquiring the multi-modal face image of the target user, further comprising:
performing desensitization processing on the planar face image according to a preset desensitization processing mode to obtain the desensitized planar face image;
the combining the planar face image and the depth face image according to a preset image combination mode to obtain a target face image corresponding to the target user comprises:
and combining the desensitized plane face image and the depth face image according to a preset image combination mode to obtain the target face image.
3. The method of claim 1, wherein the combining the planar face image and the deep face image according to a preset image combination manner to obtain a target face image corresponding to the target user comprises:
replacing second bitmap information on designated bits corresponding to the depth face image with first bitmap information corresponding to the plane face image, to obtain the target face image; and the bitmap lengths corresponding to the first bitmap information and the second bitmap information are equal.
4. The method of claim 3, wherein the second bitmap information on the designated bits comprises: bitmap information, in the depth face image, other than the depth face feature information.
5. The method of claim 1, wherein the combining the planar face image and the deep face image according to a preset image combination manner to obtain a target face image corresponding to the target user comprises:
determining weights corresponding to the plane face image and the depth face image respectively;
and according to the weights respectively corresponding to the plane face image and the depth face image, carrying out weighted combination on the plane face image and the depth face image to obtain the target face image.
6. The method of claim 1, prior to the acquiring the multi-modal face image of the target user, further comprising:
acquiring the multi-modal face image samples corresponding to a plurality of sample users respectively; the multi-modal face image samples comprise a plane face image sample and a depth face image sample;
desensitizing the planar face image sample to obtain a desensitized planar face image sample;
combining the desensitized planar face image sample and the depth face image sample according to the image combination mode to obtain the target face image sample corresponding to the sample user;
and performing model training by taking the target face image sample as input data and taking the face feature information of the sample user as output data to obtain the multi-modal face recognition model.
7. The method of claim 6, wherein after performing model training by using the target face image sample as input data and using the face feature information of the sample user as output data to obtain the multi-modal face recognition model, the method further comprises:
deploying the multi-modal face recognition model at a cloud and/or a terminal;
the performing face recognition on the target user by using the pre-trained multi-modal face recognition model based on the target face image comprises:
based on the target face image, carrying out face recognition on the target user by utilizing the multi-modal face recognition model deployed on the terminal; and/or,
uploading the target face image to the cloud; the cloud is used for carrying out face recognition on the target user according to the target face image by utilizing the multi-modal face recognition model, and sending a face recognition result corresponding to the target user to the terminal; and receiving the face recognition result issued by the cloud.
8. The method according to claim 2, wherein the performing face recognition on the target user based on the target face image by using the pre-trained multi-modal face recognition model to obtain the face recognition result corresponding to the target user comprises:
splitting third bitmap information corresponding to the target face image according to an image splitting mode corresponding to the image combination mode to obtain fourth bitmap information corresponding to the plane face image of the target user and fifth bitmap information corresponding to the depth face image;
extracting the plane face feature information corresponding to the target user from the fourth bitmap information, and extracting the depth face feature information corresponding to the target user from the fifth bitmap information;
and carrying out face recognition on the target user according to the plane face feature information and the depth face feature information, to obtain the face recognition result.
9. The method of claim 8, wherein the planar face image is a desensitized planar face image;
the extracting the planar face feature information corresponding to the target user from the fourth bitmap information includes:
performing a sensitive information restoration operation on the fourth bitmap information according to a sensitive information restoration mode corresponding to the desensitization processing mode, to obtain the restored fourth bitmap information;
and extracting the plane face feature information corresponding to the target user from the restored fourth bitmap information.
10. The method of claim 2, wherein the desensitization processing mode comprises at least one of: wavelet transformation of the image, a homomorphic encryption algorithm, and a row-column transformation algorithm.
11. The method of claim 1, the planar face image comprising a near-infrared face image and/or a color face image.
12. A privacy protection based face recognition apparatus, comprising:
the acquisition module is used for acquiring a multi-modal face image of a target user; the multi-modal face image comprises a plane face image and a depth face image; the plane face image comprises privacy information of the target user;
the first combination module is used for combining the plane face image and the depth face image according to a preset image combination mode to obtain a target face image corresponding to the target user; the target face image comprises plane face feature information and depth face feature information;
the face recognition module is used for carrying out face recognition on the target user by utilizing a pre-trained multi-modal face recognition model based on the target face image to obtain a face recognition result corresponding to the target user; the multi-modal face recognition model is obtained by performing model training based on a plurality of target face image samples; and the target face image sample is obtained by combining the multi-modal face image samples according to the image combination mode.
13. The apparatus of claim 12, further comprising:
the first desensitization module is used for performing desensitization processing on the planar face image according to a preset desensitization processing mode after the multi-modal face image of the target user is acquired, to obtain the desensitized planar face image;
the first combination module includes:
and the first combination unit is used for combining the desensitized plane face image and the depth face image according to a preset image combination mode to obtain the target face image.
14. The apparatus of claim 12, wherein the first combination module comprises:
the replacing unit is used for replacing second bitmap information on designated bits corresponding to the depth face image with first bitmap information corresponding to the plane face image, to obtain the target face image; and the bitmap lengths corresponding to the first bitmap information and the second bitmap information are equal.
15. The apparatus of claim 14, further comprising:
the acquisition module is used for acquiring the multi-modal face image samples corresponding to a plurality of sample users respectively before acquiring the multi-modal face image of the target user; the multi-modal face image samples comprise a plane face image sample and a depth face image sample;
the second desensitization module is used for performing desensitization treatment on the plane face image sample to obtain the desensitized plane face image sample;
the second combination module is used for combining the desensitized planar face image sample and the depth face image sample according to the image combination mode to obtain the target face image sample corresponding to the sample user;
and the model training module is used for performing model training by taking the target face image sample as input data and taking the face characteristic information of the sample user as output data to obtain the multi-modal face recognition model.
16. A privacy protection based face recognition device comprising a processor and a memory electrically connected to the processor, the memory storing a computer program, the processor being configured to invoke and execute the computer program from the memory to implement:
acquiring a multi-modal face image of a target user; the multi-modal face image comprises a plane face image and a depth face image; the plane face image comprises privacy information of the target user;
combining the plane face image and the depth face image according to a preset image combination mode to obtain a target face image corresponding to the target user; the target face image comprises plane face feature information and depth face feature information;
based on the target face image, performing face recognition on the target user by using a pre-trained multi-modal face recognition model to obtain a face recognition result corresponding to the target user; the multi-modal face recognition model is obtained by performing model training based on a plurality of target face image samples; and the target face image sample is obtained by combining the multi-modal face image samples according to the image combination mode.
17. A storage medium storing a computer program executable by a processor to implement the following:
acquiring a multi-modal face image of a target user; the multi-modal face image comprises a plane face image and a depth face image; the plane face image comprises privacy information of the target user;
combining the plane face image and the depth face image according to a preset image combination mode to obtain a target face image corresponding to the target user; the target face image comprises plane face feature information and depth face feature information;
based on the target face image, performing face recognition on the target user by using a pre-trained multi-modal face recognition model to obtain a face recognition result corresponding to the target user; the multi-modal face recognition model is obtained by performing model training based on a plurality of target face image samples; and the target face image sample is obtained by combining the multi-modal face image samples according to the image combination mode.
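As an illustration of the sample preparation described in claims 2, 6, 9 and 10, the following is a minimal sketch in Python, assuming that the row-column transformation algorithm is chosen as the desensitization processing mode and that the plane face image sample is embedded into the low 8 bits of a 16-bit depth face image sample; the permutation key, function names and bit layout are illustrative assumptions and are not taken from this specification.

    import numpy as np

    def desensitize_rows_cols(plane_u8: np.ndarray, key: int) -> np.ndarray:
        """Desensitization processing: permute the rows and columns of the plane face
        image with a key-derived permutation; the privacy information is scrambled
        while every pixel value is preserved, so the operation is reversible."""
        rng = np.random.default_rng(key)
        row_perm = rng.permutation(plane_u8.shape[0])
        col_perm = rng.permutation(plane_u8.shape[1])
        return plane_u8[row_perm][:, col_perm]

    def restore_rows_cols(desensitized_u8: np.ndarray, key: int) -> np.ndarray:
        """Sensitive information restoration corresponding to the desensitization mode
        above: regenerate the same permutations from the key and invert them."""
        rng = np.random.default_rng(key)
        row_perm = rng.permutation(desensitized_u8.shape[0])
        col_perm = rng.permutation(desensitized_u8.shape[1])
        restored = np.empty_like(desensitized_u8)
        restored[row_perm[:, None], col_perm[None, :]] = desensitized_u8
        return restored

    def build_target_face_image_sample(plane_u8: np.ndarray, depth_u16: np.ndarray,
                                       key: int) -> np.ndarray:
        """Prepare one training sample: desensitize the plane face image sample, then
        combine it with the depth face image sample by bit replacement (plane image
        in the low 8 bits of the 16-bit depth map)."""
        desensitized = desensitize_rows_cols(plane_u8, key)
        return (depth_u16 & np.uint16(0xFF00)) | desensitized.astype(np.uint16)

A model trained on such samples never receives the plane face image in the clear; at recognition time the target face image can be split back into its two modalities, the plane modality restored with restore_rows_cols, and the plane and depth face feature information extracted separately, in line with claims 8 and 9.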
CN202110102508.2A 2021-01-26 2021-01-26 Face recognition method and device based on privacy protection Active CN112766197B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110102508.2A CN112766197B (en) 2021-01-26 2021-01-26 Face recognition method and device based on privacy protection

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202110102508.2A CN112766197B (en) 2021-01-26 2021-01-26 Face recognition method and device based on privacy protection

Publications (2)

Publication Number Publication Date
CN112766197A true CN112766197A (en) 2021-05-07
CN112766197B CN112766197B (en) 2022-05-17

Family

ID=75705698

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110102508.2A Active CN112766197B (en) 2021-01-26 2021-01-26 Face recognition method and device based on privacy protection

Country Status (1)

Country Link
CN (1) CN112766197B (en)

Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP2869239A2 (en) * 2013-11-04 2015-05-06 Facebook, Inc. Systems and methods for facial representation
CN111194449A (en) * 2017-09-22 2020-05-22 高通股份有限公司 System and method for human face living body detection
CN108594997A (en) * 2018-04-16 2018-09-28 腾讯科技(深圳)有限公司 Gesture framework construction method, apparatus, equipment and storage medium
CN111401331A (en) * 2020-04-27 2020-07-10 支付宝(杭州)信息技术有限公司 Face recognition method and device
CN111641798A (en) * 2020-06-15 2020-09-08 黑龙江科技大学 Video communication method and device
CN112214773A (en) * 2020-09-22 2021-01-12 支付宝(杭州)信息技术有限公司 Image processing method and device based on privacy protection and electronic equipment

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113160348A (en) * 2021-05-20 2021-07-23 深圳文达智通技术有限公司 Recoverable face image privacy protection method, device, equipment and storage medium
CN113222809A (en) * 2021-05-21 2021-08-06 支付宝(杭州)信息技术有限公司 Picture processing method and device for realizing privacy protection
CN115310122A (en) * 2022-07-13 2022-11-08 广州大学 Privacy parameter optimization method in multi-mode data fusion training

Also Published As

Publication number Publication date
CN112766197B (en) 2022-05-17

Similar Documents

Publication Publication Date Title
CN112766197B (en) Face recognition method and device based on privacy protection
CN109934197B (en) Training method and device for face recognition model and computer readable storage medium
CN108846355B (en) Image processing method, face recognition device and computer equipment
CN110490078B (en) Monitoring video processing method, device, computer equipment and storage medium
CN107330904B (en) Image processing method, image processing device, electronic equipment and storage medium
KR102388698B1 (en) Method for enrolling data in a base to protect said data
CN110378301B (en) Pedestrian re-identification method and system
RU2697646C1 (en) Method of biometric authentication of a user and a computing device implementing said method
CN112052834B (en) Face recognition method, device and equipment based on privacy protection
CN109190470B (en) Pedestrian re-identification method and device
CN105389489B (en) User authentication method and device based on electrocardiogram signal
CN112926559B (en) Face image processing method and device
CN112200796B (en) Image processing method, device and equipment based on privacy protection
CN111783146B (en) Image processing method and device based on privacy protection and electronic equipment
CN109766683B (en) Protection method for sensor fingerprint of mobile intelligent device
CN111401331B (en) Face recognition method and device
CN109416734B (en) Adaptive quantization method for iris image coding
US20210342967A1 (en) Method for securing image and electronic device performing same
Menon et al. Iris biometrics using deep convolutional networks
CN110910326B (en) Image processing method and device, processor, electronic equipment and storage medium
CN111160251B (en) Living body identification method and device
CN108875514B (en) Face authentication method and system, authentication device and nonvolatile storage medium
CN113343295B (en) Image processing method, device, equipment and storage medium based on privacy protection
CN110264544B (en) Picture processing method and device, storage medium and electronic device
CN109784157B (en) Image processing method, device and system

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant