CN111539008B - Image processing method and device for protecting privacy


Info

Publication number
CN111539008B
Authority
CN
China
Prior art keywords
image
images
encrypted
biometric
training
Legal status
Active
Application number
CN202010442259.7A
Other languages
Chinese (zh)
Other versions
CN111539008A (en)
Inventor
张昊
Current Assignee
ANT Financial Hang Zhou Network Technology Co Ltd
Original Assignee
ANT Financial Hang Zhou Network Technology Co Ltd
Application filed by ANT Financial Hang Zhou Network Technology Co Ltd filed Critical ANT Financial Hang Zhou Network Technology Co Ltd
Priority to CN202010442259.7A priority Critical patent/CN111539008B/en
Publication of CN111539008A publication Critical patent/CN111539008A/en
Application granted granted Critical
Publication of CN111539008B publication Critical patent/CN111539008B/en

Classifications

    • G06F 21/602: Providing cryptographic facilities or services (under G06F 21/60 Protecting data; G06F 21/00 Security arrangements for protecting computers, components thereof, programs or data against unauthorised activity)
    • G06F 18/214: Generating training patterns; Bootstrap methods, e.g. bagging or boosting (under G06F 18/21 Design or setup of recognition systems or techniques; G06F 18/00 Pattern recognition)
    • G06F 21/6245: Protecting personal data, e.g. for financial or medical purposes (under G06F 21/62 Protecting access to data via a platform, e.g. using keys or access control rules)
    • G06N 3/045: Combinations of networks (under G06N 3/04 Architecture; G06N 3/02 Neural networks)
    • G06N 3/08: Learning methods (under G06N 3/02 Neural networks)
    • G06V 40/70: Multimodal biometrics, e.g. combining information from different biometric modalities (under G06V 40/00 Recognition of biometric, human-related or animal-related patterns in image or video data)

Abstract

The embodiments of this specification provide a privacy-preserving image processing method and apparatus and a privacy-preserving model training method and apparatus. One embodiment of the image processing method includes: acquiring a mixed image corresponding to a biometric image to be encrypted, wherein the mixed image is obtained by weighted summation of the biometric image and a predetermined number of other images, the predetermined number of other images serve as noise images, and the sum of their weights is smaller than a preset threshold; and subtracting a predetermined image template from the mixed image to obtain an encrypted image corresponding to the biometric image. The biometric image can be regarded as private data of the user. This encryption method protects the user's private data, and the resulting encrypted image can still be used for subsequent model training.

Description

Privacy-protecting image processing method and device
Technical Field
The embodiments of this specification relate to the technical field of data security, and in particular to a privacy-preserving image processing method and apparatus and a privacy-preserving model training method and apparatus.
Background
Currently, many products (e.g., financial products, payment products, security products) make extensive use of biometric images approved by users, such as face images, fingerprint images, and iris images. In addition, large numbers of biometric images are required when training biometric recognition algorithms.
Since a biometric image is private data of the user, a reasonable encryption algorithm is needed to protect this private data by encrypting the biometric image, while keeping the encrypted biometric image usable for subsequent model training.
Disclosure of Invention
The embodiment of the specification provides an image processing method and device for protecting privacy and a model training method and device for protecting privacy.
In a first aspect, an embodiment of this specification provides a privacy-preserving image processing method, including: acquiring a mixed image corresponding to a biometric image to be encrypted, wherein the mixed image is obtained by weighted summation of the biometric image and a predetermined number of other images, the predetermined number of other images serve as noise images, and the sum of their weights is smaller than a preset threshold; and subtracting a predetermined image template from the mixed image to obtain an encrypted image corresponding to the biometric image.
In some embodiments, the biometric image has a corresponding label, and the method further comprises: generating a training sample comprising the encrypted image and the label.
In some embodiments, the biometric image comprises any one of: a face image, a fingerprint image, a finger vein image, an iris image.
In some embodiments, the biometric image to be encrypted is any one image in a predetermined set of biometric images, and the predetermined number of other images are images in the set other than the biometric image to be encrypted.
In some embodiments, acquiring the mixed image corresponding to the biometric image to be encrypted comprises: randomly selecting the predetermined number of images from the images in the biometric image set other than the biometric image to be encrypted; assigning weights to the biometric image to be encrypted and the selected predetermined number of images; and performing weighted summation on the biometric image to be encrypted and the selected predetermined number of images according to the assigned weights to obtain the mixed image.
In some embodiments, assigning weights to the biometric image to be encrypted and the selected predetermined number of images comprises: acquiring a predetermined weight set; and assigning weights to the biometric image to be encrypted and the selected predetermined number of images according to the weight set.
In some embodiments, before acquiring the mixed image corresponding to the biometric image to be encrypted, the method further comprises: acquiring a plurality of images; calculating an average of the plurality of images; and determining the average as the image template.
In some embodiments, the plurality of images and the biometric image to be encrypted are images of the same category.
In a second aspect, an embodiment of this specification provides a privacy-preserving model training method, including: acquiring a training sample set, wherein a training sample comprises an encrypted image and a label, the encrypted image is obtained by subtracting a predetermined image template from a corresponding mixed image, the mixed image is obtained by weighted summation of a corresponding biometric image to be encrypted and a predetermined number of other images, and the label is the label of the biometric image; and training a deep learning model to be trained according to the training sample set to obtain a biometric recognition model.
In some embodiments, the biometric image to be encrypted is any one image in a predetermined set of biometric images, and the predetermined number of other images are images in the set other than the biometric image to be encrypted.
In some embodiments, the biometric image to be encrypted comprises any one of: a face image, a fingerprint image, a finger vein image, an iris image.
In some embodiments, the image template is an average of a plurality of images.
In some embodiments, the plurality of images and the biometric image to be encrypted are images of the same category.
In a third aspect, an embodiment of this specification provides a privacy-preserving image processing apparatus, including: an acquisition unit configured to acquire a mixed image corresponding to a biometric image to be encrypted, wherein the mixed image is obtained by weighted summation of the biometric image and a predetermined number of other images, the predetermined number of other images serve as noise images, and the sum of their weights is smaller than a preset threshold; and an encryption unit configured to subtract a predetermined image template from the mixed image to obtain an encrypted image corresponding to the biometric image.
In a fourth aspect, an embodiment of this specification provides a privacy-preserving model training apparatus, including: an acquisition unit configured to acquire a training sample set, wherein a training sample includes an encrypted image and a label, the encrypted image is obtained by subtracting a predetermined image template from a corresponding mixed image, the mixed image is obtained by weighted summation of a corresponding biometric image to be encrypted and a predetermined number of other images, and the label is the label of the biometric image; and a training unit configured to train a deep learning model to be trained according to the training sample set to obtain a biometric recognition model.
In a fifth aspect, the present specification provides a computer-readable storage medium on which a computer program is stored, wherein when the computer program is executed in a computer, the computer is caused to execute the method described in any implementation manner of the first aspect and the second aspect.
In a sixth aspect, the present specification provides a computing device, including a memory and a processor, where the memory stores executable code, and the processor executes the executable code to implement the method described in any one of the implementation manners of the first aspect and the second aspect.
With the privacy-preserving image processing method and apparatus provided by the above embodiments of this specification, a mixed image corresponding to a biometric image to be encrypted is acquired, where the mixed image is obtained by weighted summation of the biometric image and a predetermined number of other images, the other images serve as noise images, and the sum of their weights is smaller than a preset threshold; a predetermined image template is then subtracted from the mixed image to obtain an encrypted image corresponding to the biometric image. In this way, the user's private data is protected. In addition, because the predetermined number of other images serve as noise images and the sum of their weights is limited to be smaller than the preset threshold, the encrypted image remains algorithmically recognizable and can therefore be used for subsequent model training.
With the privacy-preserving model training method and apparatus provided by the above embodiments of this specification, a training sample set is acquired, where a training sample includes an encrypted image and a label, the encrypted image is obtained by subtracting a predetermined image template from a corresponding mixed image, the mixed image is obtained by weighted summation of a corresponding biometric image to be encrypted and a predetermined number of other images, and the label is the label of the biometric image; a deep learning model to be trained is then trained according to the training sample set to obtain a biometric recognition model, thereby implementing model training based on encrypted images.
Drawings
In order to illustrate the embodiments of this specification or the technical solutions in the prior art more clearly, the drawings used in the description of the embodiments or the prior art are briefly introduced below. It is apparent that the drawings in the following description are only examples or embodiments of this specification; for a person skilled in the art, other drawings can be obtained from the provided drawings without inventive effort, and this specification can also be applied to other similar scenarios based on the provided drawings. Unless it is apparent from the context or otherwise indicated, like reference numbers in the figures refer to the same structure or operation.
FIG. 1 is an exemplary system architecture diagram to which some embodiments of the present description may be applied;
FIG. 2 is a flow diagram of one embodiment of a privacy preserving image processing method according to the present description;
FIG. 3a is a schematic diagram of an encrypted face image acquisition process;
FIG. 3b is another schematic diagram of an encrypted face image acquisition process;
FIG. 3c is a schematic diagram of an application scenario of the privacy preserving image processing method according to the present description;
FIG. 4 is a flow diagram of yet another embodiment of a privacy preserving image processing method according to the present description;
FIG. 5 is a flow diagram for one embodiment of a privacy preserving model training method in accordance with the present description;
FIG. 6 is a schematic diagram of an application scenario of a privacy preserving model training method according to the present description;
fig. 7 is a schematic view of a configuration of an image processing apparatus for protecting privacy according to the present specification;
fig. 8 is a schematic structural diagram of a privacy-preserving model training apparatus according to the present specification.
Detailed Description
This specification is described in further detail below with reference to the accompanying drawings and examples. It should be understood that the specific embodiments described herein are merely illustrative of the relevant invention and do not limit it. The described embodiments are only a subset of the embodiments of this specification, not all of them. All other embodiments obtained by a person skilled in the art based on the embodiments in this specification without inventive effort fall within the scope of protection of the present application.
It should also be noted that, for convenience of description, only the portions related to the relevant invention are shown in the drawings. The embodiments and the features of the embodiments in this specification may be combined with each other in the absence of conflict.
As mentioned above, many products (e.g., financial products, payment products, security products) now make extensive use of user-approved biometric images. For example, existing face-scanning payment products make extensive use of face images licensed by users. In addition, a large number of biometric images are also needed when training biometric recognition algorithms.
Since a biometric image is private data of the user, a reasonable encryption algorithm is needed to protect this private data by encrypting the biometric image, while keeping the encrypted biometric image usable for subsequent model training.
Based on this, some embodiments of the present specification disclose a privacy-preserving image processing method, a privacy-preserving model training method, respectively. In particular, FIG. 1 illustrates an exemplary system architecture diagram suitable for use with these embodiments.
As shown in fig. 1, an image processing end 100 and a model training end 101 are shown. The image processing end 100 may be used to provide an image processing service, and in particular an image encryption service. The model training end 101 may be used to provide a model training service. Specifically, the image processing end 100 may encrypt a biometric image to be encrypted to obtain an encrypted image corresponding to the biometric image. The biometric image may have a corresponding label, and the encrypted image and the label may be combined into a training sample. The model training end 101 may acquire a training sample set composed of such training samples and perform model training according to the training sample set to obtain a corresponding biometric recognition model.
It should be noted that the image processing end 100 may be implemented as a server or as a client. The client may be a terminal device, or software installed on a terminal device. The model training end 101 may be implemented as a server. A server may be a cloud platform, a single server, or a server cluster.
It should also be noted that, when the image processing end 100 is implemented as a server, the image processing end 100 and the model training end 101 may be the same server or different servers, which is not limited herein.
It should be understood that the numbers of image processing ends and model training ends shown in FIG. 1 are merely illustrative. There may be any number of image processing ends and model training ends, as required by the implementation.
The specific steps of the above method are described below with reference to specific examples.
Referring to fig. 2, a flow 200 of one embodiment of a privacy preserving image processing method is shown. The method is applied to the image processing terminal 100 shown in fig. 1, and comprises the following steps:
step 201, acquiring a mixed image corresponding to a biometric image to be encrypted, wherein the mixed image is obtained by weighted summation of the biometric image and a predetermined number of other images, the predetermined number of other images serve as noise images, and the sum of their weights is smaller than a preset threshold;
step 202, subtracting a predetermined image template from the mixed image to obtain an encrypted image corresponding to the biometric image to be encrypted.
Next, step 201 and step 202 will be described in detail.
With respect to step 201, the image processing side may perform this step in response to receiving an encryption request relating to a biometric image to be encrypted. The encryption request may be triggered manually or automatically, and is not limited herein.
For example, if the encryption request is triggered manually, the encryption request may be sent by an image manager through a terminal device, for example. If the encryption request is triggered automatically, the encryption request may be triggered by a timed task.
In step 201, the biometric image to be encrypted may include any one of the following: a face image, a fingerprint image, a finger vein image, an iris image, a voiceprint image, a palmprint image, and the like.
The predetermined number may be an integer greater than or equal to 1. It should be understood that the predetermined number may be set according to actual requirements, and is not specifically limited herein.
The weights may be numerical values within [0, 1]. The sum of the weights of the biometric image to be encrypted and the predetermined number of other images may be equal to 1. In addition, the preset threshold may be a value not greater than the weight of the biometric image to be encrypted. In an alternative implementation, the sum of the weights of the biometric image to be encrypted and the predetermined number of other images equals 1, the preset threshold is smaller than the weight of the biometric image to be encrypted, and the sum of the weights of the predetermined number of other images is smaller than the preset threshold. It should be understood that the preset threshold may be set according to actual requirements and is not specifically limited herein.
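As an illustrative sketch only (not part of the claimed method), the weight constraint described above can be expressed in a few lines of Python; the function name, the use of random sampling, and the concrete default threshold are assumptions made for illustration:

    import random

    def assign_weights(num_noise_images, preset_threshold=0.15):
        # Assumes num_noise_images >= 1. Split a total noise weight that stays
        # strictly below the preset threshold among the predetermined number
        # of other (noise) images.
        noise_total = random.random() * preset_threshold
        parts = [random.random() for _ in range(num_noise_images)]
        noise_weights = [p * noise_total / sum(parts) for p in parts]
        # The weight of the biometric image to be encrypted is the remainder,
        # so all weights lie in [0, 1], they sum to 1, and the sum of the
        # noise weights remains smaller than the preset threshold.
        biometric_weight = 1.0 - noise_total
        return biometric_weight, noise_weights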
It should be noted that, by using the predetermined number of other images as noise images and limiting the sum of their weights to be smaller than the preset threshold, the mixed image becomes a noise-added version of the biometric image to be encrypted. The mixed image appears visually blurred, although its contours generally remain visible. The encrypted image subsequently obtained by subtracting the predetermined image template from the mixed image is even more blurred, yet still algorithmically recognizable, so the encrypted image can be used for subsequent model training.
In practice, the image processing end may adopt various methods to obtain a mixed image corresponding to the biometric image to be encrypted.
For example, if a mixed image corresponding to a biometric image to be encrypted has been generated in advance, the image processing side may acquire the mixed image from a specific storage location.
For another example, if the mixed image corresponding to the biometric image to be encrypted has not been generated in advance, the image processing end may perform the following mixed image acquisition steps:
a, randomly selecting a predetermined number of images from the stored images other than the biometric image to be encrypted;
b, assigning weights to the biometric image to be encrypted and the selected predetermined number of images;
c, performing weighted summation on the biometric image to be encrypted and the selected predetermined number of images according to the assigned weights to obtain the mixed image.
In step a, the selected predetermined number of images may belong to the same category as the biometric image to be encrypted, or to different categories. Using images of the same category to add noise to the biometric image to be encrypted makes the resulting mixed image more blurred than using images of different categories, i.e., the noise-adding effect is better; preferably, therefore, the selected predetermined number of images and the biometric image to be encrypted are images of the same category.
In addition, the selected predetermined number of images and the biometric images to be encrypted may be from the same image set or from different image sets, and are not limited specifically herein.
In step b, as an optional implementation, the biometric image to be encrypted may correspond to a preset weight, and weights may be randomly generated for the selected predetermined number of images based on a preset weight generation algorithm.
As another optional implementation, a predetermined weight set may be acquired, and weights may be assigned to the biometric image to be encrypted and the selected predetermined number of images according to the weight set. Specifically, the number of weights in the weight set equals the total number of images, i.e., the biometric image to be encrypted plus the selected predetermined number of images. The weight set may include a first weight to be assigned to the biometric image to be encrypted and second weights to be assigned to the predetermined number of images. When assigning the weights, the first weight in the weight set may be assigned directly to the biometric image to be encrypted, and the second weights in the weight set may be randomly assigned to the selected predetermined number of images.
In step c, the weighted summation of the biometric image to be encrypted and the selected predetermined number of images may be performed in several ways.
As an example, the pixel values of the biometric image to be encrypted and of the selected predetermined number of images may be weighted and summed according to the assigned weights.
As another example, feature vectors may be extracted from the biometric image to be encrypted and from each of the selected predetermined number of images, and the extracted feature vectors may then be weighted and summed according to the assigned weights.
As yet another example, image features obtained by further processing the feature vectors of the biometric image to be encrypted and of the selected predetermined number of images may be acquired, and the acquired image features may then be weighted and summed according to the assigned weights.
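As a minimal sketch of the first example above (pixel-level weighted summation), assuming the images are same-sized NumPy arrays; the function name is an illustrative assumption:

    import numpy as np

    def blend_images(biometric_image, noise_images, biometric_weight, noise_weights):
        # Weighted summation of pixel values: the biometric image to be
        # encrypted plus the predetermined number of other images that act
        # as noise images.
        mixed = biometric_weight * biometric_image.astype(np.float32)
        for image, weight in zip(noise_images, noise_weights):
            mixed += weight * image.astype(np.float32)
        return mixed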
In step 202, the predetermined image template may be the average of a plurality of images. The plurality of images may belong to the same category as the biometric image to be encrypted or to a different category, which is not specifically limited herein. Applying an image template generated from images of the same category as the biometric image to be encrypted makes the resulting encrypted image look more blurred and gives a better encryption effect; preferably, therefore, the plurality of images and the biometric image to be encrypted are images of the same category.
In addition, the predetermined image template may be generated by the image processing end or by another service end, which is not specifically limited herein. If the predetermined image template is generated by the image processing end, the image processing end may perform the following image template acquisition operations before step 201: acquire a plurality of images; calculate the average of the plurality of images; and determine the average as the image template.
The plurality of images may be obtained by random sampling; this embodiment does not specifically limit the manner of acquiring the plurality of images.
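Continuing the sketch above, the image template acquisition operation and the subtraction of step 202 might look as follows; the function names and the commented example weights (taken from FIG. 3a) are illustrative assumptions only:

    import numpy as np

    def compute_image_template(images):
        # The predetermined image template is the average of a plurality of
        # images, preferably of the same category as the biometric image.
        return np.mean(np.stack([img.astype(np.float32) for img in images]), axis=0)

    def encrypt_image(mixed_image, image_template):
        # Step 202: subtract the predetermined image template from the mixed
        # image to obtain the encrypted image.
        return mixed_image - image_template

    # Example with the weights of FIG. 3a (S1 = 0.9, S2 = 0.1, threshold 0.15):
    # mixed = blend_images(P1, [P2], 0.9, [0.1])
    # encrypted = encrypt_image(mixed, compute_image_template(face_images))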
Fig. 3a shows a schematic diagram of the encrypted face image acquisition process. This process applies to the case where the biometric image to be encrypted is a face image, the predetermined number is 1, and the image template is a face image template. Specifically, fig. 3a shows a face image P1 to be encrypted; a noise face image P2 used to add noise to P1; a weight S1 assigned to P1 and a weight S2 assigned to P2; a mixed image M obtained by weighted summation of P1 and P2 according to the assigned weights; and an encrypted face image obtained by subtracting a predetermined face image template from the mixed image M. Here S2 is smaller than the preset threshold; for example, S2 may be 0.1 and the preset threshold may be 0.15, in which case S1 may be 0.9.
Fig. 3b shows a further schematic diagram of the encrypted face image acquisition process. This process applies to the case where the biometric image to be encrypted is a face image, the predetermined number is 2, and the predetermined image template is a face image template. Specifically, fig. 3b shows a face image P1 to be encrypted; noise face images P2 and P3 used to add noise to P1; weights S1, S2, and S3 assigned to P1, P2, and P3 respectively; a mixed image M obtained by weighted summation of P1, P2, and P3 according to the assigned weights; and an encrypted face image obtained by subtracting a predetermined face image template from the mixed image M. The sum of S2 and S3 is smaller than the preset threshold; for example, S2 and S3 may both be 0.1 and the preset threshold may be 0.25, in which case S1 may be 0.8.
It should be understood that the weights S1, S2, S3 and the preset threshold values shown above are only used as an exemplary illustration and do not limit the present specification in any way.
In some optional implementations of this embodiment, the image processing end may store the generated encrypted image in a specific storage location, or send the generated encrypted image to the model training end (e.g., the model training end 101 shown in fig. 1) so that the model training end performs model training based on the encrypted image.
In some optional implementations of this embodiment, the biometric image to be encrypted may have a corresponding label. The label may be used, for example, to identify the user to whom the biometric image belongs. After obtaining the encrypted image corresponding to the biometric image, the image processing end may generate a training sample comprising the encrypted image and the label. Such training samples can be used for training a biometric recognition model. The image processing end may then store the generated training samples in a specific storage location, or send them to the model training end, so that the model training end performs a model training operation based on a training sample set formed from the received training samples.
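A trivial sketch of this training-sample generation is given below; the record structure shown is an assumption made for illustration, and any serializable format would do:

    def make_training_sample(encrypted_image, label):
        # A training sample pairs the encrypted image with the label of the
        # biometric image to be encrypted (e.g., an identifier of the user).
        return {"image": encrypted_image, "label": label}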
With continuing reference to fig. 3c, fig. 3c is a schematic diagram of an application scenario of the privacy-preserving image processing method according to the present embodiment. In the application scenario, the biometric image to be encrypted includes a face image. The predetermined number is 1. The predetermined image template is a face image template. The image processing side is a server side, and the image processing side can execute the process 200 in real time.
When an image manager needs to encrypt a face image of a user, the image manager may perform an encryption operation related to the face image to be encrypted through a terminal device used by the image manager as indicated by reference numeral 301. Thereafter, the terminal device may send an encryption request related to the face image to be encrypted to the image processing terminal in response to the encryption operation, as indicated by reference numeral 302. Then, as indicated by reference numeral 303, the image processing side may obtain, according to the request, a mixed image corresponding to the face image to be encrypted by using any one of the above-described obtaining means for obtaining a mixed image, where the mixed image is obtained by performing weighted summation on the face image and 1 other face image, and the 1 other face image serves as a noise image and has a weight smaller than a preset threshold (for example, 0.15). Then, the image processing end may subtract a predetermined face image template from the mixed image as shown by reference numeral 304 to obtain an encrypted face image corresponding to the face image to be encrypted.
Thereafter, the image processing side may perform, for example, any of:
storing the encrypted face image to a specific storage position;
sending the encrypted face image to a model training end;
generating a training sample comprising an encrypted face image and a label of the face image to be encrypted corresponding to the encrypted face image, and storing the training sample to a specific storage position;
generating a training sample comprising the encrypted face image and a label of the face image to be encrypted corresponding to the encrypted face image, and sending the training sample to a model training end.
It should be noted that obtaining a third, new face image (e.g., the mixed image) by weighted summation of two face images (e.g., 1 face image to be encrypted and 1 face image serving as a noise image) is equivalent, in the calculation process, to a single linear equation in two unknowns. The original two face images cannot be recovered by analysis from the computed new face image alone. Applying this irreversible computation to face image encryption makes the face image more private and non-recoverable, without affecting the ability to train a good face recognition model on the encrypted face images.
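This argument can be written out per pixel, using the weights of FIG. 3a as notation:

\[
M = S_1 P_1 + S_2 P_2, \qquad S_1 + S_2 = 1, \quad S_2 < \text{preset threshold},
\]

which is a single linear equation in the two unknowns \(P_1\) and \(P_2\). Even when \(M\) and the weights are known, one equation cannot determine two unknowns, so the original face images cannot be uniquely recovered from the mixed image or from the encrypted image derived from it.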
It should be noted that, those skilled in the art can derive other types of biometric image embodiments by analogy from the contents related to the face image shown in fig. 3c, and this specification does not list them one by one.
With the privacy-preserving image processing method provided by this embodiment, a mixed image corresponding to a biometric image to be encrypted is acquired, where the mixed image is obtained by weighted summation of the biometric image and a predetermined number of other images, the other images serve as noise images, and the sum of their weights is smaller than a preset threshold; a predetermined image template is then subtracted from the mixed image to obtain an encrypted image corresponding to the biometric image. In this way, the user's private data is protected. In addition, because the predetermined number of other images serve as noise images and the sum of their weights is limited to be smaller than the preset threshold, the encrypted image remains algorithmically recognizable and can therefore be used for subsequent model training.
With further reference to fig. 4, a flow 400 of yet another embodiment of a privacy preserving image processing method is shown. The method is applied to the image processing terminal 100 shown in fig. 1, and comprises the following steps:
step 401, taking any one image in a predetermined biometric image set as the biometric image to be encrypted, and randomly selecting a predetermined number of images from the images in the biometric image set other than the biometric image to be encrypted;
step 402, acquiring a predetermined weight set;
step 403, assigning weights to the biometric image to be encrypted and the selected predetermined number of images according to the weight set;
step 404, performing weighted summation on the biometric image to be encrypted and the selected predetermined number of images according to the assigned weights to obtain a mixed image;
step 405, subtracting a predetermined image template from the mixed image to obtain an encrypted image corresponding to the biometric image to be encrypted;
step 406, generating a training sample comprising the encrypted image and the label of the biometric image to be encrypted.
Specifically, in step 401, the biometric image set may be a set of biometric images of the same category. The biometric images may include any of the following: face images, fingerprint images, finger vein images, iris images, voiceprint images, palmprint images, and the like.
For the explanation of steps 402-406, reference may be made to the related description in the corresponding embodiment of fig. 2, and the description is not repeated here.
Based on the content of step 401, it can be seen that the biometric image to be encrypted and the predetermined number of other images used for generating the mixed image in the present embodiment are images of the same category, and both are derived from the same image set, i.e., the predetermined biometric image set as described above.
As can be seen from fig. 4, compared with the embodiment corresponding to fig. 2, the privacy-preserving image processing method provided by this embodiment details the steps of acquiring the mixed image and adds the step of generating a training sample comprising the encrypted image and the label of the biometric image to be encrypted. The scheme described in this embodiment therefore enriches the functions of the image processing end. In addition, because the image to be encrypted and the predetermined number of other images used to generate the mixed image are restricted to come from the same biometric image set, the resulting mixed image is more blurred, the resulting encrypted image achieves a better encryption effect, and the protection of the user's private data is strengthened.
With further reference to FIG. 5, a flow 500 of one embodiment of a privacy preserving model training method is illustrated. The model training method is applied to a model training terminal 101 shown in fig. 1, and comprises the following steps:
step 501, acquiring a training sample set, wherein a training sample comprises an encrypted image and a label, the encrypted image is obtained by subtracting a predetermined image template from a corresponding mixed image, the mixed image is obtained by weighted summation of a corresponding biometric image to be encrypted and a predetermined number of other images, and the label is the label of the biometric image;
step 502, training a deep learning model to be trained according to the training sample set to obtain a biometric recognition model.
Next, step 501 and step 502 will be described in detail.
In step 501, the encrypted images in the training samples may be generated by using the privacy-preserving image processing method provided by the embodiments shown in fig. 2 or fig. 4, respectively. The biometric image to be encrypted corresponding to the encrypted image may be any one of a predetermined set of biometric images. The predetermined number of other images, which function as noise images, may be images other than the biometric image in the set of biometric images.
The biometric image to be encrypted may include any one of the following images: face images, fingerprint images, finger vein images, iris images, voice print images, palm print images, and the like.
The image template may be the average of a plurality of images. The plurality of images may belong to the same category as the biometric image to be encrypted or to a different category. Applying an image template generated from images of the same category as the biometric image to be encrypted makes the resulting encrypted image look more blurred and gives a better encryption effect; preferably, therefore, the plurality of images and the biometric image to be encrypted are images of the same category.
It should be noted that step 501 may be performed by the model training end in response to receiving the model training request. The model training request may be triggered manually or automatically, and is not limited in this respect.
For example, if the model training request is triggered by a human, the model training request may be sent by a model administrator through a terminal device. If the model training request is automatically triggered, the model training request may be automatically sent by an image processing side (e.g., the image processing side 100 shown in fig. 1) after the image processing flow (e.g., the flow 200 or the flow 400) is executed.
In practice, different methods may be employed to obtain the training sample set.
For example, if the training samples in the training sample set have been generated in advance, the training sample set formed by at least one training sample may be obtained from a specific storage location.
For another example, if the training samples in the training sample set are not generated in advance, the encrypted image set and the labels of the biometric images to be encrypted corresponding to the encrypted images in the encrypted image set may be obtained first. The encrypted images in the encrypted image set are generated by adopting the image processing method for protecting privacy, which is described in the embodiment corresponding to fig. 2. Then, for any one encrypted image in the encrypted image set, the encrypted image and the label of the biometric image to be encrypted corresponding to the encrypted image can be combined into a training sample. Thus, the formed set of training samples can be used as a training sample set.
In step 502, the deep learning model to be trained may be a model that has not been trained or whose training has not been completed. It should be understood that the deep learning model in this embodiment may be any type of deep learning model, including but not limited to a Convolutional Neural Network (CNN), a Recurrent Neural Network (RNN), a Deep Belief Network (DBN), and so on.
When the deep learning model to be trained is trained, different training modes can be adopted.
As an example, the encrypted images included in the training samples of the training sample set may be used as input and the labels corresponding to the input encrypted images as the expected output, and the deep learning model to be trained may be trained accordingly to obtain the corresponding biometric recognition model.
As another example, for each training sample in the training sample set, image features may first be extracted from the encrypted image of the training sample. The image features of the encrypted images may then be used as input and the labels corresponding to the input image features as the expected output, and the deep learning model to be trained may be trained accordingly to obtain the corresponding biometric recognition model. It should be understood that the label corresponding to an input image feature is the label in the training sample containing the encrypted image from which the image feature was extracted.
The following exemplifies a model training process by taking an image feature of an encrypted image included in a training sample as an input.
For example, the following model training operation may be performed: sequentially input the image features of the encrypted images included in the training samples of the training sample set into the deep learning model to be trained, obtaining a prediction result for the image features of each encrypted image; compare the labels corresponding to the image features with the prediction results to obtain the prediction accuracy of the deep learning model after this round of training; determine whether the prediction accuracy is greater than a preset accuracy threshold; and, if it is greater than the preset accuracy threshold, use the deep learning model after this round of training as the biometric recognition model.
If the prediction accuracy is not greater than the preset accuracy threshold, the deep learning model after this round of training may be used as the deep learning model to be trained and the model training operation may be continued.
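A minimal sketch of this model training operation is given below; it assumes the image features and labels are provided as tensors by a data loader and uses PyTorch purely as an illustrative framework, since the embodiments do not prescribe any particular framework, loss function, or optimizer:

    import torch
    from torch import nn

    def train_recognition_model(model, data_loader, accuracy_threshold=0.95,
                                max_rounds=100, lr=1e-3):
        # Train the deep learning model to be trained on encrypted images (or
        # their image features) and labels, and stop once the prediction
        # accuracy of a training round exceeds the preset accuracy threshold.
        criterion = nn.CrossEntropyLoss()
        optimizer = torch.optim.Adam(model.parameters(), lr=lr)
        for _ in range(max_rounds):
            correct, total = 0, 0
            for features, labels in data_loader:
                optimizer.zero_grad()
                predictions = model(features)
                loss = criterion(predictions, labels)
                loss.backward()
                optimizer.step()
                correct += (predictions.argmax(dim=1) == labels).sum().item()
                total += labels.numel()
            if correct / total > accuracy_threshold:
                # Use the model after this round as the biometric recognition model.
                break
        return model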
It should be noted that the encrypted images are visually blurred. Because the biometric recognition model is trained on such a training sample set, it becomes better at recognizing blurred biometric images, so the biometric recognition model of this embodiment has higher recognition performance.
Therefore, when the trained biometric recognition model is subsequently applied to various biometric recognition scenarios, such as, but not limited to, payment, account login, and security check scenarios based on biometric recognition, recognition efficiency and recognition accuracy can be improved.
With continuing reference to fig. 6, fig. 6 is a schematic diagram of an application scenario of the privacy-preserving model training method according to the present embodiment. In the application scenario, the biometric image to be encrypted is a face image. The encrypted image is an encrypted face image. The model training side is the server side, and the model training side can execute the process 500 as described above in real time.
Further, training samples have been generated in advance and stored in the storage location L1. The training sample includes an encrypted face image and a label. The label is the label of the face image to be encrypted corresponding to the encrypted face image. In addition, the deep learning model to be trained is stored in the storage location L2.
When a face recognition model needs to be trained, a model administrator may execute a model training operation through a terminal device used by the model administrator as shown by reference numeral 601. Wherein the model training operation points to storage location L1 and storage location L2. Thereafter, the terminal device may send a model training request to the model training terminal in response to the model training operation, as indicated by reference numeral 602. The model training request may include addresses of storage location L1 and storage location L2, among others. Then, the model training end may obtain a training sample set formed by a plurality of training samples from the storage location L1 and obtain the deep learning model to be trained from the storage location L2 according to the model training request, as shown by reference numeral 603. Then, as shown by reference numeral 604, the model training end may train the deep learning model to be trained according to the training sample set to obtain the face recognition model.
In the application scenario, the encrypted face image appears blurred visually. The face recognition model is obtained by training based on a training sample set formed by a plurality of training samples containing encrypted face images, so that the face recognition model can better recognize fuzzy face images, and the face recognition model has higher recognition performance.
Therefore, when the trained face recognition model is applied to various face recognition scenes, such as but not limited to a payment scene, an account login scene, a security check scene and the like based on face recognition, the recognition efficiency and the recognition accuracy can be improved.
It should be noted that, those skilled in the art can derive other types of biometric image embodiments by analogy from the contents related to the face image shown in fig. 6, and this specification does not list them one by one.
With the privacy-preserving model training method provided by this embodiment, a training sample set is acquired, where a training sample comprises an encrypted image and a label, the encrypted image is obtained by subtracting a predetermined image template from a corresponding mixed image, the mixed image is obtained by weighted summation of the corresponding biometric image to be encrypted and a predetermined number of other images, and the label is the label of the biometric image; a deep learning model to be trained is then trained according to the training sample set to obtain a biometric recognition model, thereby implementing model training based on encrypted images.
With further reference to fig. 7, as an implementation of the methods shown in some of the above figures, the present specification provides an embodiment of an image processing apparatus for protecting privacy, which corresponds to the method embodiment shown in fig. 2, and which can be applied to the image processing terminal 100 shown in fig. 1.
As shown in fig. 7, the privacy-preserving image processing apparatus 700 of the present embodiment may include: an acquisition unit 701 and an encryption unit 702. Wherein the acquiring unit 701 is configured to acquire a mixed image corresponding to a biometric image to be encrypted, wherein the mixed image is obtained by weighted summation of the biometric image and a predetermined number of other images, the predetermined number of other images serving as noise images, and a sum of weights thereof is smaller than a preset threshold; the encryption unit 702 is configured to subtract a predetermined image template from the mixed image to obtain an encrypted image corresponding to the biometric image.
In this embodiment, specific processing of the obtaining unit 701 and the encrypting unit 702 and technical effects brought by the processing can refer to related descriptions of step 201 and step 202 in the corresponding embodiment of fig. 2, which are not described herein again.
In some optional implementations of this embodiment, the biometric image may have a corresponding label; and the apparatus 700 may further include: a generating unit (not shown in the figure) configured to generate a training sample comprising the encrypted image and the label.
In some optional implementations of this embodiment, the biometric image may include any one of the following: face images, fingerprint images, finger vein images, iris images, voice print images, palm print images, and the like.
In some optional implementations of the embodiment, the biometric image to be encrypted is any one of a predetermined set of biometric images, and the predetermined number of other images are images other than the biometric image to be encrypted in the set of biometric images.
In some optional implementation manners of this embodiment, the obtaining unit 701 may include: a selecting subunit (not shown in the figure) configured to randomly select a predetermined number of images from the images other than the biometric image to be encrypted in the biometric image set; an assigning subunit (not shown in the figure) configured to assign weights to the biometric image to be encrypted and the selected predetermined number of images; and an acquiring subunit (not shown in the figure) configured to perform weighted summation on the biometric image to be encrypted and the selected predetermined number of images according to the assigned weights, so as to obtain a mixed image.
In some optional implementations of this embodiment, the allocation subunit may be further configured to: acquiring a preset weight set; and according to the weight set, distributing weights to the biological characteristic image to be encrypted and the selected preset number of images.
In some optional implementations of this embodiment, the apparatus 700 may further include: an image template determination unit (not shown in the figure) configured to acquire a plurality of images; calculating an average value between the plurality of images; the average is determined as the image template.
In some optional implementations of the embodiment, the plurality of images and the biometric image to be encrypted are images of the same category.
With the privacy-preserving image processing apparatus provided by this embodiment, the acquisition unit acquires a mixed image corresponding to a biometric image to be encrypted, where the mixed image is obtained by weighted summation of the biometric image and a predetermined number of other images, the other images serve as noise images, and the sum of their weights is smaller than a preset threshold; the encryption unit then subtracts a predetermined image template from the mixed image to obtain an encrypted image corresponding to the biometric image. In this way, the user's private data is protected. In addition, because the predetermined number of other images serve as noise images and the sum of their weights is limited to be smaller than the preset threshold, the encrypted image remains algorithmically recognizable and can therefore be used for subsequent model training.
With further reference to fig. 8, as an implementation of the methods shown in some of the above figures, the present specification provides an embodiment of a privacy-preserving model training apparatus, which corresponds to the embodiment of the method shown in fig. 5, and which can be applied to the model training terminal 101 shown in fig. 1.
As shown in fig. 8, the privacy protection model training apparatus 800 of the present embodiment includes: an acquisition unit 801 and a training unit 802. The obtaining unit 801 is configured to obtain a training sample set, where a training sample includes an encrypted image and a label, the encrypted image is obtained by subtracting a predetermined image template from a corresponding mixed image, the mixed image is obtained by weighted summation of a biometric image to be encrypted and a predetermined number of other images, and the label is a label of the biometric image; the training unit 802 is configured to train the deep learning model to be trained according to the training sample set, resulting in a biometric recognition model.
In this embodiment, the detailed processing of the obtaining unit 801 and the training unit 802 and the technical effects brought by the detailed processing may refer to the related descriptions of step 501 and step 502 in the corresponding embodiment of fig. 5, which are not repeated herein.
In some optional implementations of the embodiment, the biometric image to be encrypted may be any one of a predetermined set of biometric images, and the predetermined number of other images may be images other than the biometric image to be encrypted in the set of biometric images.
In some optional implementations of this embodiment, the biometric image to be encrypted may include any one of the following: face images, fingerprint images, finger vein images, iris images, voice print images, palm print images, and the like.
In some optional implementations of this embodiment, the image template may be an average value between a plurality of images.
In some optional implementations of the embodiment, the plurality of images and the biometric image to be encrypted may be images of the same category.
With the privacy-preserving model training apparatus provided by this embodiment, the acquisition unit acquires a training sample set, where a training sample comprises an encrypted image and a label, the encrypted image is obtained by subtracting a predetermined image template from a corresponding mixed image, the mixed image is obtained by weighted summation of the corresponding biometric image to be encrypted and a predetermined number of other images, and the label is the label of the biometric image; the training unit then trains a deep learning model to be trained according to the training sample set to obtain a biometric recognition model, thereby implementing model training based on encrypted images.
Embodiments of the present specification further provide a computer-readable storage medium on which a computer program is stored, wherein when the computer program is executed in a computer, the computer is caused to execute the privacy-preserving image processing method or the privacy-preserving model training method respectively shown in the above method embodiments.
The embodiment of the present specification further provides a computing device, which includes a memory and a processor, where the memory stores executable code, and the processor executes the executable code to implement the privacy-protecting image processing method or the privacy-protecting model training method shown in the above method embodiments, respectively.
The present specification also provides a computer program product, which when executed on a data processing apparatus, causes the data processing apparatus to implement the privacy-preserving image processing method or the privacy-preserving model training method respectively shown in the above method embodiments.
Those skilled in the art will recognize that, in one or more of the examples described above, the functions described in the various embodiments disclosed in this specification may be implemented in hardware, software, firmware, or any combination thereof. When implemented in software, these functions may be stored on, or transmitted as one or more instructions or code over, a computer-readable medium.
In some cases, the actions or steps recited in the claims may be performed in a different order than in the embodiments and still achieve desirable results. In addition, the processes depicted in the accompanying figures do not necessarily require the particular order shown, or sequential order, to achieve desirable results. In some embodiments, multitasking and parallel processing may also be possible or may be advantageous.
The foregoing further describes in detail the objects, technical solutions, and advantages of the embodiments disclosed in this specification. It should be understood that the above are only specific embodiments of what is disclosed in this specification and are not intended to limit its scope; any modification, equivalent substitution, improvement, or the like made on the basis of the technical solutions of the embodiments disclosed in this specification shall fall within that scope.

Claims (16)

1. An image processing method for protecting privacy, comprising:
acquiring a mixed image corresponding to a biometric image to be encrypted, wherein the mixed image is obtained by weighted summation of the biometric image and a predetermined number of other images, the predetermined number of other images serve as noise images, the sum of the weights of the other images is smaller than a preset threshold, the preset threshold is not greater than the weight of the biometric image, and the biometric image has a corresponding label;
subtracting a predetermined image template from the mixed image to obtain an encrypted image corresponding to the biometric image;
and composing a training sample from the encrypted image and the label, for training to obtain a biometric recognition model.
2. The method of claim 1, wherein the biometric image comprises any one of the following: a face image, a fingerprint image, a finger vein image, or an iris image.
3. The method according to claim 1, wherein the biometric image to be encrypted is any one of a predetermined set of biometric images, and the predetermined number of other images are images of the set of biometric images other than the biometric image to be encrypted.
4. The method of claim 3, wherein the acquiring a mixed image corresponding to a biometric image to be encrypted comprises:
randomly selecting a predetermined number of images from the images in the set of biometric images other than the biometric image to be encrypted;
assigning weights to the biometric image to be encrypted and the selected predetermined number of images;
and performing, according to the assigned weights, weighted summation on the biometric image to be encrypted and the selected predetermined number of images to obtain the mixed image.
5. The method according to claim 4, wherein the assigning weights to the biometric image to be encrypted and the selected predetermined number of images comprises:
acquiring a preset weight set;
and assigning, according to the weight set, weights to the biometric image to be encrypted and the selected predetermined number of images.
6. The method according to one of claims 1 to 5, wherein, prior to the acquiring a mixed image corresponding to a biometric image to be encrypted, the method further comprises:
acquiring a plurality of images;
calculating the average of the plurality of images;
determining the average as the image template.
7. The method according to claim 6, wherein the plurality of images and the biometric image to be encrypted are images of the same category.
8. A privacy preserving model training method, comprising:
obtaining a training sample set, wherein a training sample comprises an encrypted image and a label, the encrypted image is obtained by subtracting a predetermined image template from a corresponding mixed image, the mixed image is obtained by weighted summation of a corresponding biometric image to be encrypted and a predetermined number of other images, the label is the label of the biometric image, the sum of the weights of the predetermined number of other images is smaller than a preset threshold, and the preset threshold is not greater than the weight of the biometric image;
and training a deep learning model to be trained according to the training sample set to obtain a biometric recognition model.
9. The method according to claim 8, wherein the biometric image to be encrypted is any one of a predetermined set of biometric images, and the predetermined number of other images are images of the set of biometric images other than the biometric image to be encrypted.
10. The method according to claim 8, wherein the biometric image to be encrypted comprises any one of the following: a face image, a fingerprint image, a finger vein image, or an iris image.
11. The method according to one of claims 8 to 10, wherein the image template is the average of a plurality of images.
12. The method according to claim 11, wherein the plurality of images and the biometric image to be encrypted are images of the same category.
13. An image processing apparatus that protects privacy, comprising:
an acquisition unit configured to acquire a mixed image corresponding to a biometric image to be encrypted, wherein the mixed image is obtained by weighted summation of the biometric image and a predetermined number of other images, the predetermined number of other images serving as noise images, a sum of weights of the other images being smaller than a preset threshold value, the preset threshold value being not greater than a weight of the biometric image, the biometric image having a corresponding label;
an encryption unit configured to subtract a predetermined image template from the mixed image to obtain an encrypted image corresponding to the biometric image;
and a generating unit configured to compose a training sample from the encrypted image and the label, for training to obtain a biometric recognition model.
14. A privacy preserving model training apparatus comprising:
an obtaining unit configured to obtain a training sample set, wherein a training sample comprises an encrypted image and a label, the encrypted image is obtained by subtracting a predetermined image template from a corresponding mixed image, the mixed image is obtained by weighted summation of a corresponding biometric image to be encrypted and a predetermined number of other images, the label is the label of the biometric image, the sum of the weights of the predetermined number of other images is smaller than a preset threshold, and the preset threshold is not greater than the weight of the biometric image;
and a training unit configured to train a deep learning model to be trained according to the training sample set to obtain a biometric recognition model.
15. A computer-readable storage medium, on which a computer program is stored, wherein the computer program, when executed in a computer, causes the computer to perform the method of any of claims 1-12.
16. A computing device comprising a memory and a processor, wherein the memory has stored therein executable code that when executed by the processor implements the method of any of claims 1-12.
CN202010442259.7A 2020-05-22 2020-05-22 Image processing method and device for protecting privacy Active CN111539008B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010442259.7A CN111539008B (en) 2020-05-22 2020-05-22 Image processing method and device for protecting privacy


Publications (2)

Publication Number Publication Date
CN111539008A CN111539008A (en) 2020-08-14
CN111539008B true CN111539008B (en) 2023-04-11

Family

ID=71978163

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010442259.7A Active CN111539008B (en) 2020-05-22 2020-05-22 Image processing method and device for protecting privacy

Country Status (1)

Country Link
CN (1) CN111539008B (en)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112035834A (en) * 2020-08-28 2020-12-04 北京推想科技有限公司 Countermeasure training method and device, and application method and device of neural network model
CN113312668A (en) * 2021-06-08 2021-08-27 支付宝(杭州)信息技术有限公司 Image identification method, device and equipment based on privacy protection


Family Cites Families (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2006318103A (en) * 2005-05-11 2006-11-24 Fuji Photo Film Co Ltd Image processor, image processing method, and program
US10643319B2 (en) * 2018-01-30 2020-05-05 Canon Medical Systems Corporation Apparatus and method for context-oriented blending of reconstructed images

Patent Citations (13)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101231748A (en) * 2007-12-18 2008-07-30 西安电子科技大学 Image anastomosing method based on singular value decomposition
CN102592269A (en) * 2012-01-11 2012-07-18 西安电子科技大学 Compressive-sensing-based object reconstruction method
WO2015070678A1 (en) * 2013-11-15 2015-05-21 北京奇虎科技有限公司 Image recognition method, and method and device for mining main body information about image
CN106203533A (en) * 2016-07-26 2016-12-07 厦门大学 The degree of depth based on combined training study face verification method
CN106303233A (en) * 2016-08-08 2017-01-04 西安电子科技大学 A kind of video method for secret protection merged based on expression
CN107451546A (en) * 2017-07-14 2017-12-08 广东欧珀移动通信有限公司 Iris identification method and related product
CN108520181A (en) * 2018-03-26 2018-09-11 联想(北京)有限公司 data model training method and device
CN108846817A (en) * 2018-06-22 2018-11-20 Oppo(重庆)智能科技有限公司 Image processing method, device and mobile terminal
CN110827204A (en) * 2018-08-14 2020-02-21 阿里巴巴集团控股有限公司 Image processing method and device and electronic equipment
CN110610469A (en) * 2019-08-01 2019-12-24 长沙理工大学 Face image privacy protection method, device, equipment and storage medium
CN110633650A (en) * 2019-08-22 2019-12-31 首都师范大学 Convolutional neural network face recognition method and device based on privacy protection
CN110648289A (en) * 2019-08-29 2020-01-03 腾讯科技(深圳)有限公司 Image denoising processing method and device
CN111159773A (en) * 2020-04-01 2020-05-15 支付宝(杭州)信息技术有限公司 Picture classification method and device for protecting data privacy

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Multi-image encryption based on image recombination and bit scrambling; Guo Yuan et al.; Acta Photonica Sinica; 2020-04-30 (No. 04); full text *

Also Published As

Publication number Publication date
CN111539008A (en) 2020-08-14


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
REG Reference to a national code

Ref country code: HK

Ref legal event code: DE

Ref document number: 40035488

Country of ref document: HK

GR01 Patent grant
TA01 Transfer of patent application right

Effective date of registration: 20230329

Address after: 801-10, Section B, 8th floor, 556 Xixi Road, Xihu District, Hangzhou City, Zhejiang Province 310000

Applicant after: Ant financial (Hangzhou) Network Technology Co.,Ltd.

Address before: 310000 801-11 section B, 8th floor, 556 Xixi Road, Xihu District, Hangzhou City, Zhejiang Province

Applicant before: Alipay (Hangzhou) Information Technology Co.,Ltd.