Detailed Description
The present specification will be described in further detail with reference to the accompanying drawings and examples. It is to be understood that the specific embodiments described herein are merely illustrative of the invention and not restrictive of it. The described embodiments are only a subset of the possible embodiments, not all of them. All other embodiments obtained by a person skilled in the art, based on the embodiments in the present specification and without any inventive step, fall within the scope of the present application.
It should be noted that, for convenience of description, only the portions related to the related invention are shown in the drawings. The embodiments and features of the embodiments in the present description may be combined with each other without conflict.
As previously mentioned, many products (e.g., financial, payment, and security products) now make extensive use of user-authorized biometric images. For example, existing face-scanning payment products use large quantities of face images authorized by users. In addition, large numbers of biometric images are needed to train biometric recognition algorithms.
Since a biometric image is private data of the user, a suitable encryption algorithm is needed to protect that data by encrypting the image, while ensuring that the encrypted biometric image remains usable for subsequent model training.
Based on this, some embodiments of the present specification disclose a privacy-preserving image processing method and a privacy-preserving model training method. In particular, FIG. 1 illustrates an exemplary system architecture diagram suitable for use with these embodiments.
As shown in FIG. 1, an image processing terminal 100 and a model training terminal 101 are shown. The image processing terminal 100 may be configured to provide an image processing service, and further to provide an image encryption service. The model training terminal 101 may be used to provide a model training service. Specifically, the image processing terminal 100 may encrypt a biometric image to be encrypted to obtain an encrypted image corresponding to the biometric image. The biometric image may have a corresponding label, and the encrypted image and the label may be combined into a training sample. The model training terminal 101 may obtain a training sample set composed of such training samples, and perform model training according to the training sample set to obtain a corresponding biometric recognition model.
It should be noted that the image processing terminal 100 may be implemented as a server or as a client. The client may be a terminal device, or software installed on a terminal device. The model training terminal 101 may be implemented as a server. The server may be a cloud platform, a single server, or a server cluster.
It should also be noted that, when the image processing terminal 100 is implemented as a server, the image processing terminal 100 and the model training terminal 101 may be the same server or different servers; this is not limited herein.
It should be understood that the number of image processing terminals and model training terminals shown in FIG. 1 is merely illustrative. There may be any number of image processing terminals and model training terminals, as desired for the implementation.
The specific steps of the above method are described below with reference to specific examples.
Referring to FIG. 2, a flow 200 of one embodiment of a privacy-preserving image processing method is shown. The method is applied to the image processing terminal 100 shown in FIG. 1, and includes the following steps:
step 201, acquiring a mixed image corresponding to a biometric image to be encrypted, wherein the mixed image is obtained by performing weighted summation on the biometric image and a predetermined number of other images, the predetermined number of other images serve as noise images, and the sum of their weights is smaller than a preset threshold;
and 202, subtracting a predetermined image template from the mixed image to obtain an encrypted image corresponding to the biometric image to be encrypted.
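The two steps above can be sketched as follows. This is a minimal illustration assuming grayscale images stored as flat lists of pixel values; all function and variable names are illustrative, not from the specification.

```python
def mix_images(target, noise_images, noise_weight_sum=0.1):
    """Step 201: weighted summation of the target image and the noise images.

    The noise images share noise_weight_sum equally (kept below the preset
    threshold), and the target image takes the remaining weight, so that
    all weights sum to 1.
    """
    w_noise = noise_weight_sum / len(noise_images)
    w_target = 1.0 - noise_weight_sum
    mixed = [w_target * p for p in target]
    for img in noise_images:
        mixed = [m + w_noise * p for m, p in zip(mixed, img)]
    return mixed

def encrypt(mixed, template):
    """Step 202: subtract the predetermined image template pixel-wise."""
    return [m - t for m, t in zip(mixed, template)]

# Toy 2x2 "images" as flat pixel lists.
target = [200, 180, 160, 140]       # biometric image to be encrypted
noise = [[100, 100, 100, 100]]      # one noise image (predetermined number = 1)
template = [50, 50, 50, 50]         # predetermined image template

mixed = mix_images(target, noise, noise_weight_sum=0.1)
encrypted = encrypt(mixed, template)
```

With a noise weight of 0.1 and a target weight of 0.9, the first pixel of the mixed image is 0.9·200 + 0.1·100 = 190, and subtracting the template gives 140 in the encrypted image.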
Next, step 201 and step 202 will be described in detail.
With respect to step 201, the image processing side may perform this step in response to receiving an encryption request relating to a biometric image to be encrypted. The encryption request may be triggered manually or automatically, and is not limited herein.
For example, if the encryption request is triggered manually, the encryption request may be sent by an image manager through a terminal device, for example. If the encryption request is triggered automatically, the encryption request may be triggered by a timed task.
In step 201, the biometric image to be encrypted may include any one of the following: a face image, a fingerprint image, a finger vein image, an iris image, a voiceprint image, a palm print image, etc.
The predetermined number may be an integer greater than or equal to 1. It should be understood that the predetermined number may be set according to actual requirements, and is not specifically limited herein.
The weights may be numerical values within [0, 1]. The sum of the weights of the biometric image to be encrypted and the predetermined number of other images may be equal to 1. In addition, the preset threshold may be a value not greater than the weight of the biometric image to be encrypted. In an alternative implementation, the sum of the weights of the biometric image to be encrypted and the predetermined number of other images equals 1, the preset threshold is smaller than the weight of the biometric image to be encrypted, and the sum of the weights of the predetermined number of other images is smaller than the preset threshold. It should be understood that the preset threshold may be set according to actual requirements, and is not specifically limited herein.
It should be noted that, by using a predetermined number of other images as noise images and limiting the sum of their weights to be smaller than a preset threshold, the mixed image becomes a noise-added version of the biometric image to be encrypted. The mixed image appears visually blurry, but its contours generally remain visible. The subsequent encrypted image, obtained by subtracting the predetermined image template from the mixed image, is blurrier still, yet remains algorithmically recognizable, so that it can be used for subsequent model training.
In practice, the image processing end may adopt various methods to obtain a mixed image corresponding to the biometric image to be encrypted.
For example, if a mixed image corresponding to a biometric image to be encrypted has been generated in advance, the image processing side may acquire the mixed image from a specific storage location.
For another example, if the mixed image corresponding to the biometric image to be encrypted is not generated in advance, the image processing side may perform the following mixed image acquisition steps:
a, randomly selecting a predetermined number of images from stored images other than the biometric image to be encrypted;
b, assigning weights to the biometric image to be encrypted and the selected predetermined number of images;
and c, performing weighted summation on the biometric image to be encrypted and the selected predetermined number of images according to the assigned weights to obtain the mixed image.
In step a, the selected predetermined number of images and the biometric image to be encrypted may be images of the same category or of different categories. Adding noise with images of the same category yields a blurrier mixed image, and thus a better noise-adding effect, than adding noise with images of a different category. Preferably, therefore, the selected predetermined number of images and the biometric image to be encrypted are images of the same category.
In addition, the selected predetermined number of images and the biometric images to be encrypted may be from the same image set or from different image sets, and are not limited specifically herein.
In step b, as an optional implementation manner, the biometric image to be encrypted may correspond to a preset weight. For the selected predetermined number of images, weights may be randomly generated for the selected predetermined number of images based on a preset weight generation algorithm.
As another alternative implementation, a predetermined weight set may be obtained, and weights assigned to the biometric image to be encrypted and the selected predetermined number of images according to the weight set. Specifically, the number of weights in the weight set equals the total number of images, i.e., the biometric image to be encrypted plus the selected predetermined number of images. The weight set may include a first weight for the biometric image to be encrypted and second weights for the predetermined number of images. When assigning weights, the first weight in the weight set may be assigned directly to the biometric image to be encrypted, and the second weights may be assigned randomly to the selected predetermined number of images.
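The weight-set implementation above can be sketched as follows, assuming the first entry of the weight set is the first weight and the remaining entries are the second weights; the names are illustrative only.

```python
import random

def assign_weights(weight_set, num_noise_images):
    """Split a predetermined weight set into (first weight, second weights).

    weight_set[0] goes directly to the biometric image to be encrypted;
    the remaining entries are shuffled so that their assignment to the
    selected noise images is random.
    """
    first = weight_set[0]
    seconds = list(weight_set[1:])
    assert len(seconds) == num_noise_images
    random.shuffle(seconds)
    return first, seconds

# Example: a weight set for one target image and two noise images.
first, seconds = assign_weights([0.8, 0.1, 0.1], num_noise_images=2)
```

Note that the weight set itself already encodes the constraints of this embodiment: the second weights sum to 0.2, below a preset threshold such as 0.25, and all weights sum to 1.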
In step c, a plurality of weighted summation methods may be used for the biometric image to be encrypted and the selected predetermined number of images.
As an example, the biometric image to be encrypted and the pixel characteristics of the selected predetermined number of images may be weighted and summed according to the assigned weights.
As still another example, feature vectors extracted from a biometric image to be encrypted and a predetermined number of selected images, respectively, may be acquired, and then the acquired feature vectors may be subjected to weighted summation according to assigned weights.
As another example, image features obtained by respectively processing the biometric image to be encrypted and the selected predetermined number of images may be acquired, and the acquired image features then weighted and summed according to the assigned weights.
In step 202, the predetermined image template may be an average over a plurality of images. The plurality of images and the biometric image to be encrypted may be of the same category or of different categories, which is not specifically limited herein. When the image template is generated from images belonging to the same category as the biometric image to be encrypted, the resulting encrypted image looks blurrier and the encryption effect is better; therefore, the plurality of images and the biometric image to be encrypted are preferably images of the same category.
In addition, the predetermined image template may be generated by the image processing side, or may be generated by other service sides, and is not limited specifically herein. If the predetermined image template is generated by the image processing side, before step 201, the image processing side may perform the following image template acquisition operations: acquiring a plurality of images; calculating an average value between the plurality of images; the average is determined as the image template.
Wherein, the plurality of images can be randomly sampled. The present embodiment does not specifically limit the manner of acquiring the plurality of images.
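The image template acquisition operation above can be sketched as a pixel-wise mean over the sampled images (again with flat pixel lists and illustrative names):

```python
def build_image_template(images):
    """Return the pixel-wise average of a list of equally sized images,
    which serves as the predetermined image template."""
    n = len(images)
    return [sum(pixels) / n for pixels in zip(*images)]

# Three toy sampled images with three pixels each.
template = build_image_template([
    [10, 20, 30],
    [30, 40, 50],
    [20, 30, 40],
])
```

Because the template is an average, it captures the features common to the sampled category (e.g., a generic face layout), so subtracting it removes shared structure from the mixed image.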
FIG. 3a shows a schematic diagram of the encrypted face image acquisition process. The acquisition process applies when the biometric image to be encrypted is a face image, the predetermined number is 1, and the image template is a face image template. Specifically, FIG. 3a shows a face image P1 to be encrypted, a noise face image P2 for adding noise to the face image P1, a weight S1 assigned to the face image P1, a weight S2 assigned to the noise face image P2, a mixed image M obtained by performing weighted summation on the face image P1 and the noise face image P2 according to the assigned weights, and an encrypted face image obtained by subtracting a predetermined face image template from the mixed image M. Here, S2 is smaller than the preset threshold; for example, S2 may be 0.1 and the preset threshold 0.15. Further, S1 may be 0.9.
FIG. 3b shows yet another schematic diagram of the encrypted face image acquisition process. The acquisition process applies when the biometric image to be encrypted is a face image, the predetermined number is 2, and the predetermined image template is a face image template. Specifically, FIG. 3b shows a face image P1 to be encrypted, noise face images P2 and P3 for adding noise to the face image P1, weights S1, S2 and S3 assigned to P1, P2 and P3, respectively, a mixed image M obtained by performing weighted summation on P1, P2 and P3 according to the assigned weights, and an encrypted face image obtained by subtracting a predetermined face image template from the mixed image M. The sum of S2 and S3 is smaller than the preset threshold; for example, S2 and S3 may both be 0.1 and the preset threshold 0.25. Further, S1 may be 0.8.
It should be understood that the weights S1, S2, S3 and the preset threshold values shown above are only used as an exemplary illustration and do not limit the present specification in any way.
In some optional implementations of this embodiment, the image processing side may store the generated encrypted image in a specific storage location. Alternatively, the generated encrypted images may be sent to a model training end (e.g., the model training end 101 shown in fig. 1) for the model training end to perform model training based on the encrypted images.
In some optional implementations of the present embodiment, the biometric image to be encrypted may have a corresponding tag. The tag may be used, for example, to characterize the user to which the biometric image pertains. After obtaining the encrypted image corresponding to the biometric image, the image processing end may generate a training sample including the encrypted image and the label. The training samples can be used for training of a biometric recognition model. Thereafter, the image processing side may store the generated training samples to a specific storage location. Or, the generated training samples may be sent to the model training terminal, so that the model training terminal performs a model training operation based on a training sample set formed by the received training samples.
With continuing reference to FIG. 3c, FIG. 3c is a schematic diagram of an application scenario of the privacy-preserving image processing method according to this embodiment. In this application scenario, the biometric image to be encrypted includes a face image, the predetermined number is 1, and the predetermined image template is a face image template. The image processing terminal is a server, and may execute the flow 200 in real time.
When an image manager needs to encrypt a face image of a user, the image manager may perform an encryption operation related to the face image to be encrypted through the terminal device used by the image manager, as indicated by reference numeral 301. Thereafter, as indicated by reference numeral 302, the terminal device may send an encryption request related to the face image to be encrypted to the image processing terminal in response to the encryption operation. Then, as indicated by reference numeral 303, the image processing terminal may, according to the request, obtain a mixed image corresponding to the face image to be encrypted using any of the acquisition methods described above, where the mixed image is obtained by performing weighted summation on the face image and 1 other face image, and the 1 other face image serves as a noise image with a weight smaller than a preset threshold (for example, 0.15). Then, as indicated by reference numeral 304, the image processing terminal may subtract a predetermined face image template from the mixed image to obtain an encrypted face image corresponding to the face image to be encrypted.
Thereafter, the image processing side may perform, for example, any of:
storing the encrypted face image to a specific storage position;
sending the encrypted face image to a model training end;
generating a training sample comprising an encrypted face image and a label of the face image to be encrypted corresponding to the encrypted face image, and storing the training sample to a specific storage position;
generating a training sample comprising the encrypted face image and a label of the face image to be encrypted corresponding to the encrypted face image, and sending the training sample to a model training end.
It should be noted that performing weighted summation on two face images (e.g., 1 face image to be encrypted and 1 face image serving as a noise image) to obtain a third, new face image (e.g., the mixed image) amounts, per pixel, to a single linear equation in two unknowns. The original two face images therefore cannot be recovered from the computed new image alone. Applying this irreversible computation to face image encryption makes the face image more private and the encryption irreversible, without preventing a good face recognition model from being trained on the encrypted face images.
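In equation form (using the symbols of FIG. 3a), the per-pixel mixing computation is a single linear equation in two unknowns:

```latex
M = S_1 \, P_1 + S_2 \, P_2 , \qquad S_1 + S_2 = 1 .
```

Given only $M$ (and even given $S_1$ and $S_2$), the pair $(P_1, P_2)$ is underdetermined: for any candidate $P_1$ there exists a $P_2$ producing the same $M$. This is the sense in which the computation is irreversible.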
It should be noted that, those skilled in the art can derive other types of biometric image embodiments by analogy from the contents related to the face image shown in fig. 3c, and this specification does not list them one by one.
The privacy-preserving image processing method provided by this embodiment acquires a mixed image corresponding to a biometric image to be encrypted, where the mixed image is obtained by performing weighted summation on the biometric image and a predetermined number of other images, the predetermined number of other images serve as noise images and the sum of their weights is smaller than a preset threshold, and then subtracts a predetermined image template from the mixed image to obtain an encrypted image corresponding to the biometric image. Because the predetermined number of other images are used as noise images and the sum of their weights is limited to be smaller than the preset threshold, the encrypted image remains algorithmically recognizable and can be used for subsequent model training.
With further reference to FIG. 4, a flow 400 of yet another embodiment of the privacy-preserving image processing method is shown. The method is applied to the image processing terminal 100 shown in FIG. 1, and includes the following steps:
step 401, taking any one image in a predetermined biometric image set as the biometric image to be encrypted, and randomly selecting a predetermined number of images from the images in the biometric image set other than the biometric image to be encrypted;
step 402, acquiring a predetermined weight set;
step 403, assigning weights to the biometric image to be encrypted and the selected predetermined number of images according to the weight set;
step 404, performing weighted summation on the biometric image to be encrypted and the selected predetermined number of images according to the assigned weights to obtain a mixed image;
step 405, subtracting a predetermined image template from the mixed image to obtain an encrypted image corresponding to the biometric image to be encrypted;
and step 406, generating a training sample including the encrypted image and the label of the biometric image to be encrypted.
Specifically, in step 401, the biometric image set may be a set formed of biometric images of the same category. The biometric image may include any of the following: a face image, a fingerprint image, a finger vein image, an iris image, a voiceprint image, a palm print image, etc.
For the explanation of steps 402-406, reference may be made to the related description in the corresponding embodiment of fig. 2, and the description is not repeated here.
Based on the content of step 401, it can be seen that the biometric image to be encrypted and the predetermined number of other images used for generating the mixed image in the present embodiment are images of the same category, and both are derived from the same image set, i.e., the predetermined biometric image set as described above.
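Under these assumptions, flow 400 (steps 401-406) can be sketched end to end as follows. Images are flat pixel lists paired with labels; all names are illustrative, not from the specification.

```python
import random

def make_training_sample(image_set, index, weight_set, template, k):
    """Produce one training sample from a biometric image set.

    image_set : list of (pixels, label) pairs, all of the same category.
    index     : which entry serves as the biometric image to be encrypted.
    weight_set: [first_weight, second_weight_1, ..., second_weight_k].
    template  : predetermined image template (flat pixel list).
    k         : predetermined number of noise images.
    """
    pixels, label = image_set[index]
    # Step 401: randomly select k other images from the same set.
    others = [img for i, (img, _) in enumerate(image_set) if i != index]
    noise_images = random.sample(others, k)
    # Steps 402-403: first weight to the target, shuffled second weights
    # to the noise images.
    first, seconds = weight_set[0], list(weight_set[1:])
    random.shuffle(seconds)
    # Step 404: weighted summation -> mixed image.
    mixed = [first * p for p in pixels]
    for w, img in zip(seconds, noise_images):
        mixed = [m + w * p for m, p in zip(mixed, img)]
    # Step 405: subtract the predetermined image template.
    encrypted = [m - t for m, t in zip(mixed, template)]
    # Step 406: training sample = (encrypted image, original label).
    return encrypted, label

image_set = [
    ([200, 180], "user_a"),
    ([100, 100], "user_b"),
    ([100, 100], "user_c"),
]
sample = make_training_sample(image_set, 0, [0.8, 0.1, 0.1], [50, 50], k=2)
```

In this toy run the two noise images are identical, so the result is deterministic: the first encrypted pixel is 0.8·200 + 0.1·100 + 0.1·100 − 50 = 130, and the sample keeps the label of the image to be encrypted.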
As can be seen from FIG. 4, compared with the embodiment corresponding to FIG. 2, the privacy-preserving image processing method provided by this embodiment highlights the steps of expanding the acquisition method of the mixed image and of generating a training sample including the encrypted image and the label of the biometric image to be encrypted. The scheme described in this embodiment can therefore enrich the functions of the image processing terminal. In addition, because the image to be encrypted and the predetermined number of other images used to generate the mixed image are required to come from the same biometric image set, the resulting mixed image is blurrier, the resulting encrypted image has a better encryption effect, and the protection of the user's private data is strengthened.
With further reference to FIG. 5, a flow 500 of one embodiment of a privacy-preserving model training method is illustrated. The model training method is applied to the model training terminal 101 shown in FIG. 1, and includes the following steps:
step 501, acquiring a training sample set, wherein each training sample includes an encrypted image and a label, the encrypted image is obtained by subtracting a predetermined image template from a corresponding mixed image, the mixed image is obtained by performing weighted summation on a corresponding biometric image to be encrypted and a predetermined number of other images, and the label is the label of the biometric image;
and 502, training the deep learning model to be trained according to the training sample set to obtain a biological feature recognition model.
Next, step 501 and step 502 will be described in detail.
In step 501, the encrypted images in the training samples may be generated by using the privacy-preserving image processing method provided by the embodiment shown in FIG. 2 or FIG. 4. The biometric image to be encrypted corresponding to an encrypted image may be any image in a predetermined biometric image set, and the predetermined number of other images, which serve as noise images, may be images in the biometric image set other than that biometric image.
The biometric image to be encrypted may include any one of the following: a face image, a fingerprint image, a finger vein image, an iris image, a voiceprint image, a palm print image, etc.
The image template may be an average over a plurality of images. The plurality of images and the biometric image to be encrypted may be of the same category or of different categories. When the image template is generated from images belonging to the same category as the biometric image to be encrypted, the resulting encrypted image looks blurrier and the encryption effect is better; therefore, the plurality of images and the biometric image to be encrypted are preferably images of the same category.
It should be noted that step 501 may be performed by the model training end in response to receiving the model training request. The model training request may be triggered manually or automatically, and is not limited in this respect.
For example, if the model training request is triggered by a human, the model training request may be sent by a model administrator through a terminal device. If the model training request is automatically triggered, the model training request may be automatically sent by an image processing side (e.g., the image processing side 100 shown in fig. 1) after the image processing flow (e.g., the flow 200 or the flow 400) is executed.
In practice, different methods may be employed to obtain the training sample set.
For example, if the training samples in the training sample set have been generated in advance, the training sample set formed by at least one training sample may be obtained from a specific storage location.
For another example, if the training samples in the training sample set are not generated in advance, the encrypted image set and the labels of the biometric images to be encrypted corresponding to the encrypted images in the encrypted image set may be obtained first. The encrypted images in the encrypted image set are generated by adopting the image processing method for protecting privacy, which is described in the embodiment corresponding to fig. 2. Then, for any one encrypted image in the encrypted image set, the encrypted image and the label of the biometric image to be encrypted corresponding to the encrypted image can be combined into a training sample. Thus, the formed set of training samples can be used as a training sample set.
In step 502, the deep learning model to be trained may be a model that has not been trained, or one whose training has not been completed. It should be understood that the deep learning model in this embodiment may be of any type, including but not limited to a Convolutional Neural Network (CNN), a Recurrent Neural Network (RNN), a Deep Belief Network (DBN), and so on.
When the deep learning model to be trained is trained, different training modes can be adopted.
As an example, the encrypted images included in the training samples in the training sample set may be used as input, the labels corresponding to the input encrypted images may be used as output, and the deep learning model to be trained is trained to obtain the corresponding biometric feature recognition model.
As another example, for a training sample in a set of training samples, image features may be extracted from an encrypted image of the training sample. Then, the image features of the encrypted images included in the training samples in the training sample set can be used as input, the labels corresponding to the input image features can be used as output, and the deep learning model to be trained is trained to obtain the corresponding biological feature recognition model. It should be understood that the label corresponding to the input image feature is a label in the training sample in which the encrypted image to which the image feature belongs is located.
The following exemplifies a model training process by taking an image feature of an encrypted image included in a training sample as an input.
For example, the following model training operation may be performed: sequentially inputting the image features of the encrypted images included in the training samples of the training sample set into the deep learning model to be trained, to obtain a prediction result for each input; comparing the labels corresponding to those image features with the prediction results to obtain the prediction accuracy of the deep learning model after this round of training; determining whether the prediction accuracy is greater than a preset accuracy threshold; and if so, taking the deep learning model after this round of training as the biometric recognition model.
In addition, if the prediction accuracy is not greater than the preset accuracy threshold, the deep learning model after the current round of training can be used as the deep learning model to be trained, and the model training operation is continuously executed.
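The training operation above can be sketched as follows. A trivial nearest-centroid classifier stands in for the deep learning model (which the specification does not pin down), so only the loop structure - predict, compute accuracy, compare against the threshold, repeat or stop - mirrors the described procedure; all names are illustrative.

```python
def mean(vectors):
    """Element-wise mean of equally sized feature vectors."""
    n = len(vectors)
    return [sum(xs) / n for xs in zip(*vectors)]

def train_until_accurate(samples, accuracy_threshold, max_rounds=10):
    """samples: list of (feature_vector, label) pairs.

    Each round "trains" the stand-in model (recomputes one centroid per
    label), measures prediction accuracy on the samples, and stops once
    the accuracy exceeds the preset threshold.
    """
    for _ in range(max_rounds):
        labels = sorted({lbl for _, lbl in samples})
        centroids = {
            lbl: mean([f for f, l in samples if l == lbl]) for lbl in labels
        }

        def predict(features):
            # Nearest centroid by squared Euclidean distance.
            return min(
                centroids,
                key=lambda lbl: sum((a - b) ** 2
                                    for a, b in zip(features, centroids[lbl])),
            )

        # Compare predictions with labels to get this round's accuracy.
        correct = sum(predict(f) == lbl for f, lbl in samples)
        accuracy = correct / len(samples)
        if accuracy > accuracy_threshold:
            return centroids, accuracy
    # Threshold never exceeded: return the last round's model anyway.
    return centroids, accuracy

samples = [([0.0, 0.0], "a"), ([0.1, 0.0], "a"),
           ([1.0, 1.0], "b"), ([0.9, 1.0], "b")]
model, acc = train_until_accurate(samples, accuracy_threshold=0.9)
```

In a real implementation the centroid update would be replaced by a gradient step on the deep learning model, and the accuracy would be measured on a held-out set rather than on the training samples themselves.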
Note that the encrypted images are visually blurry. Because the biometric recognition model is trained on a training sample set formed of such images, it learns to recognize blurry biometric images well, so the biometric recognition model in this embodiment has higher recognition performance.
Therefore, when the trained biological feature recognition model is subsequently applied to various biological feature recognition scenes, such as but not limited to a payment scene, an account login scene, a security check scene and the like based on biological feature recognition, the recognition efficiency and the recognition accuracy can be improved.
With continuing reference to FIG. 6, FIG. 6 is a schematic diagram of an application scenario of the privacy-preserving model training method according to this embodiment. In this application scenario, the biometric image to be encrypted is a face image, and the encrypted image is an encrypted face image. The model training terminal is a server, and may execute the flow 500 described above in real time.
Further, training samples have been generated in advance and stored in the storage location L1. The training sample includes an encrypted face image and a label. The label is the label of the face image to be encrypted corresponding to the encrypted face image. In addition, the deep learning model to be trained is stored in the storage location L2.
When a face recognition model needs to be trained, a model administrator may execute a model training operation through a terminal device used by the model administrator as shown by reference numeral 601. Wherein the model training operation points to storage location L1 and storage location L2. Thereafter, the terminal device may send a model training request to the model training terminal in response to the model training operation, as indicated by reference numeral 602. The model training request may include addresses of storage location L1 and storage location L2, among others. Then, the model training end may obtain a training sample set formed by a plurality of training samples from the storage location L1 and obtain the deep learning model to be trained from the storage location L2 according to the model training request, as shown by reference numeral 603. Then, as shown by reference numeral 604, the model training end may train the deep learning model to be trained according to the training sample set to obtain the face recognition model.
In the application scenario, the encrypted face image appears blurred visually. The face recognition model is obtained by training based on a training sample set formed by a plurality of training samples containing encrypted face images, so that the face recognition model can better recognize fuzzy face images, and the face recognition model has higher recognition performance.
Therefore, when the trained face recognition model is applied to various face recognition scenarios based on face recognition, such as, but not limited to, payment, account login, and security check scenarios, both recognition efficiency and recognition accuracy can be improved.
It should be noted that those skilled in the art can derive embodiments for other types of biometric images by analogy from the description of the face image shown in fig. 6; such embodiments are not enumerated here one by one.
According to the privacy-preserving model training method described above, a training sample set is obtained, where each training sample includes an encrypted image and a label: the encrypted image is obtained by subtracting a predetermined image template from a corresponding mixed image, the mixed image is obtained by weighted summation of a corresponding biometric image to be encrypted and a predetermined number of other images, and the label is the label of that biometric image. The deep learning model to be trained is then trained according to the training sample set to obtain a biometric recognition model, thereby realizing model training based on encrypted images.
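For illustration only, the encryption step summarized above can be sketched in Python/NumPy as follows. The function name, the default threshold value, and the choice of giving the image to be encrypted the remaining weight (so that all weights sum to 1) are assumptions of this sketch, not fixed by the present specification:

```python
import numpy as np

def encrypt_image(image, noise_images, noise_weights, template, threshold=0.5):
    """Illustrative sketch: weighted-sum the image with noise images,
    then subtract the predetermined image template.

    Assumption: the image to be encrypted receives the remaining weight,
    so all weights sum to 1 and the true image dominates the mix.
    """
    noise_weights = np.asarray(noise_weights, dtype=float)
    if noise_weights.sum() >= threshold:
        raise ValueError("noise weights must sum to less than the preset threshold")
    # Weighted summation of the biometric image and the noise images.
    mixed = (1.0 - noise_weights.sum()) * np.asarray(image, dtype=float)
    for w, noise in zip(noise_weights, noise_images):
        mixed = mixed + w * np.asarray(noise, dtype=float)
    # Subtracting the image template yields the encrypted image.
    return mixed - template
```

Under these assumptions, encrypting a constant image of ones with one noise image of twos at weight 0.25 and a template of 0.5 yields an encrypted image of 0.75 everywhere.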
With further reference to fig. 7, as an implementation of the methods shown in some of the above figures, the present specification provides an embodiment of an image processing apparatus for protecting privacy, which corresponds to the method embodiment shown in fig. 2, and which can be applied to the image processing terminal 100 shown in fig. 1.
As shown in fig. 7, the privacy-preserving image processing apparatus 700 of the present embodiment may include an obtaining unit 701 and an encrypting unit 702. The obtaining unit 701 is configured to acquire a mixed image corresponding to a biometric image to be encrypted, where the mixed image is obtained by weighted summation of the biometric image and a predetermined number of other images, the predetermined number of other images serve as noise images, and the sum of their weights is smaller than a preset threshold. The encrypting unit 702 is configured to subtract a predetermined image template from the mixed image to obtain an encrypted image corresponding to the biometric image.
In this embodiment, for the specific processing of the obtaining unit 701 and the encrypting unit 702 and the technical effects thereof, reference may be made to the descriptions of step 201 and step 202 in the embodiment corresponding to fig. 2, which are not repeated here.
In some optional implementations of this embodiment, the biometric image may have a corresponding label; and the apparatus 700 may further include: a generating unit (not shown in the figure) configured to generate a training sample comprising the encrypted image and the label.
In some optional implementations of this embodiment, the biometric image may include any one of the following: face images, fingerprint images, finger vein images, iris images, voiceprint images, palm print images, and the like.
In some optional implementations of the embodiment, the biometric image to be encrypted is any one of a predetermined set of biometric images, and the predetermined number of other images are images in the biometric image set other than the biometric image to be encrypted.
In some optional implementations of this embodiment, the obtaining unit 701 may include: a selecting subunit (not shown in the figure) configured to randomly select a predetermined number of images from the images in the biometric image set other than the biometric image to be encrypted; an assigning subunit (not shown in the figure) configured to assign weights to the biometric image to be encrypted and the selected predetermined number of images; and an acquiring subunit (not shown in the figure) configured to perform weighted summation on the biometric image to be encrypted and the selected predetermined number of images according to the assigned weights, so as to obtain the mixed image.
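The three subunits above can be sketched together as one routine. In this illustrative sketch, the noise-weight budget `noise_weight_total`, the Dirichlet split of that budget among the selected images, and the fixed random seed are all assumptions introduced for the example, not requirements of this specification:

```python
import numpy as np

rng = np.random.default_rng(0)

def make_mixed_image(index, image_set, num_noise=3, noise_weight_total=0.3):
    """Illustrative sketch of the obtaining unit: randomly select
    `num_noise` other images from `image_set`, assign weights, and
    form the weighted sum with the image to be encrypted.
    """
    # Selecting subunit: choose images other than the one at `index`.
    others = [i for i in range(len(image_set)) if i != index]
    chosen = rng.choice(others, size=num_noise, replace=False)
    # Assigning subunit: split the noise budget randomly among the
    # selected images; the image to be encrypted gets the remainder.
    parts = rng.dirichlet(np.ones(num_noise)) * noise_weight_total
    # Acquiring subunit: weighted summation per the assigned weights.
    mixed = (1.0 - noise_weight_total) * np.asarray(image_set[index], dtype=float)
    for w, j in zip(parts, chosen):
        mixed = mixed + w * np.asarray(image_set[j], dtype=float)
    return mixed
```

Keeping `noise_weight_total` below the preset threshold ensures the image to be encrypted dominates the mix.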
In some optional implementations of this embodiment, the assigning subunit may be further configured to: acquire a preset weight set; and assign weights to the biometric image to be encrypted and the selected predetermined number of images according to the weight set.
In some optional implementations of this embodiment, the apparatus 700 may further include: an image template determination unit (not shown in the figure) configured to acquire a plurality of images, calculate an average value of the plurality of images, and determine the average value as the image template.
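The image template determination unit reduces to a pixel-wise mean; a minimal sketch (the function name is illustrative):

```python
import numpy as np

def compute_image_template(images):
    """Image template = pixel-wise average of a plurality of images,
    ideally of the same category as the image to be encrypted."""
    stacked = np.stack([np.asarray(img, dtype=float) for img in images])
    return stacked.mean(axis=0)
```

For example, averaging an all-zeros image with an all-twos image yields a template of ones.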
In some optional implementations of the embodiment, the plurality of images and the biometric image to be encrypted are images of the same category.
In the privacy-preserving image processing apparatus provided by this embodiment, the obtaining unit obtains a mixed image corresponding to a biometric image to be encrypted, where the mixed image is obtained by weighted summation of the biometric image and a predetermined number of other images, the predetermined number of other images serve as noise images, and the sum of their weights is smaller than a preset threshold; the encrypting unit then subtracts a predetermined image template from the mixed image to obtain an encrypted image corresponding to the biometric image. Because the predetermined number of other images act as noise images and the sum of their weights is limited to be smaller than the preset threshold, the encrypted image remains algorithmically recognizable and can therefore be used for subsequent model training.
With further reference to fig. 8, as an implementation of the methods shown in some of the above figures, the present specification provides an embodiment of a privacy-preserving model training apparatus, which corresponds to the embodiment of the method shown in fig. 5, and which can be applied to the model training terminal 101 shown in fig. 1.
As shown in fig. 8, the privacy-preserving model training apparatus 800 of the present embodiment includes an obtaining unit 801 and a training unit 802. The obtaining unit 801 is configured to obtain a training sample set, where each training sample includes an encrypted image and a label, the encrypted image is obtained by subtracting a predetermined image template from a corresponding mixed image, the mixed image is obtained by weighted summation of a corresponding biometric image to be encrypted and a predetermined number of other images, and the label is the label of the biometric image. The training unit 802 is configured to train the deep learning model to be trained according to the training sample set to obtain a biometric recognition model.
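To illustrate the data flow from training samples to a recognition model, the training unit can be sketched with a minimal stand-in model. A real deployment would train a deep learning model; here a linear softmax classifier trained by SGD is used purely for illustration, and all names and hyperparameters are assumptions of this sketch:

```python
import numpy as np

def train_recognition_model(samples, num_classes, lr=0.1, epochs=50):
    """Illustrative stand-in for the training unit: fit a linear
    softmax classifier on flattened (encrypted_image, label) pairs.
    """
    dim = samples[0][0].size
    W = np.zeros((dim, num_classes))
    for _ in range(epochs):
        for image, label in samples:
            x = np.asarray(image, dtype=float).ravel()
            logits = x @ W
            p = np.exp(logits - logits.max())
            p /= p.sum()
            p[label] -= 1.0           # gradient of cross-entropy w.r.t. logits
            W -= lr * np.outer(x, p)  # SGD update
    return W
```

A trained model classifies a flattened encrypted image via `argmax(x @ W)`; the same interface sketch would apply to a deep model trained on the same sample set.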
In this embodiment, for the detailed processing of the obtaining unit 801 and the training unit 802 and the technical effects thereof, reference may be made to the descriptions of step 501 and step 502 in the embodiment corresponding to fig. 5, which are not repeated here.
In some optional implementations of the embodiment, the biometric image to be encrypted may be any one of a predetermined set of biometric images, and the predetermined number of other images may be images other than the biometric image to be encrypted in the set of biometric images.
In some optional implementations of this embodiment, the biometric image to be encrypted may include any one of the following: face images, fingerprint images, finger vein images, iris images, voiceprint images, palm print images, and the like.
In some optional implementations of this embodiment, the image template may be an average value of a plurality of images.
In some optional implementations of the embodiment, the plurality of images and the biometric image to be encrypted may be images of the same category.
In the privacy-preserving model training apparatus provided by this embodiment, the obtaining unit obtains a training sample set, where each training sample includes an encrypted image and a label: the encrypted image is obtained by subtracting a predetermined image template from a corresponding mixed image, the mixed image is obtained by weighted summation of a corresponding biometric image to be encrypted and a predetermined number of other images, and the label is the label of that biometric image. The training unit then trains the deep learning model to be trained according to the training sample set to obtain a biometric recognition model, thereby realizing model training based on encrypted images.
Embodiments of the present specification further provide a computer-readable storage medium on which a computer program is stored, wherein when the computer program is executed in a computer, the computer is caused to execute the privacy-preserving image processing method or the privacy-preserving model training method respectively shown in the above method embodiments.
The embodiments of the present specification further provide a computing device, which includes a memory and a processor, where the memory stores executable code, and the processor executes the executable code to implement the privacy-preserving image processing method or the privacy-preserving model training method respectively shown in the above method embodiments.
The present specification also provides a computer program product, which when executed on a data processing apparatus, causes the data processing apparatus to implement the privacy-preserving image processing method or the privacy-preserving model training method respectively shown in the above method embodiments.
Those skilled in the art will recognize that, in one or more of the examples described above, the functionality described in the various embodiments disclosed herein may be implemented in hardware, software, firmware, or any combination thereof. When implemented in software, the functions may be stored on, or transmitted over, a computer-readable medium as one or more instructions or code.
In some cases, the actions or steps recited in the claims may be performed in a different order than in the embodiments and still achieve desirable results. In addition, the processes depicted in the accompanying figures do not necessarily require the particular order shown, or sequential order, to achieve desirable results. In some embodiments, multitasking and parallel processing may also be possible or may be advantageous.
The above specific embodiments further describe in detail the objects, technical solutions, and advantages of the embodiments disclosed in the present specification. It should be understood that the foregoing are only specific embodiments of the disclosure in the present specification and are not intended to limit its scope; any modifications, equivalent substitutions, improvements, and the like made on the basis of the technical solutions of these embodiments shall fall within the scope of the embodiments disclosed in the present specification.