CN115563631A - Privacy-preserving federated learning method based on generative adversarial image transformation - Google Patents
Privacy-preserving federated learning method based on generative adversarial image transformation
- Publication number
- CN115563631A CN115563631A CN202211179790.5A CN202211179790A CN115563631A CN 115563631 A CN115563631 A CN 115563631A CN 202211179790 A CN202211179790 A CN 202211179790A CN 115563631 A CN115563631 A CN 115563631A
- Authority
- CN
- China
- Prior art keywords
- local
- image
- model
- batch normalization
- layer
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F21/00—Security arrangements for protecting computers, components thereof, programs or data against unauthorised activity
- G06F21/60—Protecting data
- G06F21/602—Providing cryptographic facilities or services
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F21/00—Security arrangements for protecting computers, components thereof, programs or data against unauthorised activity
- G06F21/60—Protecting data
- G06F21/62—Protecting access to data via a platform, e.g. using keys or access control rules
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/08—Learning methods
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/70—Arrangements for image or video recognition or understanding using pattern recognition or machine learning
- G06V10/764—Arrangements for image or video recognition or understanding using pattern recognition or machine learning using classification, e.g. of video objects
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/70—Arrangements for image or video recognition or understanding using pattern recognition or machine learning
- G06V10/82—Arrangements for image or video recognition or understanding using pattern recognition or machine learning using neural networks
Landscapes
- Engineering & Computer Science (AREA)
- Theoretical Computer Science (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Health & Medical Sciences (AREA)
- General Health & Medical Sciences (AREA)
- Software Systems (AREA)
- Evolutionary Computation (AREA)
- Artificial Intelligence (AREA)
- Computing Systems (AREA)
- General Engineering & Computer Science (AREA)
- Multimedia (AREA)
- Bioethics (AREA)
- Computer Vision & Pattern Recognition (AREA)
- Computer Hardware Design (AREA)
- Databases & Information Systems (AREA)
- Medical Informatics (AREA)
- Computer Security & Cryptography (AREA)
- Life Sciences & Earth Sciences (AREA)
- Biomedical Technology (AREA)
- Biophysics (AREA)
- Computational Linguistics (AREA)
- Data Mining & Analysis (AREA)
- Molecular Biology (AREA)
- Mathematical Physics (AREA)
- Image Analysis (AREA)
Abstract
The invention discloses a privacy-preserving federated learning method based on generative adversarial image transformation, comprising the following steps. Step 11: local users share the network structures of a generative adversarial image transformation model and a local model and initialize their parameters. Step 12: each local user trains the generative adversarial image transformation model on its local data set, transforms the original images of the local data set into encrypted images with the trained model, and keeps the image category labels. Step 13: the local model is trained and updated with the encrypted images and their corresponding category labels, and the encryption gradient of the trained local model is uploaded to the server. Step 14: the server aggregates the encryption gradients uploaded by the local users to update the global model. Step 15: each local user downloads the updated global model from the server and verifies with its local encrypted image set whether the accuracy of the global model exceeds a preset value; if it does, the privacy-preserving federated learning training process ends, otherwise steps 13 and 14 are repeated. The method achieves a good balance between the accuracy of the federated learning global model and the privacy protection of user data.
Description
Technical Field
The invention relates to the field of federated learning, and in particular to a privacy-preserving federated learning method based on generative adversarial image transformation.
Background
Federated learning allows multiple local users to jointly train a global model by sharing parameters with a server rather than the raw data in their local data sets, which protects the private information contained in local user data to a certain extent. However, existing gradient-based reconstruction attacks can reconstruct the original data in a local data set with high quality by analyzing the local model gradient shared as a parameter, posing a serious challenge to the privacy of local user data.
In view of the above, the present invention is particularly proposed.
Disclosure of Invention
The invention aims to provide a privacy-preserving federated learning method based on generative adversarial image transformation that prevents the original data in a local data set from being reconstructed through analysis of the local model gradient shared as a parameter, thereby protecting the privacy of local user data and solving the technical problems in the prior art.
The purpose of the invention is realized by the following technical scheme:
A privacy-preserving federated learning method based on generative adversarial image transformation comprises the following steps:
and step 16, ending the privacy-preserving federated learning training process.
Compared with the prior art, the privacy-preserving federated learning method based on generative adversarial image transformation has the following beneficial effects:
Local users share the network structures of the generative adversarial image transformation model and the local model, train the generative adversarial image transformation model on their local data sets, and transform the original images in the local data sets into encrypted images used for federated learning training. Because the encrypted images retain the training characteristics of the original images, the method strikes a good balance between the accuracy of the federated learning global model and the privacy protection of user data, and thereby addresses the privacy leakage of local users' image data during federated learning training.
Drawings
In order to illustrate the technical solutions of the embodiments of the present invention more clearly, the drawings used in the description of the embodiments are briefly introduced below. The drawings described below show only some embodiments of the present invention; those skilled in the art can obtain other drawings from them without creative effort.
Fig. 1 is a flowchart of a privacy-preserving federated learning method based on generative adversarial image transformation according to an embodiment of the present invention.
Fig. 2 is a schematic diagram of image privacy protection performance against a gradient-based reconstruction attack according to an embodiment of the present invention, where (a) shows the original images, from left to right a frog, a deer, a ship, a bird, a cat, and a car; (b) shows the image reconstructed from each original image when no privacy protection method is used; (c) shows the reconstruction under a privacy-preserving federated learning method based on block scrambling; (d) shows the reconstruction under a privacy-preserving federated learning method based on pixel transformation and channel scrambling; (e) shows the reconstruction under a privacy-preserving federated learning method based on an automatic transformation strategy; and (f) shows the reconstruction under the privacy-preserving federated learning method based on generative adversarial image transformation proposed by the invention.
Detailed Description
The technical scheme in the embodiments of the invention is described clearly and completely below in combination with the specific content of the invention. It should be understood that the described embodiments are merely examples of the invention and are not intended to limit the invention to the particular forms disclosed. All other embodiments that a person skilled in the art can derive from the embodiments of the present invention without creative effort fall within the protection scope of the present invention.
The terms that may be used herein are first described as follows:
the term "and/or" means that either or both can be achieved, for example, X and/or Y means that both cases include "X" or "Y" as well as three cases including "X and Y".
The terms "comprising," "including," "containing," "having," and other terms of similar meaning should be construed as non-exclusive inclusions. For example, "including a feature" (e.g., a material, component, ingredient, carrier, formulation, dimension, part, mechanism, device, step, method, reaction condition, processing condition, parameter, algorithm, signal, data, product, or article of manufacture) is to be construed as including not only the particular feature explicitly listed but also other features, known in the art, that are not explicitly listed.
The term "consisting of …" excludes any technical feature not explicitly listed. If used in a claim, the term closes the claim, so that it does not include technical features other than those explicitly listed, except for ordinary impurities associated with them. If the term appears in only one clause of a claim, it limits only the elements explicitly recited in that clause; elements recited in other clauses are not excluded from the claim as a whole.
Unless expressly stated or limited otherwise, the terms "mounted," "connected," and "secured," etc., are to be construed broadly, as for example: can be fixedly connected, can also be detachably connected or integrally connected; can be mechanically or electrically connected; they may be connected directly or indirectly through intervening media, or they may be interconnected between two elements. The specific meaning of the above terms herein can be understood by those of ordinary skill in the art as appropriate.
The terms "central," "longitudinal," "lateral," "length," "width," "thickness," "upper," "lower," "front," "rear," "left," "right," "vertical," "horizontal," "top," "bottom," "inner," "outer," "clockwise," "counterclockwise," and the like indicate orientations or positional relationships based on the orientations shown in the drawings, for ease and simplicity of description only; they do not indicate or imply that the device or element referred to must have a particular orientation or be constructed and operated in a particular orientation, and are therefore not to be construed as limiting.
The privacy-preserving federated learning method based on generative adversarial image transformation proposed by the present invention is described in detail below. Details not described in the embodiments of the invention belong to the prior art known to a person skilled in the art. Conditions not specifically mentioned in the examples follow the conventional conditions in the art or the conditions suggested by the manufacturer; reagents and instruments whose manufacturers are not specified are conventional commercially available products.
As shown in Fig. 1, a privacy-preserving federated learning method based on generative adversarial image transformation provided by an embodiment of the present invention includes the following steps:
and step 16, ending the privacy-preserving federated learning training process.
In step 11 of the method, the local user initializes the model parameters of the generative adversarial image transformation model and the local model from a Gaussian distribution.
In step 11 of the method, the network structure of the generative adversarial image transformation model consists of an encryption network and a discriminator network.
The encryption network comprises, connected in sequence: a first convolution layer with kernel size 4, a first batch normalization layer, a second convolution layer with kernel size 4, a second batch normalization layer, a third convolution layer with kernel size 4, a third batch normalization layer, a fourth convolution layer with kernel size 4, a fourth batch normalization layer, a first transposed convolution layer with kernel size 4, a fifth batch normalization layer, a second transposed convolution layer with kernel size 4, a sixth batch normalization layer, a third transposed convolution layer with kernel size 4, a seventh batch normalization layer, and a fourth transposed convolution layer with kernel size 4.
The activation functions after the first through seventh batch normalization layers are all ReLU functions, and the activation function after the fourth transposed convolution layer is a Tanh function.
The input of the first convolution layer of the encryption network is a 32 × 32 feature map with 3 channels, and the network has a single Tanh output after the fourth transposed convolution layer.
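As a sanity check on this encoder–decoder layout, the sketch below traces the feature-map side length through the four convolutions and four transposed convolutions. It is an illustration, not code from the patent: stride 2 and padding 1 are assumed for every kernel-4 layer, since the patent text fixes only the kernel size.

```python
def conv_out(size, kernel=4, stride=2, pad=1):
    # standard convolution output size: floor((n + 2p - k) / s) + 1
    return (size + 2 * pad - kernel) // stride + 1

def deconv_out(size, kernel=4, stride=2, pad=1):
    # transposed convolution output size: (n - 1) * s - 2p + k
    return (size - 1) * stride - 2 * pad + kernel

sides = [32]                      # 32 x 32 input with 3 channels
for _ in range(4):                # four kernel-4 convolution layers
    sides.append(conv_out(sides[-1]))
for _ in range(4):                # four kernel-4 transposed convolution layers
    sides.append(deconv_out(sides[-1]))
print(sides)  # [32, 16, 8, 4, 2, 4, 8, 16, 32]
```

Under these assumptions the encryption network is a symmetric autoencoder that returns a 32 × 32 output, consistent with the single Tanh output described above.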
The discriminator network comprises, connected in sequence: a first convolution layer with kernel size 4, a first batch normalization layer, a second convolution layer with kernel size 4, a second batch normalization layer, a third convolution layer with kernel size 4, a third batch normalization layer, and a fourth convolution layer with kernel size 4; a fully connected layer is additionally attached to the third batch normalization layer.
The activation functions after the first, second, and third batch normalization layers are all ReLU functions, and the activation function after the fourth convolution layer is a Sigmoid function.
The input of the first convolution layer of the discriminator network is a 32 × 32 feature map with 3 channels; the network has two outputs, a Sigmoid output after the fourth convolution layer and a fully connected output after the third batch normalization layer.
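The discriminator's two heads can be located with the same shape arithmetic (again an illustration assuming stride 2 and padding 1 for each kernel-4 convolution, which the patent does not specify):

```python
def conv_out(size, kernel=4, stride=2, pad=1):
    # convolution output size: floor((n + 2p - k) / s) + 1
    return (size + 2 * pad - kernel) // stride + 1

side = 32
per_layer = []
for _ in range(4):                # four kernel-4 convolution layers
    side = conv_out(side)
    per_layer.append(side)

fc_head_side = per_layer[2]       # FC head hangs off the third BN layer
sigmoid_head_side = per_layer[3]  # Sigmoid head follows the fourth conv
print(per_layer)                  # [16, 8, 4, 2]
```

So the Sigmoid (real/fake) head sees a 2 × 2 map while the class-label FC head reads the 4 × 4 features, matching the two-output description above.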
In step 11 of the method, the network structure of the local model comprises, connected in sequence: a first convolution layer with kernel size 4, a first batch normalization layer, a second convolution layer with kernel size 4, a second batch normalization layer, a third convolution layer with kernel size 4, a third batch normalization layer, and a fully connected layer.
The activation functions after the first, second, and third batch normalization layers are all ReLU functions.
The input of the first convolution layer of the local model is a 32 × 32 feature map with 3 channels, and the third batch normalization layer feeds a single fully connected output.
In step 12 of the method, each local user trains the generative adversarial image transformation model on its local data set in the following manner, then transforms the original images in the local data set into encrypted images using the trained model while keeping each encrypted image's category label:
Step 121: the local user collects the original images of its local data set into an original image set x, and applies blocking and block-scrambling operations to all images in the original image set to obtain a target image set.
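Step 121's blocking and block-scrambling can be sketched as follows; this is an illustrative implementation, with the 4 × 4 block size, the fixed permutation seed, and the function name `block_scramble` chosen for the example rather than taken from the patent:

```python
import numpy as np

def block_scramble(img, block=4, seed=0):
    """Split an H x W x C image into block x block tiles and permute the tiles."""
    h, w, _ = img.shape
    assert h % block == 0 and w % block == 0
    # cut the image into a grid of tiles, row-major
    tiles = [img[i:i + block, j:j + block]
             for i in range(0, h, block)
             for j in range(0, w, block)]
    order = np.random.default_rng(seed).permutation(len(tiles))
    cols = w // block
    out = np.empty_like(img)
    for idx, src in enumerate(order):
        r, c = divmod(idx, cols)
        out[r * block:(r + 1) * block, c * block:(c + 1) * block] = tiles[src]
    return out

img = np.arange(32 * 32 * 3, dtype=np.float32).reshape(32, 32, 3)
scrambled = block_scramble(img)
print(scrambled.shape)  # (32, 32, 3)
```

The scrambled image keeps exactly the original pixel values, only in permuted block positions, so the category label of the image is unchanged.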
Step 122: the local user defines the generative adversarial loss function L_adv in the generative adversarial image transformation model as

L_adv(G, D) = E_x̂[log D(x̂)] + E_x[log(1 − D(G(x)))]

where G denotes the encryption network, D the discriminator network, x an image in the original image set x, and x̂ the corresponding image in the target image set. In the generative adversarial loss function L_adv, the output of the discriminator network D is its Sigmoid output, and the transformed image is generated by passing the original image through the encryption network.
Step 123: the local user defines the classification loss function L_cls in the generative adversarial image transformation model as

L_cls = −(1/m) Σ_{i=1}^{m} Σ_{j=1}^{c} y_i(j) log ŷ_i(j)

where m denotes the number of original images in the original image set x; c denotes the number of original image categories in the original image set x; y denotes the set of true category labels of the original images in the local data set; ŷ denotes the set of predicted category labels obtained from the fully connected output of the discriminator network for the original images; y_i(j) denotes the true label indicating that the i-th transformed image belongs to the j-th category; and ŷ_i(j) denotes the predicted probability that the i-th transformed image belongs to the j-th category.
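The classification loss described above is an averaged cross-entropy over m images and c categories; a minimal numeric sketch (with toy one-hot labels and probabilities, not patent data):

```python
import numpy as np

def classification_loss(y_true, y_pred, eps=1e-12):
    """L_cls = -(1/m) * sum_i sum_j y_i(j) * log(yhat_i(j))."""
    m = y_true.shape[0]
    return float(-np.sum(y_true * np.log(y_pred + eps)) / m)

y_true = np.array([[1.0, 0.0, 0.0],
                   [0.0, 1.0, 0.0]])   # one-hot labels for m=2 images, c=3
y_pred = np.array([[0.8, 0.1, 0.1],
                   [0.2, 0.7, 0.1]])   # predicted class probabilities
print(round(classification_loss(y_true, y_pred), 4))  # 0.2899
```

The loss only reads the probability assigned to each image's true class, which is why a perfectly confident correct prediction drives it toward zero.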
Step 124: using the original image set x, the target image set, the true category label set y, and the predicted category label set ŷ, the local user alternately trains the encryption network and the discriminator network.
The two networks are trained alternately until the number of update rounds of the generative adversarial image transformation model reaches the preset threshold T; the trained encryption network then generates an encrypted image from each original image x, the encrypted image being the final state of the transformed image once training of the encryption network and the discriminator network is complete.
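The alternating schedule of step 124 reduces to a simple control loop; the sketch below uses placeholder update callables, since the patent does not publish the inner optimizer steps:

```python
def train_alternately(update_discriminator, update_encryptor, T):
    """Alternate one discriminator step and one encryption-network step per
    round, for T rounds (the patent's preset update-round threshold)."""
    history = []
    for _ in range(T):
        history.append(update_discriminator())
        history.append(update_encryptor())
    return history

# placeholder steps that just report which network was updated
log = train_alternately(lambda: "D", lambda: "G", T=3)
print(log)  # ['D', 'G', 'D', 'G', 'D', 'G']
```

After the loop completes, the (frozen) encryption network is what produces the encrypted images used in step 13.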
In step 13 of the method, each local user trains its local model on the encrypted images and the corresponding category labels in the following manner, completes the local model update, and uploads the encryption gradient of the trained local model to the server as the shared parameter:
Federated learning involves K local users C_k, 1 ≤ k ≤ K, each holding a local data set D_k containing n_k original images, and each performing the local model update of federated learning with the local objective

F_k(w_k) = (1/n_k) Σ_{i=1}^{n_k} L_cls(w_k; x̃_{k,i}, y_{k,i})

where w_k is the local model obtained by the k-th local user by downloading the global model parameters from the server; the local encrypted image set of the k-th local user consists of pairs (x̃_{k,i}, y_{k,i}) of encrypted images and true category labels; and L_cls is the classification loss function of the generative adversarial image transformation model. The encryption gradient ∇F_k(w_k) of the trained local model of the k-th local user is uploaded to the server as the shared parameter.
In step 14 of the method, the global model update is

w^(t+1) = w^(t) − η Σ_{k=1}^{K} (n_k / n) g_k^(t)

where w^(t) is the global model parameter at the t-th global communication; w^(t+1) is the global model parameter at the (t+1)-th global communication; η is the global model update learning rate; n is the total number of encrypted images over all local users; and g_k^(t) is the local model encryption gradient of the k-th local user at the t-th global communication.
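Reading the symbol definitions above, server aggregation is a data-size-weighted gradient step; the sketch below is a FedAvg-style illustration under that reading (the function name `aggregate` and the toy numbers are not from the patent):

```python
import numpy as np

def aggregate(w, grads, n_sizes, lr):
    """w_next = w - lr * sum_k (n_k / n) * g_k, with n = sum_k n_k."""
    n = sum(n_sizes)
    weighted = sum((n_k / n) * g for g, n_k in zip(grads, n_sizes))
    return w - lr * weighted

w = np.zeros(3)                                   # current global parameters
grads = [np.array([1.0, 1.0, 1.0]),               # user 1 gradient (n_1 = 10)
         np.array([3.0, 3.0, 3.0])]               # user 2 gradient (n_2 = 30)
w_next = aggregate(w, grads, n_sizes=[10, 30], lr=0.1)
print(w_next)  # [-0.25 -0.25 -0.25]
```

Weighting by n_k / n makes users with larger encrypted image sets contribute proportionally more to the global update.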
In step 15 of the method, at the (t+1)-th global communication with the server, all local users download the updated global model w^(t+1) from the server and verify its accuracy with their local encrypted image sets. When the global model accuracy verified by all users exceeds the preset threshold, step 16 is executed and the privacy-preserving federated learning training process is complete; otherwise, steps 13 and 14 are repeated.
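The stopping rule of steps 13–15 can be sketched as a loop over global rounds; `federated_rounds` and its callables are hypothetical placeholders for the patent's steps:

```python
def federated_rounds(local_round, accuracies, threshold, max_rounds=100):
    """Repeat local training + aggregation until every user's verification
    accuracy on its local encrypted set exceeds `threshold` (step 15)."""
    rounds = 0
    while rounds < max_rounds:
        local_round()          # steps 13-14: local updates + server aggregation
        rounds += 1
        if all(a > threshold for a in accuracies()):
            break              # step 16: training finished
    return rounds

# mock accuracies that improve over three rounds for two local users
accs = iter([[0.5, 0.6], [0.7, 0.8], [0.95, 0.96]])
done = federated_rounds(lambda: None, lambda: next(accs), threshold=0.9)
print(done)  # 3
```

Note that *all* users must pass the threshold before training stops, matching the verification condition described above.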
In summary, in the privacy-preserving federated learning method based on generative adversarial image transformation provided by the embodiment of the invention, a generative adversarial image transformation model composed of an encryption network and a discriminator network is constructed, the original images in the local data set are transformed into encrypted images used for federated learning training, and the encrypted images retain the training characteristics of the original images, achieving a good balance between the accuracy of the federated learning global model and the privacy protection of user data.
To show the proposed technical solutions and their technical effects more clearly, the privacy-preserving federated learning method based on generative adversarial image transformation provided by an embodiment of the present invention is described in detail below with a specific embodiment.
Example 1
The privacy-preserving federated learning method based on generative adversarial image transformation provided by the embodiment of the invention achieves a good balance between the accuracy of the federated learning global model and the privacy protection of user data. As shown in Fig. 1, the method mainly comprises the following steps:
For the generative adversarial image transformation model: the local user initializes the model parameters of the generative adversarial image transformation model and the local model from a Gaussian distribution. Table 1 shows the network structure of the generative adversarial image transformation model, in which the inputs of the encryption network and the discriminator network are color images of size 32 × 32.
Table 2 shows the network structure of the local model, whose input is also a color image of size 32 × 32. In Tables 1 and 2, Conv denotes a convolution operation with kernel size 4, Deconv a transposed convolution operation with kernel size 4, BN a batch normalization operation, and FC a fully connected layer; ReLU, Sigmoid, and Tanh are the usual activation functions in deep learning; m² × n denotes an m × m feature map with n channels; and the discriminator network has two outputs, Sigmoid and FC.
Table 1. Network structure of the generative adversarial image transformation model
A preferred embodiment of step 12 is as follows:
Step 121: the local user collects the original images of its local data set into an original image set x, and applies blocking and block-scrambling operations to all images in the original image set to obtain a target image set.
Step 122: the local user defines the generative adversarial loss function L_adv in the generative adversarial image transformation model as

L_adv(G, D) = E_x̂[log D(x̂)] + E_x[log(1 − D(G(x)))]

where G denotes the encryption network, D the discriminator network, x an image in the original image set x, and x̂ an image in the target image set. In the generative adversarial loss function L_adv, the output of the discriminator network D is its Sigmoid output. The original image is passed through the encryption network to generate the transformed image.
Step 123: the local user defines the classification loss function L_cls in the generative adversarial image transformation model as

L_cls = −(1/m) Σ_{i=1}^{m} Σ_{j=1}^{c} y_i(j) log ŷ_i(j)

where m denotes the number of images in the original image set x; c denotes the number of image classes in the original image set x; y denotes the set of true category labels of the original images in the local data set; ŷ denotes the set of predicted category labels obtained from the FC output of the discriminator network for the original images; y_i(j) denotes the true label indicating that the i-th transformed image belongs to the j-th class; and ŷ_i(j) denotes the predicted probability that the i-th transformed image belongs to the j-th class.
Step 124: using the original image set x, the target image set, the true category label set y, and the predicted category label set ŷ, the local user alternately trains the encryption network and the discriminator network.
The two networks are trained alternately until the number of update rounds of the generative adversarial image transformation model reaches the threshold T; the trained encryption network then generates an encrypted image from each original image x, the encrypted image being the final state of the transformed image once training of the encryption network and the discriminator network is complete.
In step 13, federated learning involves K local users C_k, 1 ≤ k ≤ K, each holding a local data set D_k containing n_k original images, and each performing the local model update of federated learning with the local objective

F_k(w_k) = (1/n_k) Σ_{i=1}^{n_k} L_cls(w_k; x̃_{k,i}, y_{k,i})

where w_k is the local model obtained by the k-th local user by downloading the global model parameters from the server; the local encrypted image set of the k-th local user consists of pairs (x̃_{k,i}, y_{k,i}) of encrypted images and true category labels; and L_cls is the classification loss function of the generative adversarial image transformation model. The encryption gradient ∇F_k(w_k) of the trained local model of the k-th local user is uploaded to the server as the shared parameter.
In step 14, the global model update is

w^(t+1) = w^(t) − η Σ_{k=1}^{K} (n_k / n) g_k^(t)

where w^(t) is the global model parameter at the t-th global communication, w^(t+1) is the global model parameter at the (t+1)-th global communication, η is the global model update learning rate, n is the total number of encrypted images over all local users, and g_k^(t) is the local model encryption gradient of the k-th local user at the t-th global communication.
In step 15, at the (t+1)-th global communication with the server, all local users download the updated global model w^(t+1) from the server and verify its accuracy with their local encrypted image sets. If the accuracy exceeds the preset threshold, step 16 is executed; otherwise, steps 13 and 14 are repeated.
Step 16: end the privacy-preserving federated learning training process.
In the privacy-preserving federated learning method based on generative adversarial image transformation, a generative adversarial image transformation model composed of an encryption network and a discriminator network is constructed, the original images in the local data set are transformed into encrypted images used for federated learning training, and the encrypted images retain the training characteristics of the original images, achieving a good balance between the accuracy of the federated learning global model and the privacy protection of user data.
To test the global model accuracy and privacy protection performance of the privacy-preserving federated learning method based on generative adversarial image transformation, it is compared with three other privacy-preserving federated learning methods, denoted A, B, and C: A is a privacy-preserving federated learning method based on block scrambling; B is based on pixel transformation and channel scrambling; and C is based on an automatic transformation strategy.
When testing global model accuracy and privacy protection performance, the data sets used are MNIST and CIFAR-10, and stochastic gradient descent is used as the basic model optimization algorithm. The number of local users is set to 5, the number of global communication rounds to 3, the number of local model update rounds to 10, the randomly sampled batch size to 64, and the learning rate to 0.01. To test privacy protection performance, the server attempts to steal local user data with a gradient-based reconstruction attack, and the privacy protection of each method is measured by the peak signal-to-noise ratio of the reconstructed image computed with respect to the original image.
Table 3. Global model accuracy of different privacy-preserving federated learning methods
Table 3 shows the global model accuracy of the different privacy-preserving federated learning methods: the proposed method based on generative adversarial image transformation obtains the highest global model accuracy. Fig. 2 shows the image privacy protection performance against a gradient-based reconstruction attack: the reconstructed images under the proposed method have the lowest peak signal-to-noise ratio, indicating that the method offers the highest privacy protection.
Those of ordinary skill in the art will understand that all or part of the processes in the methods of the embodiments may be implemented by a program stored in a computer-readable storage medium; when executed, the program performs the processes of the method embodiments. The storage medium may be a magnetic disk, an optical disk, a read-only memory (ROM), a random access memory (RAM), or the like.
The above description covers only preferred embodiments of the present invention, but the protection scope of the present invention is not limited thereto; any change or substitution that can easily be conceived by those skilled in the art within the technical scope of the present invention falls within the scope of the present invention. The protection scope of the present invention is therefore defined by the claims. The information disclosed in the background section is only intended to enhance understanding of the general background of the invention and should not be taken as an acknowledgement or any form of suggestion that it forms prior art already known to a person skilled in the art.
Claims (8)
1. A privacy-preserving federated learning method based on generative adversarial image transformation, characterized by comprising the following steps:
step 11, local users share the network structures of a generative adversarial image transformation model and a local model, and initialize the model parameters of both models;
step 12, a local user trains the generative adversarial image transformation model based on a local data set, and transforms the original images in the local data set into encrypted images using the trained generative adversarial image transformation model, while keeping the class label of each encrypted image consistent with the class label of the corresponding original image;
step 13, the local user trains the local model based on the encrypted images and the corresponding encrypted-image class labels to complete the local model update, and uploads the trained local model encryption gradient to the server as a sharing parameter;
step 14, the server aggregates the encryption gradients uploaded by the local users participating in the global model update to complete the global model update;
step 15, the local users communicate globally with the server, download the updated global model from the server, and verify with the local encrypted image set whether the accuracy of the global model is greater than a preset threshold; if so, step 16 is executed, otherwise step 13 and step 14 are repeated;
step 16, the privacy-preserving federated learning training process is finished.
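The overall flow of steps 11 through 16 can be sketched as a training loop (a schematic only; every helper function and the toy numeric accuracy check are hypothetical stand-ins, not the patent's implementation):

```python
# Schematic of the federated training loop of claim 1.
# All helpers below are hypothetical stand-ins for the patent's steps.

def train_transform_model(local_data):            # step 12: train G/D, encrypt images
    return [("enc(%s)" % x, y) for x, y in local_data]

def local_update(global_model, encrypted_data):   # step 13: returns encrypted gradient
    return {"grad": len(encrypted_data), "n": len(encrypted_data)}

def aggregate(global_model, gradients):           # step 14: server-side aggregation
    total = sum(g["n"] for g in gradients)
    return global_model + sum(g["grad"] * g["n"] / total for g in gradients)

def federated_training(local_datasets, threshold=3.0, max_rounds=10):
    global_model = 0.0                                              # step 11: init
    encrypted = [train_transform_model(d) for d in local_datasets]  # step 12
    for _ in range(max_rounds):                                     # steps 13-15
        grads = [local_update(global_model, e) for e in encrypted]
        global_model = aggregate(global_model, grads)
        if global_model > threshold:        # stand-in for the accuracy threshold test
            break
    return global_model                                             # step 16

model = federated_training([[("img", 0)] * 2, [("img", 1)] * 3])
```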
2. The privacy-preserving federated learning method based on generative adversarial image transformation as claimed in claim 1, wherein in step 11 the local user initializes the model parameters of the generative adversarial image transformation model and of the local model according to a Gaussian distribution.
3. The privacy-preserving federated learning method based on generative adversarial image transformation as claimed in claim 1 or 2, wherein in step 11 the network structure of the generative adversarial image transformation model comprises:
an encryption network and a discrimination network; wherein,
the encryption network comprises a first convolutional layer with a kernel size of 4, a first batch normalization layer, a second convolutional layer with a kernel size of 4, a second batch normalization layer, a third convolutional layer with a kernel size of 4, a third batch normalization layer, a fourth convolutional layer with a kernel size of 4, a fourth batch normalization layer, a first transposed convolutional layer with a kernel size of 4, a fifth batch normalization layer, a second transposed convolutional layer with a kernel size of 4, a sixth batch normalization layer, a third transposed convolutional layer with a kernel size of 4, a seventh batch normalization layer and a fourth transposed convolutional layer with a kernel size of 4, connected in sequence;
the activation functions after the first batch normalization layer, the second batch normalization layer, the third batch normalization layer, the fourth batch normalization layer, the fifth batch normalization layer, the sixth batch normalization layer and the seventh batch normalization layer are all ReLU functions;
the activation function after the fourth transposed convolutional layer is a Tanh function;
the input of the first convolutional layer of the encryption network is a 32×32 feature map with 3 channels, and the fourth transposed convolutional layer is followed by a single Tanh output;
the discrimination network comprises a first convolutional layer with a kernel size of 4, a first batch normalization layer, a second convolutional layer with a kernel size of 4, a second batch normalization layer, a third convolutional layer with a kernel size of 4, a third batch normalization layer and a fourth convolutional layer with a kernel size of 4, connected in sequence;
the third batch normalization layer is additionally connected to a fully connected layer;
the activation functions after the first batch normalization layer, the second batch normalization layer and the third batch normalization layer are all ReLU functions;
the activation function after the fourth convolutional layer is a Sigmoid function;
the input of the first convolutional layer of the discrimination network is a 32×32 feature map with 3 channels; the fourth convolutional layer is followed by the Sigmoid output, and the third batch normalization layer is followed by the fully connected layer output.
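The patent does not state strides or padding, but a 32×32 input passing through four down-sampling convolutions and four transposed convolutions is consistent with the common DCGAN-style choice of stride 2 and padding 1 for kernel size 4. Under that assumption (ours, not the claim's), the feature-map sizes through the encryption network can be checked with standard convolution arithmetic:

```python
def conv_out(n, k=4, s=2, p=1):
    """Spatial size after a convolution (floor division)."""
    return (n + 2 * p - k) // s + 1

def tconv_out(n, k=4, s=2, p=1):
    """Spatial size after a transposed convolution."""
    return (n - 1) * s - 2 * p + k

size = 32
trace = [size]
for _ in range(4):           # four down-sampling convolutions
    size = conv_out(size)
    trace.append(size)
for _ in range(4):           # four transposed convolutions back up
    size = tconv_out(size)
    trace.append(size)

print(trace)  # [32, 16, 8, 4, 2, 4, 8, 16, 32]
```

Under these assumptions the encrypted output has the same 3-channel 32×32 shape as the original image, so it can be fed unchanged to the local classifier.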
4. The privacy-preserving federated learning method based on generative adversarial image transformation as claimed in claim 1 or 2, wherein in step 11 the network structure of the local model comprises:
a first convolutional layer with a kernel size of 4, a first batch normalization layer, a second convolutional layer with a kernel size of 4, a second batch normalization layer, a third convolutional layer with a kernel size of 4, a third batch normalization layer and a fully connected layer, connected in sequence;
the activation functions after the first batch normalization layer, the second batch normalization layer and the third batch normalization layer are all ReLU functions;
the input of the first convolutional layer of the local model is a 32×32 feature map with 3 channels, and the third batch normalization layer is followed by a single fully connected layer output.
5. The privacy-preserving federated learning method based on generative adversarial image transformation as claimed in claim 1 or 2, wherein in step 12 the local user trains the generative adversarial image transformation model based on the local data set, and transforms the original images in the local data set into encrypted images using the trained model while keeping their class labels, in the following way:
step 121, the local user collects the original images in its own local data set into an original image set x, and performs blocking and block-scrambling operations on all images in the original image set to obtain a target image set x̃;
step 122, the local user defines the generative adversarial loss function L_adv(G, D) in the generative adversarial image transformation model as:

L_adv(G, D) = E_x̃[log D(x̃)] + E_x[log(1 − D(G(x)))]

wherein G denotes the encryption network, D denotes the discrimination network, x denotes an image in the original image set x, and x̃ denotes an image in the target image set x̃; in the generative adversarial loss function L_adv(G, D), the output of the discrimination network D is the Sigmoid output; the original image generates a transformed image G(x) through the encryption network;
step 123, the local user defines the classification loss function L_cls in the generative adversarial image transformation model as:

L_cls = −(1/m) Σ_{i=1}^{m} Σ_{j=1}^{c} y_i(j) log ŷ_i(j)

wherein m denotes the number of original images in the original image set x; c denotes the number of original image classes in the original image set x; y denotes the set of true class labels of the original images in the local data set; ŷ denotes the set of predicted class labels obtained from the fully connected layer output of the discrimination network; y_i(j) denotes the true label of the i-th transformed image for the j-th class; ŷ_i(j) denotes the predicted label of the i-th transformed image for the j-th class;
step 124, the local user alternately trains the encryption network and the discrimination network with the original image set x, the target image set x̃, the true class label set y and the predicted class label set ŷ, expressed as:

min_G max_D [L_adv(G, D) + L_cls]

wherein the encryption network and the discrimination network are trained alternately until the number of update rounds of the generative adversarial image transformation model reaches the threshold T; the original image x then generates the encrypted image x̂ = G(x) through the trained encryption network, the encrypted image being the final state of the transformed image after training of the encryption network and the discrimination network is complete.
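The blocking and block-scrambling operation of step 121 can be sketched with NumPy (a minimal sketch; the 8×8 block size and the fixed permutation seed are illustrative assumptions, not from the patent):

```python
import numpy as np

def block_scramble(img: np.ndarray, block: int = 8, seed: int = 0) -> np.ndarray:
    """Split an HxW(xC) image into block x block tiles and permute their positions."""
    h, w = img.shape[:2]
    assert h % block == 0 and w % block == 0, "image must tile evenly"
    gh, gw = h // block, w // block
    # Cut the image into a row-major list of tiles.
    tiles = [img[r*block:(r+1)*block, c*block:(c+1)*block]
             for r in range(gh) for c in range(gw)]
    perm = np.random.default_rng(seed).permutation(len(tiles))
    out = np.empty_like(img)
    for idx, tile in zip(perm, tiles):
        r, c = divmod(int(idx), gw)
        out[r*block:(r+1)*block, c*block:(c+1)*block] = tile
    return out

img = np.arange(32 * 32 * 3).reshape(32, 32, 3)
target = block_scramble(img)
# Scrambling rearranges pixels but keeps the pixel multiset intact.
assert np.array_equal(np.sort(img.ravel()), np.sort(target.ravel()))
```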
6. The privacy-preserving federated learning method based on generative adversarial image transformation as claimed in claim 1 or 2, wherein in step 13 the local user trains the local model based on the encrypted images and the corresponding encrypted-image class labels, completes the local model update, and uploads the trained local model encryption gradient to the server as a sharing parameter, in the following way:
K local users C_k, 1 ≤ k ≤ K, in total participate in the federated learning local model update, and each local user C_k owns a local data set D_k containing n_k original images, 1 ≤ k ≤ K; the local model update is expressed as:

min_{w_k} F_k(w_k) = (1/n_k) Σ_{i=1}^{n_k} L_cls(w_k; x̂_{k,i}, y_{k,i})

wherein w_k is the local model obtained by the k-th local user downloading the global model parameters from the server; D̂_k = {(x̂_{k,i}, y_{k,i})}_{i=1}^{n_k} is the local encrypted image set of the k-th local user, formed by the encrypted images x̂_{k,i} and the true class labels y_{k,i}; F_k(w_k) is the classification loss function L_cls of the generative adversarial image transformation model evaluated on the encrypted data; the local model encryption gradient g̃_k = ∇F_k(w_k) obtained after training by the k-th local user is used as the sharing parameter uploaded to the server.
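A hedged sketch of the local update in claim 6, using a softmax-regression stand-in for the convolutional local model (the linear model, random toy data and single gradient step are illustrative assumptions, not the patent's network):

```python
import numpy as np

def softmax(z):
    z = z - z.max(axis=1, keepdims=True)    # numerical stability
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

def local_loss_and_grad(w, x_enc, y_onehot):
    """F_k(w_k): mean cross-entropy over the encrypted set, and its gradient."""
    p = softmax(x_enc @ w)
    n = x_enc.shape[0]
    loss = -np.mean(np.sum(y_onehot * np.log(p), axis=1))
    grad = x_enc.T @ (p - y_onehot) / n     # dF_k/dw for softmax regression
    return loss, grad

rng = np.random.default_rng(1)
x_enc = rng.normal(size=(64, 3 * 32 * 32))        # flattened "encrypted" images
y = np.eye(10)[rng.integers(0, 10, size=64)]      # one-hot labels, 10 classes
w = np.zeros((3 * 32 * 32, 10))

loss0, g = local_loss_and_grad(w, x_enc, y)
w -= 0.01 * g                                     # one local SGD step, lr = 0.01
loss1, _ = local_loss_and_grad(w, x_enc, y)
assert loss1 < loss0                              # the step reduces F_k
```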
7. The privacy-preserving federated learning method as claimed in claim 1 or 2, wherein in step 14 the global model update is expressed as:

w^(t+1) = w^(t) − η Σ_{k=1}^{K} (n_k / n) g̃_k^(t)

wherein w^(t) is the global model parameter at the t-th global communication; w^(t+1) is the global model parameter at the (t+1)-th global communication; η is the global model update learning rate; n = Σ_{k=1}^{K} n_k is the total number of encrypted images of all local users; g̃_k^(t) is the local model encryption gradient of the k-th local user at the t-th global communication.
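The weighted aggregation of claim 7 can be checked numerically (a minimal sketch with made-up gradients; the values of w, η and the image counts n_k are illustrative):

```python
import numpy as np

def global_update(w, grads, counts, lr):
    """w^(t+1) = w^(t) - lr * sum_k (n_k / n) * g_k, as in claim 7."""
    n = sum(counts)
    weighted = sum(nk / n * g for nk, g in zip(counts, grads))
    return w - lr * weighted

w = np.array([1.0, 1.0])
grads = [np.array([1.0, 0.0]), np.array([2.0, 4.0])]   # from two local users
counts = [1, 3]                                        # n_1 = 1, n_2 = 3 images
w_new = global_update(w, grads, counts, lr=0.1)
# weighted gradient = 0.25*[1,0] + 0.75*[2,4] = [1.75, 3.0]
assert np.allclose(w_new, [0.825, 0.7])
```

Users holding more encrypted images contribute proportionally more to the update, matching the n_k/n weighting in the claim.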
8. The privacy-preserving federated learning method based on generative adversarial image transformation as claimed in claim 1 or 2, wherein in step 15, at the (t+1)-th global communication between the local users and the server, all local users download the updated global model w^(t+1) from the server, and all users verify the accuracy of the global model w^(t+1) with their local encrypted image sets; when the global model accuracy obtained by the verification of all users is greater than the preset threshold, step 16 is executed, otherwise step 13 and step 14 are repeated.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202211179790.5A CN115563631A (en) | 2022-09-27 | 2022-09-27 | Privacy protection federal learning method based on generation countermeasure image transformation |
Publications (1)
Publication Number | Publication Date |
---|---|
CN115563631A true CN115563631A (en) | 2023-01-03 |
Family
ID=84743882
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202211179790.5A Pending CN115563631A (en) | 2022-09-27 | 2022-09-27 | Privacy protection federal learning method based on generation countermeasure image transformation |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN115563631A (en) |
Cited By (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN116451276A (en) * | 2023-06-15 | 2023-07-18 | 杭州海康威视数字技术股份有限公司 | Image processing method, device, equipment and system |
CN116451276B (en) * | 2023-06-15 | 2023-09-26 | 杭州海康威视数字技术股份有限公司 | Image processing method, device, equipment and system |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||